Alex Zinin wrote:
Good. Let's think about this a little more before jumping on it.
Meanwhile, one thought I had while reading this thread: Stewart, you
suggested that the max delay is found during the SPF calculation. It seems
(or I hope ;) you didn't mean finding destination-specific max values, but
instead merely using SPF's graph exploration property--visiting all
reachable nodes along the way.
While this would work algorithmically, I would instead prefer finding the
max among all available announcements, whether the corresponding nodes are
momentarily reachable or not. This would be more robust, easier to debug,
and avoid extra fluctuations when the network topology changes. If any of
the routers becomes unreachable for an extended period of time, its
LSAs/LSPs would eventually age out and hence be taken out of consideration.
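The "max over all available announcements" idea can be sketched roughly as
follows. This is a minimal illustration only, not real OSPF/IS-IS code: the
LSDB record layout, the delay field, and the MaxAge constant are all
assumptions made for the example. The point is that reachability is ignored
and only LSP aging removes a value from consideration.

```python
# Hypothetical sketch: take the max delay over every live entry in the
# link-state database, whether the originating router is reachable or not.
# Entries drop out only when their age exceeds a MaxAge-style lifetime.

MAX_AGE = 3600  # seconds; illustrative MaxAge-style LSP lifetime

def max_delay_from_database(lsdb, now):
    """Return the largest advertised delay among all non-aged-out LSPs."""
    live = [lsp for lsp in lsdb if now - lsp["originated"] < MAX_AGE]
    return max((lsp["delay"] for lsp in live), default=0)

lsdb = [
    {"router": "A", "delay": 2, "originated": 100},
    {"router": "B", "delay": 7, "originated": 200},  # unreachable, still counts
    {"router": "C", "delay": 5, "originated": 50},
]
print(max_delay_from_database(lsdb, now=500))  # -> 7
```

Note that router B contributes its value even if it is momentarily
unreachable; only aging past MAX_AGE removes it.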
In the general case we have to assume that every router has been configured
with a different value for the time that its operator thinks it will need.
So the problem is how to make sure that the network has a consistent view
of the max value.
My thought was that since the set of LSPs that were used to calculate the
topology had to be consistent over the network (or the network would be
unstable), then if the delay parameter was embedded in the LSPs it would
be consistent amongst the routers that were converging.
By using the value extracted from the current topology info, you avoid
a whole load of messy protocol machinery associated with routers joining,
leaving, or wanting to change the value - it just happens automatically.
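The alternative described above, pulling the max out of the same LSP set
that the SPF run is consuming, can be sketched like this. The graph and
delay structures are hypothetical stand-ins for the converged LSP database;
the delay would in practice be a field carried in each node's LSP.

```python
import heapq

# Illustrative sketch: run Dijkstra's SPF and, as each node is visited,
# fold in the delay value embedded in that node's LSP. Every router
# converging on the same consistent LSP set computes the same max.

def spf_max_delay(graph, delays, root):
    """Dijkstra over `graph`; return (distances, max delay among visited nodes)."""
    dist = {root: 0}
    max_delay = delays[root]
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        max_delay = max(max_delay, delays[u])
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist, max_delay

graph = {"A": [("B", 1)], "B": [("C", 2)], "C": []}
delays = {"A": 2, "B": 9, "C": 4}
dist, md = spf_max_delay(graph, delays, "A")
print(md)  # -> 9
```

A router absent from the current topology never gets visited, so its value
drops out of the max on the same SPF run in which it leaves the topology.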
By using historical values - as you seem to propose - you need to consider
how you would synchronize the addition or removal of routers from the
max-time algorithm. I am proposing that this happen automatically as part
of the convergence calculation, whilst your approach needs to consider the
quasi-asynchronous removal of a router that holds the net max value just
as the topology changes. Extracting the info from the LSPs makes such
actions a synchronous event.
If I look at your second para:
> While this would work algorithmically, I would instead prefer finding the
> max among all available announcements, whether the corresponding nodes are
> momentarily reachable or not.
> This would be more robust,
What can be more robust than extracting the parameter from the set of
topology info that EVERYONE is using for THIS transition?
> easier to debug,
Not convinced: you know which routers are in the net, so you know what the
time should be. The other way you get strange bugs due to the
asynchronous aging of LSPs.
> and avoid extra fluctuations when the network topology changes.
I am not sure how much of a problem this is in practice.
> If any of
> the routers becomes unreachable for an extended period of time, its
> LSAs/LSPs would eventually age out and hence be taken out of consideration.
In the past this was a garbage collection activity with no topology
significance, and did not need to be synchronized. We are now making
it a topology significant event that needs to be synchronized so that
max-time is synchronously removed across the network.
Rtgwg mailing list