
Re: Next Steps for IPFRR

Subject: Re: Next Steps for IPFRR
From: Russ White
Date: Tue, 21 Mar 2006 09:11:48 -0500

> Taking this to the list: when we approached the motivation for basic
> IPFRR and Loop-Free Alternates, the gains were clear. We went from
> possibly doing things in the wrong order to a logical approach and
> improvement. As we evaluate the more complex IPFRR mechanisms, I am
> wondering how we should go about comparing the techniques. I spoke
> with a few people, and I think many struggle with comparing the
> alternatives.

I think we should really focus on two areas:

-- What is the worst case? If we have to sacrifice some coverage to
have a reasonable worst case, then it seems a logical tradeoff to take.
If we can get full coverage with a worst case bounded so it's not more
than x worse than "normal" convergence (for some value of x, possibly
1), then that seems like a reasonable goal to shoot for. While we all
like to think of the "normal" case, in reality, the pathological case
is the one that really hits us in the real world.
-- What is the predictability/consistency of the algorithm? If the
algorithm, in a given network, acts optimally in 8 out of 10
repetitions of the same failure, and pathologically in the other 2,
that probably isn't a good thing. :-) Random modality based on race
conditions is, in general, a bad thing for successful network design.
Even modality in the sense of "if you have this type of failure, your
convergence time is bounded at x, while if you have that type of
failure, your convergence time is bounded at y" is really, really bad
for network design. If I have to think about how to prevent failure
type A from occurring while also trying to aggregate traffic and
address space, design flooding domain boundaries, think about
security, and.... The more we pile onto the design task by adding
modality to the convergence characteristics, the closer we come to
making successful network design impossible to achieve. (There's a
rough sketch below of how one might measure both of these.)
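
Something like this rough sketch is what I have in mind -- Python, with
completely made-up names, thresholds, and trial data, just to
illustrate checking both criteria from repeated trials of the same
failure:

# Illustrative only -- hypothetical thresholds and trial data, not any
# agreed-upon benchmark. Given per-trial convergence times for repeated
# runs of the same failure, check (1) a bounded worst case relative to
# "normal" convergence and (2) consistency across repetitions (no
# "8 runs fast, 2 runs pathological" modality).
from statistics import mean, pstdev

def evaluate_repair(trial_times_ms, normal_ms, x=1.0, spread_tol=0.2):
    """Return (worst_case_ok, consistent) for one repeated failure."""
    worst = max(trial_times_ms)
    # One reading of "not more than x worse than normal convergence":
    # worst case <= x * normal convergence time.
    worst_case_ok = worst <= x * normal_ms
    # Flag modality: the relative spread across repetitions of the same
    # failure should stay small.
    avg = mean(trial_times_ms)
    consistent = pstdev(trial_times_ms) <= spread_tol * avg
    return worst_case_ok, consistent

# Ten hypothetical repetitions of the same link failure: eight fast
# repairs, two pathological ones.
trials = [42, 40, 45, 41, 43, 39, 44, 40, 250, 245]
print(evaluate_repair(trials, normal_ms=200))   # -> (False, False)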

> For single failures, much of our literature refers to two
> measurements:
>
> 1) % of changed paths covered
> 2) % of traffic covered (we often assume a balanced traffic load and
>    equate this to point 1).
And we should know that #2 isn't the same as #1.
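
To make the distinction concrete, here's a toy example -- entirely
hypothetical prefixes and traffic shares -- where the two numbers
diverge badly once the load is skewed:

# Toy numbers only -- hypothetical prefixes and traffic shares --
# showing why "% of changed paths covered" and "% of traffic covered"
# diverge when the traffic load isn't balanced.
failed_paths = {
    # destination prefix -> (has a usable alternate?, share of traffic)
    "10.0.1.0/24": (True, 0.05),
    "10.0.2.0/24": (True, 0.05),
    "10.0.3.0/24": (True, 0.10),
    "10.0.4.0/24": (False, 0.80),  # the one uncovered path carries
                                   # most of the traffic
}

path_coverage = (sum(cov for cov, _ in failed_paths.values())
                 / len(failed_paths))
traffic_coverage = sum(share for cov, share in failed_paths.values()
                       if cov)

print(f"paths covered:   {path_coverage:.0%}")    # 75%
print(f"traffic covered: {traffic_coverage:.0%}") # 20%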

> A suggestion would be to look at the goodput of traffic (the sum of
> delivered packets across the network at instants in time). I suspect
> that in many network scenarios we could see diminishing gains. For
> example, as we reach 100% coverage for single failures, can we not
> still have packets dropped due to momentary congestion (since packets
> on alternate paths now take longer, less efficient paths)?
>
> How do we formalize this (or other) measurements so we can evaluate
> the techniques? In my opinion, a criterion like this is necessary
> before we can evaluate next steps.
I think "goodput" is related to factors outside the control of the
routing protocol, to some degree, so it's almost impossible to
accurately "know" what it's going to be. However, if we minimize
modality, we increase the chances of achieving good goodput in the
real world (a lost, and often forlorn, concept in our simulation-driven
world).
:-)
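
For what it's worth, here's a minimal sketch of the "sum of delivered
packets at instants in time" measurement suggested above -- the
counters and intervals are made up, and in practice they'd have to come
from per-interface or per-flow statistics:

# Minimal sketch of sampling network-wide delivered packets per
# measurement interval and comparing against offered load. Hypothetical
# numbers only.
def goodput_series(delivered_per_interval, offered_per_interval):
    """Yield (delivered, fraction of offered) per measurement interval."""
    for delivered, offered in zip(delivered_per_interval,
                                  offered_per_interval):
        yield delivered, (delivered / offered if offered else 1.0)

# Hypothetical counters around a failure: the repair keeps most traffic
# flowing, but longer alternate paths cause some congestion loss for a
# few intervals.
offered   = [1000, 1000, 1000, 1000, 1000]
delivered = [1000,  940,  910,  970, 1000]

for t, (pkts, frac) in enumerate(goodput_series(delivered, offered)):
    print(f"t={t}: delivered={pkts} ({frac:.1%} of offered)")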

Russ

_______________________________________________
Rtgwg mailing list
rtgwg@ietf.org
https://www1.ietf.org/mailman/listinfo/rtgwg
