

Subject: RE: Fwd: I-D ACTION:draft-atlas-ip-local-protect-uturn-01.txt
From: "Naidu, Venkata"
Date: Thu, 28 Oct 2004 10:50:53 -0400
-> Alia, we have talked briefly at the IETF a couple of times. I am
-> interested in the IP fast reroute concept and hope that we can use it
-> in our network some time in the future. I have a concern, however. It
-> is regarding the complexity of designing and keeping the topology so
-> that U-turn or, even better, the loop-free approach gives 100%
-> coverage. Is my feeling correct that you need to keep a very dense
-> topology to reach the 100% goal?

  Good question. 100% coverage is specific to the number of
  failures. An algorithm/approach designed to give 100%
  coverage for 1-failure may not give 100% coverage for
  2-failures (simultaneous failures), even if the same
  topology is sufficient to cover 2-failures.

  For example, a topology with 1-failure 100% coverage on any
  V vertices needs at least V edges (every node must have
  degree at least 2). V-1 edges are enough for a connected
  graph (a tree), but a tree cannot survive any single
  failure. So E is still O(V), which is sparse; a sparse
  topology is sufficient for 1-failure 100% coverage. In the
  real world, these types of topologies are very common: look
  at token ring, SONET rings, etc. All these ring topologies
  cover 1-failure very well, because the graph is still
  connected if a single node/link in the ring fails.

  Coming to a k-failure (simultaneous) 100% coverage topology,
  the graph becomes denser and denser. Moreover, such graphs
  should have very particular properties, such as t-spanners.

  It is a very interesting research topic to find out when
  the topology goes from sparse to dense (i.e., from E = O(V)
  to E = O(V^2)). As the number of failures increases by one,
  the number of edges required increases rapidly. So 2-failure
  100% coverage would need a dense graph, IMO.

Venkata.

_______________________________________________
Rtgwg mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/rtgwg
