Saturday, April 27, 2013

RSTP/MSTP Part - IV SVLAN Demarcation of services and RSTP MESH topology


Tuesday, 02:00 AM at night.

A call suddenly comes in from someone.

"Sir, I had an RSTP ring and I created a new traffic link on the same ring, so that another service flow could be created for a different customer. However, the moment I created the link, my existing service went down."

Prima facie, when a transport guy hears such a call, it sounds very weird. How can a different service go down when you have merely created a separate link on your topology? For anyone dealing with transport, and especially anyone doing transport operations, such a statement is strange.

However, let us not forget that Provider Bridge has some traits that differ from conventional transport. They are not out of this world, but as I discussed in my earlier blog post, in TDM a trail or a cross connect IS the service element, whereas in Provider Bridge the trail or cross connect is NOT the service element; it is only the INFRASTRUCTURE. The actual service element is the VPN. So the isolation of services is not done by means of trails but by means of the SVLAN.

Now let us understand what this guy had done. We go back to the earlier figure, with a service between A and B already created.


There is already traffic flowing from Point A to Point B, which is located at Client-2. Now one more customer, a different one, is taking a handoff at Client-3.

So what was the first step taken by the transport guy?


Now just see what happens when he does this. He feels he will put this new customer on a separate link between the Root and Client-3, forgetting that the RSTP topology has already converged and the service of Customer-1 is already running on it. Ignoring all this, he creates another trail in the topology, and this is what happens to the RSTP after the new link comes up.

Now RSTP is bound by its inherent rules: for n paths to the root bridge, n-1 paths will be in the blocking state. So the blocking paths are re-calculated. You now have the path between Client-1 and Client-2 blocking and the path between Client-3 and Client-4 blocking, while the new link from the Root to Client-3 is forwarding.

RSTP has done its job, but what about the service? The VPNs were built round the ring and are still there. So the traffic from Point A to Point B, following the VPNs, now encounters two blocking paths. This is shown in the figure below.

This is making the whole of Service-1 go down, and that is why my friend called me.
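For those who like to see it mechanically, here is a minimal Python sketch (node and link names are hypothetical) of why Service-1 dies: the VPN's ring path is fixed, while RSTP has moved the blocking state onto two of its segments.

```python
# Hypothetical link states after RSTP re-converges on the new topology.
# A VPN is just an ordered set of links; if any one is blocking, traffic stops.
link_state = {
    ("Root", "Client1"): "forwarding",
    ("Client1", "Client2"): "blocking",   # newly blocked after re-convergence
    ("Client2", "Client3"): "forwarding",
    ("Client3", "Client4"): "blocking",   # newly blocked after re-convergence
    ("Client4", "Root"): "forwarding",
    ("Root", "Client3"): "forwarding",    # the brand-new direct link
}

def service_up(vpn_path):
    """A service is up only if every link it rides is forwarding."""
    return all(link_state[link] == "forwarding" for link in vpn_path)

# Service-1 (A to B) was built round the ring and never included the new link:
service1 = [("Root", "Client1"), ("Client1", "Client2")]
print(service_up(service1))  # False: the ring path now crosses a blocked link
```

The new Root-to-Client-3 link forwards happily, but no VPN of Service-1 rides on it, so Service-1 is down.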

So is this a fault????     NO

Is this a configuration ERROR????? YES

So what would have been the ideal way to configure Service-2 in this case, without hampering any traffic?

For this we should go back to the previous posts, where service demarcation is done by means of the SVLAN.


As you can see in this figure, you have different VPNs for different services in the same topology, separated by separate SVLANs. The SVLAN, or Service VLAN, separates the services in such a way that there is no cross talk between them, even though they share the same infrastructure.

Each service can be rate limited to its BW cap by means of policy, and then both VPN services can be delivered on the same infrastructure. In this way multiple services can be delivered.

This results in proper ring optimization and good control of the traffic.
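As a rough illustration (Python; the SVLAN IDs and bandwidth caps are invented), this is the behaviour being described: one shared infrastructure, with the SVLAN deciding which service a frame belongs to and the policer capping each service to its own BW:

```python
# Hypothetical sketch: two services share one ring infrastructure but are kept
# apart by their SVLAN tag, each policed to its own bandwidth cap (Mb/s).
services = {
    100: {"name": "Customer-1 (A-B)", "cap_mbps": 50},
    200: {"name": "Customer-2 (X-Y)", "cap_mbps": 30},
}

def forward(frame_svlan, offered_mbps):
    """Map a frame to its service by SVLAN and police it to the service cap."""
    svc = services.get(frame_svlan)
    if svc is None:
        return "drop: unknown SVLAN"           # no cross-talk into other services
    return min(offered_mbps, svc["cap_mbps"])  # rate-limited to the BW cap

print(forward(100, 80))  # 50: Customer-1 capped at its own 50 Mb/s
print(forward(200, 10))  # 10: Customer-2 unaffected by Customer-1's load
```

One customer bursting beyond its cap never eats into the other customer's service, which is exactly the demarcation the SVLAN provides.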


But does that mean that you cannot have Multiple topologies in the RSTP configuration?

Does RSTP always have to be in the form of a ring?

The answer is NO...

RULE NO 5: YOU CAN SIMULATE MULTIPLE ALTERNATE LINES OF DELIVERY WITH THE RSTP HOWEVER CARE HAS TO BE TAKEN THAT THE SERVICE IS ALSO ACCORDINGLY MAPPED.

So let us see how the service mapping of Service-1 between A and B should look in the topology when another link is present between the Root and Client-3, considering the same topology convergence as before.


If we look at this figure and consider the VPN creation for Service-1, which is between A and B, the changes are at the Root and at Client-3.

At the Root, where earlier only two WAN ports were added, you now add all three WAN ports.

And at Client-3, instead of making a transit VPN with two WAN ports, we now make a transit VPN with three WAN ports.

This also results in double failure protection, as we now have two alternate paths instead of one. The most important thing, however, is that the service has to be in accordance with the topology.

The same rules and infrastructure apply for the service from X to Y as well.

The main thing to remember is that RSTP is an algorithm that depends on the topology, so if RSTP is chosen then the services have to be traced out according to that topology.

It is just like a linear equation: if you have two variables and only one reference equation, you will not be able to solve it. Hence you need to keep one part variable and the other part fixed in reference to it.

In the event you are using RSTP:

1. The RSTP algorithm takes care of the topology transitions in the case of failure. (This is the variable, varying part of your network.)

2. The service should include all the WAN ports that are part of the RSTP topology. (This should be the constant part.)

In any dynamic algorithm that relies on topology transitions (OSPF/ERP/IS-IS/BGP), the service must always be built so that the routing has all the WAN ports available as possibilities.
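The "constant part" of point 2 can be expressed as a simple audit (a Python sketch with hypothetical node and port names): compare the WAN ports in the RSTP topology against the WAN ports in the service VPN, and flag anything missing:

```python
# Hypothetical sketch of Rule 5's check: every WAN port in the RSTP topology
# must also be a member of the service VPN, otherwise some convergence state
# leaves the service stranded on a path it does not ride.
topology_ports = {"Root": {"w1", "w2", "w3"}, "Client3": {"w1", "w2", "w3"}}

service_vpn = {"Root": {"w1", "w2"}, "Client3": {"w1", "w2"}}  # mis-planned

def missing_ports(topology, service):
    """WAN ports present in the topology but absent from the service VPN."""
    return {node: topology[node] - service.get(node, set())
            for node in topology
            if topology[node] - service.get(node, set())}

print(missing_ports(topology_ports, service_vpn))
# {'Root': {'w3'}, 'Client3': {'w3'}}: these ports must be added to the VPN
```

An empty result means the service already covers every path the topology may converge onto.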

So is the case in RSTP.

My advice to my transmission Fraternity:

1. Once you make the service as per the topology, things work perfectly.
2. Do not create new topologies for new services; you would only be unbalancing the equation.
3. If you do need to provide alternate paths, make sure the ports are also mapped accordingly in the service path.

 HAVE A SUPERB WEEKEND AHEAD.........

Cheers....

Kalyan

Coming up next:

How does Switching take place in RSTP 

Thursday, April 25, 2013

RSTP/MSTP Part-III How to create Services in the RSTP Topology

My Dear Friends of the Transmission Fraternity,

The last two posts about RSTP dealt with how RSTP functions: how convergence happens, the Root Bridge, and so on. Those properties and configurations all related to the creation of an RSTP domain. What we have not yet understood, however, is how services are created in this domain and how traffic actually flows inside the RSTP ring.

Knowing how to create the RSTP ring is just the start of the process. Data planning actually requires three major planning aspects, especially in scenarios like RSTP.

1. Topology planning (Covered in Parts I and II)
2. Service Planning and Alternate route Planning ( Is being covered in this section)
3. QoS planning and Service prioritization. ( Will be covered later as we learn about QoS).

RULE No 4: THE CREATION OF SERVICE IN A RSTP NETWORK SHOULD TAKE CARE OF THE ENTIRE RING CONVERGENCE:

Remember one thing here: RSTP is a property closely associated with a topology. How RSTP behaves is decided by the topology and not by the service. The service has to be planned on the ring taking its cue from how RSTP is actually behaving.

To understand the creation of service let us first take the case of the topology of RSTP that we are taking into consideration for these services.



As you can see in the picture, there is a well defined RSTP topology, and we want to send traffic from Point A to Point B in a way that satisfies the following:

1. The traffic flows properly from A to B.
2. Whenever there is a ring failure in the active section, the path switches and the traffic is properly re-routed without any kind of loops.
3. At no point of time should there be flooding in the network.

While this kind of traffic is being provisioned, the following panic points are raised by people who have just shifted from transmission to these kinds of configurations.

PANIC No 1: My traffic is not flowing in spite of the fact "I HAVE CREATED TRAILS and VPN"

Hmm!!!!! Let us see what exactly happens in a case like this. The customer, typically a TX guy, calls up and says: I have created "RSTP trails" and still traffic is not passing. What do we actually see in a scenario like this?

The VPNs are created in the following fashion in the L2 modules.
As you can see in this figure, a VPN has been created, but only at Point A and Point B. The Client-1 bridge is an intermediate node which has no VPN at all. The traffic comes from Point A, enters the link between the Root and Client-1 and then enters Client-1. However, since Client-1 has no VPN created, the packet drops right there.

Hence there is no end-to-end traffic.

The preconceived notion of the TX engineer is: "I have already created the RSTP ring, so why should there be any creation of VPNs?".... HA HA HA.

Actually, what needs to be understood is this: while RSTP takes care of your topology convergence, the switching and routing of the traffic is taken care of by the VPN. Without VPN routing at every point, there can be no flow of traffic.

Just like you need pass-through cross connects at intermediate points, you also need transit VPNs at the intermediate switching points.
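A tiny Python sketch of this point (node names are hypothetical): the packet survives end to end only if every node on the path, intermediate ones included, has a VPN instance:

```python
# Hypothetical sketch: traffic only reaches B if every node on the path has a
# VPN instance (drop VPN at the ends, transit VPN at intermediates), just as a
# TDM trail needs pass-through cross connects at every intermediate NE.
vpn_at = {"A": "drop", "Client1": None, "B": "drop"}   # transit VPN forgotten

def end_to_end(path):
    """True only when no node on the path is missing its VPN instance."""
    return all(vpn_at.get(node) is not None for node in path)

print(end_to_end(["A", "Client1", "B"]))   # False: packet dies at Client1
vpn_at["Client1"] = "transit"              # the fix: add the transit VPN
print(end_to_end(["A", "Client1", "B"]))   # True
```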

Now after this rectification is done, there is one more problem that the TX engineer eventually faces. They come out with the following panic statement.

PANIC No 2: "My traffic flows fine, but it is not switching on the removal of a link"

Hmm, serious problems raised by the TX NOC to TX support, and most of the time this is what has happened.

Please see the figure below to understand how the VPNs are created.


No prize for guessing what the mistake in this configuration is. No doubt, under no-failure conditions the traffic will flow fine from Point A to Point B via Client-1, but when the link between the Root bridge and Client-1 fails, there will be a traffic drop.

Most transmission engineers think that at this point RSTP should take its course..... Well my dear friends, remember what I told you before: RSTP is a loop avoidance mechanism, not the protection itself. RSTP can catalyze a faster protection response, but it cannot be the protection. Look at it this way: in an SNCP kind of scenario, if you don't provision the other side of the path, will protection take place????? The answer is no... So there is an inevitable requirement to route the VPN round the ring.

First of all, let us understand how the problem occurs in this case, with the help of a series of pictures.


As seen in this figure, there is a failure in the link between the Root bridge and Client-1.


Now the blocking link comes to forwarding; that is to say, RSTP convergence has happened.

However, since there is no VPN created on the other path, the traffic is not reaching the other point through that path.


So now it is clear that the VPN has to be replicated round the full ring for the services to run properly. The figure below shows how services should be done in the RSTP network.


The figure above shows the real way to create the service in the RSTP ring. The Ethernet service is routed both ways round the ring, just like an SNCP trail which has dual PTPs at the drop locations and pass-throughs at every location, main or alternate.

In such a case, loop avoidance is done with the help of the blocking link. Initially, when there is a broadcast, the traffic flows through both sides, but the blocking link prevents replication and flooding of the traffic.


Some points to remember about the service kinds:

1. The service is always an E-LAN instance.
2. MAC learning is always enabled, as the drop points have to decide between two or more paths based on MAC learning of the destination equipment.
3. The intermediate services that are pure pass-through can be done with MAC learning disabled.
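Point 2 can be sketched as follows (Python; the port names are hypothetical): the drop point learns source MACs per port, floods while the destination is unknown, and forwards one way once it is learnt. The blocking link is what keeps that initial flood from looping:

```python
# Hypothetical sketch of MAC learning at a drop point: learn the source MAC on
# ingress, flood while the destination is unknown, forward once it is learnt.
mac_table = {}

def handle(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                       # learn where src lives
    out = mac_table.get(dst_mac)
    if out is None:                                    # unknown destination:
        return [p for p in all_ports if p != in_port]  # flood (blocking link
    return [out]                                       # stops the loop)

ports = ["main", "alternate", "uni"]
print(handle("A", "B", "uni", ports))   # ['main', 'alternate']: flood first
print(handle("B", "A", "main", ports))  # ['uni']: A was learnt on frame one
print(handle("A", "B", "uni", ports))   # ['main']: now B is learnt as well
```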


Coming up Next::::

RSTP in MESH.... And service kinds..... 

Monday, April 15, 2013

RSTP / MSTP PART-II Selecting the Optimal Blocking port

Dear Friends of my transmission fraternity,

In the previous blog post I gave the first two rules for provisioning when RSTP is used as the major loop avoidance mechanism. However, I probably forgot to mention one very important thing in that post.

I said "RSTP is not a protection mechanism, it is more for loop avoidance". So how is protection achieved in a Layer-2 circuit?

The answer: a Layer-2 circuit is created when a VPN is formed, and a VPN is nothing but a kind of cross connect like the one you have in SDH. However, we cannot directly correlate a VPN with a cross connect, because in a cross connect the PTPs and CTPs are hard wired, whereas in a VPN the process is different. A VPN may have two or more connectivity points, bound together by something called a vFIB, or Virtual Forwarding Information Base. The frame's egress port is decided based on MAC table learning, and if no decision can be made, the frame is flooded. So a VPN has two modes: flooding mode (when addresses are not known) and forwarding mode (when addresses are known).

So if you already have more than one egress point in a VPN, be assured that protection is already built in: the VPN first follows the MAC table, and if a path fails, the traffic takes the alternate path via flooding. The basic problem, however, arises when more than one path is active: in the flooding state the traffic may be flooded to all ports, and a loop condition may result.

It is to avoid this loop and streamline the protection that RSTP is used.

So when there is a failure the blocking path is unblocked and the traffic is seamlessly switched to the alternate path.

There are also mechanisms like MAC flushing involved, which push the traffic onto the alternate path; this will be covered later.

Rule No: 3  Selecting the optimum blocking path for the RSTP topology

Continuing with the set of rules, let us understand that RSTP selects the blocking path by means of:

a) Path cost.
b) Port priority.
c) Designated port number (ref. the Wikipedia article on RSTP).

While a novice would let the protocol do this activity and select the blocking port by itself, an expert planner would exploit this mechanism and influence the selection of the blocking port. Remember: while network planning is done for maximum reliability, it also involves the consideration that when both the active and the contingency paths are available, there should be stress on effective utilization of the dual bandwidth.

That is to say, when the ring is intact we can have dual BW utilization.

Attached is a picture of a mis-planned RSTP ring with respect to path cost and blocking link.


Now what is really wrong in this setup?

First of all, the Root Bridge is not planned, which voids all the sanctity of Rule 1. Secondly, the traffic is from hub to spokes, and if the right-side link is always blocking, then even in the ideal, intact-ring condition you will only ever have 100 Mb/s of traffic flowing, with no gain from the second side of the ring.

The picture below will explain it.


So as you can see, when the ring is intact, the maximum possible BW interchange between the hub and the spokes is only 100 Mb/s. In data planning terms this is very novice planning. An efficient planner will never do this.

So what is the right way to go ahead? How can we ensure a more controlled instance of RSTP, to our benefit?

So friends, let us understand the nuances of placing the blocking link to our benefit. From the example above it is clear that you have 4 spoke locations in the entire setup, and the best possible location for the blocking link is the link between the second and the third spoke.

To do this we need to understand which factor of blocking port selection is actually in our hands.

1. Designated bridge port........ No.
2. Port priority....... Maybe, but when you have a large number of ports in hand it is better not to touch it.
3. Path cost..... Yes.

The path cost is something that is in our hands, so if we make the path cost of the segment between the second and the third spoke the highest, that link will most probably become the blocking link.

We also keep the path costs of the other links much lower. Remember the basic statement of RSTP:

"For N number of paths to the root bridge from a bridge N-1 Paths would be in a blocking state"

So the computation of the paths is done by taking the sum of the costs of all the segments from the root to the bridge in question.
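This summation can be sketched with a shortest-path computation (Python; the costs are invented for illustration): cost the second-to-third-spoke segment high, and every bridge reaches the root without it, leaving that segment as the blocking link:

```python
import heapq

def root_path_costs(links, root):
    """Dijkstra over segment costs: cheapest cost of reaching the root,
    per bridge (each bridge keeps its cheapest path; the rest block)."""
    cost = {root: 0}
    heap = [(0, root)]
    while heap:
        c, node = heapq.heappop(heap)
        if c > cost.get(node, float("inf")):
            continue
        for nbr, seg in links.get(node, []):
            if c + seg < cost.get(nbr, float("inf")):
                cost[nbr] = c + seg
                heapq.heappush(heap, (c + seg, nbr))
    return cost

# Hub-rooted ring of 4 spokes; the spoke2-spoke3 segment is costed highest.
links = {
    "hub":    [("spoke1", 10), ("spoke4", 10)],
    "spoke1": [("hub", 10), ("spoke2", 10)],
    "spoke2": [("spoke1", 10), ("spoke3", 1000)],
    "spoke3": [("spoke2", 1000), ("spoke4", 10)],
    "spoke4": [("spoke3", 10), ("hub", 10)],
}
print(root_path_costs(links, "hub"))
# spoke2 is cheapest via spoke1 (cost 20) and spoke3 via spoke4 (cost 20):
# neither best path uses the expensive segment, so spoke2-spoke3 blocks.
```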

Look at the picture below.

 

The picture explains the following.

> First of all, the root bridge is selected as the HUB, so both its egress ports, and indeed all its ports, are in forwarding state.

> Then the link between the second and the third bridge is assigned the highest path cost. This makes that link the blocking link.

However, let us understand how optimal that is. For this, look at the next picture.


Now, as you can see, there is double usage of the same 100 Mb/s infrastructure. This happens because the RSTP blocking path has been planned optimally. It means that in the ideal scenario, where there is no ring cut, double the BW is available across the sites. Efficient planning like this leads to more usable EIR (Excess Information Rate), which can be sold at best-effort prices.

Thus this kind of planning either saves cost in a traffic-sparse area by conserving resources, or generates more revenue in a traffic-opulent region with less investment.


In the next part we will study how to construct services in the RSTP configuration. Many more parts to go.

Till then some tips.

1. Understand the requirement of BW per POP rather than only taking the holistic picture.
2. Compute what the total BW requirement will be and carefully place the blocking link.
3. Try to divide the RSTP ring in a pattern where efficient division of BW can happen. This is not technology, this is basic mathematics which you and I have all studied; you don't need a crash course for it.
4. Listen to the requirement of the customer.... because if he wants to kill a mosquito, do not bring a tank.


And happy planning and provisioning.

Cheers!!!!!

Kalyan...

Saturday, April 13, 2013

RSTP / MSTP how to use them and how they are mis-used. (PART-1)

Dear Friends of my Transmission Fraternity,

First of all I would like to apologize to my transport fraternity for being so late in posting a new topic. I was actually very busy these days, and was searching for the right topic with which to start this chain of posts. Many of my friends have now evolved from being pure TDM guys into people who have a good hand at TDM as well as Ethernet technologies.

However, like every newcomer, the newly evolved also have a tendency to go overboard and get carried away. Just like babies who take their first successful step, get carried away and attempt further, more ambitious steps. The result: a fall, a stumble, or worse, getting hurt.

Today's topic deals with one such stumble. But before I start, please note one thing, and note it very seriously: technology always changes and evolves, so carrying a baggage of so-called "experience" in your head will not help. Technology should be learnt with the attitude that every day you are a fresher.

The word "Experience" is good to understand what were your mistakes in past and what "Should NOT be done" however what SHOULD be done is governed by serious rules of technical understanding and YOUR and only YOUR SKILL........

Today we are going to talk about RSTP/MSTP in Layer-2 networks. Understand that RSTP/MSTP should only be used when you are running a Layer-2 emulation. When Ethernet only needs to be encapsulated in SDH and sent from one point to another, Layer-2 should not be mixed in.

As I mentioned in the previous blog, the EoS trail is just infrastructure and not the entity that decides the flow of the traffic. The actual decision on traffic flow is taken by the VPN, or the EVC.

Misconceptions about RSTP:

1. RSTP is a protection mechanism in Layer-2.
2. RSTP should always be used in L2 implementations.

 Facts:

1. RSTP is a loop avoidance mechanism when Ethernet rings are formed.
2. RSTP is not mandatory; it should be used judiciously for L2 deployments, and is best used when the number of NEs is small.


I am not explaining what RSTP is, because you can find that in the link below.

RSTP Explained properly in Wikipedia

We should clearly understand when to use RSTP and, most importantly, when not to enable it.

RULE -1: USE RSTP ONLY IN SMALLER DOMAINS

The rule is very simple. Remember that RSTP works through a lot of BPDU exchanges from one element to another. So unnecessarily loading more and more elements into one RSTP domain can create, and in fact is creating, problems in many networks. The best practice is to create smaller domains of RSTP.

In case these domains need to be interconnected with each other, this should be done in one of two ways:

1. By means of Trunk trails where RSTP is disabled.
2. By means of Layer -3 segmentation.


As you can see in the picture, the various domains or rings of RSTP are segregated by interconnections over RSTP-disabled links. If traffic were to travel from an NE in Ring-1 to an NE in Ring-2, it would have RSTP loop prevention inside each ring and would then travel over the gateway links.

The gateway links can be protected by means of Layer-1 protection or ASTN. If these links sit between two routing elements, they run OSPF or simple CIDR alternate routes.

What happens if you enable the RSTP on these links?????

Well, this is a mistake that all my transmission friends make, and yes, they pay heavily for it and ring the guts out of the support centres.

Understand that RSTP is a topology-based protocol. If it is enabled throughout, you will not have multiple domains of RSTP but one single converged domain, and hence only one root bridge in the entire setup, which could end up being any element. The BPDU flow paths are now much more complex than simple ring paths, which makes creating flows and VPNs more difficult, and troubleshooting more difficult still.

This also overloads the CPUs of the NEs, which results in NEs getting stuck, NEs being mismanaged and traffic going haywire. In short.... total.... sheer.... MESS......
Of course, it also makes your boss and your management wonder: "Will the TDM folks be able to manage this, or not?"....

So, my transmission fraternity, please do not make this mistake; understand the intricacies.


RULE NO: 2 SELECT A PROPER ROOT BRIDGE

If you read the RSTP document very carefully, you will understand that the selection of the Root Bridge happens first. Letting the protocol select the root bridge by itself is the work of a nerd fresh pass-out who lets the system control him. A clever planner and a true transport "ENGINEER", not a "FREAKING ONE-MONTH COURSEWARE", will call his own shots and plan the root bridge the right way.

So interpret the protocol document carefully. The purpose of the Root Bridge is to anchor the entire computation of the network, so the root bridge should preferably be the aggregation point of the network. Most transmission networks are of the aggregate and collector/access kind, so it is very easy to identify which node is an aggregator and which is an access node.

The true engineer will lower the bridge priority of the aggregation node and keep the priorities of the access nodes at par. He will also set the bridge priority of the intended backup node to the second-lowest figure, so that if the root bridge fails, that NE and only that NE becomes the alternate root bridge.

This system gives him good control over the network, enables better troubleshooting and moves him towards a self-healing network.
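The priority plan can be sketched like this (Python; the priority values are typical examples, not a vendor recommendation). The lowest priority wins the election, ties being broken here by name as a stand-in for the lowest MAC address:

```python
# Hypothetical sketch: plan priorities so the aggregation node always wins the
# root election (lowest value wins; 32768 is the common default).
bridge_priority = {
    "aggregation": 4096,     # deliberate root
    "alt-agg":     8192,     # deliberate alternate root
    "access-1":    32768,
    "access-2":    32768,
}

def elect_root(priorities, failed=()):
    """Root election among alive bridges: lowest priority wins, ties broken
    by name (standing in for the lowest MAC address)."""
    alive = {b: p for b, p in priorities.items() if b not in failed}
    return min(alive, key=lambda b: (alive[b], b))

print(elect_root(bridge_priority))                           # 'aggregation'
print(elect_root(bridge_priority, failed=("aggregation",)))  # 'alt-agg'
```

With deliberate values, replacing or adding an access NE can never steal the root role.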

I am putting two pictures here: one is the right way and one is the wrong way. Let us see the wrong way first.


The next picture is the right way of configuring the RSTP ring, which is what a true transport engineer does.

What happens if you do not follow this rule????

Hmm, there can be many excuses for not following this rule, e.g.

1. Why should I not stick to defaults? Well, the default password for Windows is also no password, and even that you are asked to change.

2. My guidelines from planning say not to change default values??? I wish your guidelines also repaired your faults when they occurred and troubleshot themselves. But unfortunately they don't.

3. I will have to maintain a table? If you are sensible enough, you will know for sure which node is the aggregation node. So you won't need to maintain a table.

The bottom line is that when you assign the bridge priorities yourself, the root bridge and its alternate are sure shots. So even if there are replacements in the rings, those replacements do not change the overall RSTP computation.

This means we retain control all the time, even when replacement, removal and addition activities are done in the network. You don't need to worry about your RSTP going in for new root bridge elections again and again.

Another thing is that it helps you optimize traffic, which we will discuss in the next part.


I will post many more such articles on the deadly RSTP and MSTP.... Till then, my transport friends, please remember the following.

1. Do not play with parameters in data networks, as each and every one of them has its significance.
2. Before doing deployments with only the defaults in mind, first understand the technology.


Still to come in the next parts.

1. Blocking port selection.
2. Traffic optimization.
3. Flow routing in RSTP.

Have a great week ahead.

Cheers!!!!!

Kalyan.

Thursday, October 11, 2012

Understanding Cross talk prevention in EoS Circuits



The piece I am writing today is prompted by one of the questions raised by Javin Shah. Javin asked me on Facebook a very interesting question pertaining to the last blog post.

“In the recommended configuration of using only one EoS across two different elements for different services, what would happen if there is a problem or looping in one of the services? Would it not also loop my other service? In such a case, why should I not go with the separate EoS trail approach?”

This is one of the major worries due to which a transmission engineer prefers to build separate EoS trails for separate services rather than consolidate them in one NNI. This is the point where the user is considering the EoS to be a service, not an infrastructure.

When I say the word “infrastructure”, let us first understand what infrastructure means in the context of telecom transmission provisioning.

Infrastructure is an entity on which (and this preposition is very important) the actual service runs. So if the service is a VC-12 trail, the infrastructure for the VC-12 trail is the channelized VC-4, and the infrastructure for the channelized VC-4 is the optical fiber link.

Please see the figure below for a ready reference. 


As we can see in the figure, the actual service (the entity carrying the traffic) is the VC-12 service trail from one point to another. This service rides on the channelized STM-1, that is, the terminated VC-4, and the terminated VC-4 rides on the optical fiber link, be it STM-1/4/16 or 64.


A thing to note over here is that :

One Optical Fiber link may contain many Channelized STM-1s
One Channelized STM-1 may contain many VC-12 Services.

So the Optical fiber is the first layer of infrastructure and the Channelized STM-1 is the second layer of infrastructure.

Now if a user desires to make two VC-12 service trails, it is not required to have another optical fiber or another channelized STM-1. The infrastructure is common for the two VC-12 trails. However, the traffic of one VC-12 trail does not inter-mix with that of the other, nor are the ill effects of one service carried over to the other. Each VC-12 trail behaves as an individual service.

That is to say, the services on the two trails are segregated and do not interfere with each other, positively or negatively.
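As a back-of-the-envelope illustration of the layering (Python; these are the standard SDH multiplexing numbers), one channelized VC-4 can carry up to 63 independent VC-12 services, so the shared infrastructure is anything but scarce:

```python
# Capacity sketch of the layering: one fibre carries one or more channelized
# VC-4s, and each VC-4 carries up to 63 mutually isolated VC-12 services.
VC12_PER_VC4 = 63   # 3 TUG-3 x 7 TUG-2 x 3 TU-12

def vc12_capacity(stm_level):
    """VC-12 service slots on one fibre link of a given STM-N level."""
    vc4s = {1: 1, 4: 4, 16: 16, 64: 64}[stm_level]
    return vc4s * VC12_PER_VC4

print(vc12_capacity(1))   # 63 isolated services on one shared STM-1
print(vc12_capacity(16))  # 1008 on an STM-16
```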

The same actually applies to Ethernet services on EoS. However, here the definitions change slightly, as the service layer moves one layer up; remember, we are doing this at Layer-2. For our Layer-2 services, or VPNs, the following picture shows the mapping of the infrastructure and the service.



K-L-M IN SDH IS SAME AS VLAN IN ETHERNET (FOR OPERATIONAL PURPOSES):

In SDH/TDM services, the K-L-M indicator is the service differentiator; similarly, in Ethernet services, the service delimiter is the VLAN. Just as in SDH a new service trail is built for a new K-L-M, in Ethernet we have a different VPN for a different VLAN.

Hence, the VLAN actually forms the basis of the demarcation of the service. 

HOW IS VLAN DIFFERENT FROM THE K-L-M:

Having understood VLAN as the basis of a service, note that it is actually different from the K-L-M. The VLAN is a tag put on the data payload so that the payload can be identified and carried transparently through the packet network in a VPN, without any kind of interference. The link below provides a conceptual description of the VLAN.


A VLAN also carries priorities, so that we can have multiple streams in one VLAN with separate properties. This prioritization is something Ethernet networks have over and above SDH (it will be explained later).

A K-L-M is a logical indicator in the physical layer, whereas the VLAN is an instance separation of different streams.

However, for operational purposes, just as we can map different services to different K-L-Ms on a fiber, we can map different Ethernet services to different VLANs on the same EoS infrastructure.



WHAT IS TO BE UNDERSTOOD FURTHER:

Just like we do not have any cross-talk between the K-L-Ms of two different trails on the same Fiber link same way two different VLANs do not share information with each other even if they are on the same EoS trail.

Hence, when one service is affected or looped it is only that service which will face a problem and not the other service as the VPN/VLAN is different.

This is explained in the figure below. 


SO WHAT SHOULD MY TRANSPORT FRATERNITY REMEMBER:

1.       The service layer is to be identified for each and every service. A new service in Ethernet is not a new trail, the trail can be the same however the service is identified by VPN. This is the basic reason why the end user doesn’t mention port number of K-L-M in such cases. They mention VLAN.
2.       Data of one Service never interferes with the data of another service.
3.       If there is a malfunction/looping/broadcast in one of the segments of the service then the other service is never impacted.
4.       The user should remember that the VLAN can be reused, just like the K-L-M can be reused in a different segment.
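Points 2 to 4 above can be illustrated with a toy per-VLAN forwarding table (a sketch, not any vendor's implementation): because the learning state is keyed by (VLAN, MAC), streams in different VLANs never see each other's entries, and addresses can be reused per VLAN.

```python
from collections import defaultdict

# Toy per-VLAN MAC learning table. A loop or broadcast storm in one
# VLAN floods only within that VLAN and never touches the entries,
# or the traffic, of another VLAN on the same trail.
fdb = defaultdict(dict)   # vlan -> {mac: port}

def learn(vlan: int, mac: str, port: str) -> None:
    """Record which port a MAC was seen on, within its VLAN only."""
    fdb[vlan][mac] = port

def lookup(vlan: int, mac: str) -> str:
    # Unknown destination: flood, but only inside this VLAN
    return fdb[vlan].get(mac, "flood-in-vlan")

learn(100, "aa:bb:cc:00:00:01", "port-1")   # customer 1's VLAN
learn(200, "aa:bb:cc:00:00:01", "port-7")   # same MAC reused in customer 2's VLAN
print(lookup(100, "aa:bb:cc:00:00:01"))     # port-1
print(lookup(200, "aa:bb:cc:00:00:01"))     # port-7  (independent instance)
```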


SO MY FRIENDS, REMOVE YOUR APPREHENSIONS, GET OUT OF THE TYPICAL TRAIL PROVISIONING COCOON AND IDENTIFY WHAT TO PROVISION WHEN…………







Tuesday, October 2, 2012

VCG, LCAS and the Pass-Through SDH concept


“There is member Failure in my VCG----- Link is going Down”

We may often come across such a problem. I am saying this taking a cue from the previous post and also from the kind of cases operations teams come across.

My friends from the transport group who have recently plunged into this field (EoS) should actually be conversant with three things here.

1.       VCG: Stands for Virtual Concatenation Group. This sits only on the card or module that is responsible for encapsulating Ethernet into SDH and that hosts the GFP or GEoS object.

2.       LCAS: Link Capacity Adjustment Scheme. This deals with dynamic payload control and ensures that the entire link does not go down when only one member of the VCG goes down. The link below, from the ITU-T, describes LCAS in detail.


3.       Pass-through Elements: These are elements in the intermediate section of the link that deal only with the cross-connection between two different VCs. They do not contribute to any kind of data processing; they only contribute to the channelization of the path for the data traffic.
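The LCAS behaviour in point 2 can be sketched as a toy model (illustrative only; the real LCAS signalling defined in ITU-T G.7042 is far richer): with LCAS, a failed member is removed from the group and the link stays up at reduced capacity; without it, one member failure takes the whole group down.

```python
VC4_MBPS = 149.76  # approximate payload of one VC-4 member

class VCG:
    """Toy model of a VCG with LCAS-style dynamic payload control."""

    def __init__(self, members: int, lcas: bool):
        self.ok = members * [True]   # health of each VC member
        self.lcas = lcas

    def fail_member(self, i: int) -> None:
        """Simulate AIS/failure on member i."""
        self.ok[i] = False

    def capacity_mbps(self) -> float:
        if all(self.ok):
            return len(self.ok) * VC4_MBPS
        if self.lcas:
            # LCAS removes only the failed members from the group
            return sum(self.ok) * VC4_MBPS
        return 0.0  # no LCAS: one bad member takes the group down

g = VCG(members=4, lcas=True)
g.fail_member(2)                 # AIS on one VC-4 of the group
print(g.capacity_mbps())         # ~449.28: link stays up, degraded
```

This is exactly why the complaint "member failure brings my whole VCG down" points at LCAS being absent or misconfigured at the endpoints.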

A typical configuration of the link, in the unprotected domain is shown as follows.




A thing to note in this link is that the drop points of the SNCP connection are the data card EoS objects, and LCAS is also enabled on the data card at the end multiplexer. The work of the pass-through object in this kind of connectivity is limited to the SDH only.

Also keep in mind that the pass-through object does not in any way contribute to data processing. The VC-4s are all individual VC-4s connected by means of a cross-connect fabric that is purely TDM.

If we look at the implementation logically, it is actually perceived as follows.


As we can see in this figure, Ethernet processing, LCAS and virtual concatenation are all done on the terminal data cards, which also act as the SNCP drop points. The SNCP switchover also happens on the VC members of the VCG on the DATA card and NOT ON THE SDH card.

Now why I am saying this? Let us look at a fault over here in the next figure.


As we can see, the complaint notes that one of the VCs in the line had an AIS, which was hitting one of the members of the EoS group. This was actually bringing the entire EoS group down (which it should not).

However, the expectation of the person looking at this fault maybe as follows (shown in the next figure). 


As we can see in this figure, the fault attendant was actually expecting the connection to switch at the pass-through level, which for the obvious reasons mentioned above should not be the expectation. This is because the pass-through element neither has the protection connection nor has any information about the grouping that exists.

So the expectation that the switch should take place at the pass through level is a wrong expectation in such kind of configurations.

So then how to resolve this problem?

Ø  First of all, it should be ensured that LCAS is enabled at both end-point locations, that both ends run the same variant, and that they are compliant with each other.
Ø  The SNCP connection should be checked, because if the SNCP is perfect there should not be any problem in the switch-over. The SNCP switch-over should happen at the terminal end for only one VC-4, and with LCAS enabled at both locations this should not lead to the link going down.
Ø  The SNCP variant being run should be a non-intrusive SNCP, which also responds to errors in the VC. Please remember, SNCP works at the VC-4 level whereas LCAS works on the VCG (multiframe level). And since the pass-through element has neither SNCP nor virtual concatenation, this operation is always done from the endpoints.

Hence, if we look at it clearly, the actual resolution should be as shown in the figure below.



So what are the things to remember in such configurations?

1.     The traffic is Ethernet encapsulated in SDH and the encapsulation and termination actually happens at the endpoints where the EoS object or Virtual Concatenation Group is present. The SNCP connection endpoints and the switching points are also present over there and not anywhere else.
2.     The pass-through element is just like a repeater which patches the VC-4s. This is a point where we can actually have a J-swap, that is, a change of the VC-4 number, like in any other TDM cross-connect.
3.     Member going down should be addressed at the VCG level.
4.     Always keep the LCAS in synchronized mode.
5.     If required check trace of each and every VC-4.
6.     SNCP to be kept in non – intrusive mode.


So in the next post I will share some interesting facts about implementing optimization in the LAN network using L2 devices, and also the concept of VLAN. Till then..... Goodbye.... 







Saturday, September 29, 2012

EoS in the initial phase of Evolvement ( EoS is an Infrastructure and not a Service!!!)


As we evolve from traditional TDM towards a complete Ethernet back-haul, we have taken the entry-level step of including the Ethernet traffic on EoS. We also studied the cost advantages of including EoS in this initial phase of the evolution.

A person who has been with transport TDM technology for a long time has something to cherish here. EoS is not a technical shock for this person; he/she is able to absorb it gradually. However, EoS is not SDH, and that needs to be clearly remembered. They should not be carried away by the thought, “Well, this is just another variant of SDH, so let us do all the things that we have been doing so far.”

This is very overconfident and very incorrect thinking by a transport engineer. This thinking itself leads to problems in the network and forces your planners to think of expensive means to grow the network and, most importantly, to outgrow you. I am sure nobody wants such imbalance in the network or in the organization, so with the change of field we need to understand the change of rules.


So here are some major tips for My Transport Fraternity

1.       EoS is not a service trail. It is actually meant for carriage of service and is not the service itself.

2.       An EoS interface is not an SDH interface. It is similar to a gig port of a router or a switch, except that here the port interface on the card is logical and is cross-connected with the SDH K-L-M. This means every EoS trail that you make actually consumes a port, so you have to understand that every new service should not mean a new port in the EoS. A new EoS port should be consumed when, and only when, a link is to be created to another destination that is not “physically or geographically” connected to the ring. I am posting below some pictures of the right approach and the wrong approach to realizing services in the case of EoS. We will take the following example:

     There are two customers taking a drop from the same multiplexer that has a data card. One of these customers has a service commitment of 20 Mb/s and the other a service commitment of 30 Mb/s. Point A and Point B are connected by one SDH protected link, as shown in the figure below. 




Let us see the right and the wrong way of implementing them.

Let us look at the WRONG Implementation First. (Which Most of Indian Transport Planners do)





What is wrong in this?

While for many transport engineers/planners and NOC engineers this configuration may seem to be the best that can be created, it is actually the most horrible configuration one can make.

Ø  First of all, different customers using different EoS trails itself proves that the transmission is not planned and optimized properly.
Ø  It means that for every customer there would be an EoS infrastructure trail entity, which also means that the card ports will be exhausted very soon.
Ø  For the planner, this means you would constantly be needing new cards, sooner rather than later. Your management is soon going to take you to task and ask why on earth you keep adding new cards. Trust me.
Ø  More complexity in the topology, because the SDH link is the same but there are two or more (depending on the customers) EoS links. 
Ø  This configuration means that the transport engineer or planner is still looking at the implementation as a pure SDH implementation, which it is not, and not looking at it from the Ethernet perspective. He should be told that the customer is actually looking for an “ETHERNET SERVICE” and doesn’t really care how many EoS trails the TX guy has made. 
Ø  The implementation is not optimized and in some cases can be disastrous if RSTP is involved. (This will be explained in my next blog posts.)


Now let us have a look at the Right implementation. (Which very few of Indian Transport Planners Do)






So what is so right about this configuration?

Ø  First of all, the planner has correctly conceived the services as Ethernet services, so the segregation is done at the Ethernet level from the start, by means of separate VPNs.
Ø  Secondly, the planner has used only one EoS trail of 50 Mb/s, which can be scaled up as per future requirements. He/she is looking at the EoS as an infrastructure and not as a service, and is thus able to optimize the usage of the data card/switch.
Ø  He/She is able to look at the service as a complete Ethernet service that may ride over the transport; however, different services over the same physical topology need not take different infrastructure.
Ø  The 20 Mb/s and the 30 Mb/s commitments are actually enforced by the rate limiter in the VPN.
Ø  The planning leaves more room for further augmentation of BW if required, and also for the addition of another service over the same EoS trail infrastructure.
Ø  The configuration actually enables bandwidth sharing and EIR up to 50 Mb/s for each service while keeping the committed SLAs intact at 20 Mb/s and 30 Mb/s respectively.
Ø  This guy/gal is actually achieving all the facilities of Ethernet services without loading his/her CAPEX, and is thus impressing his/her planning bosses and management. 
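The CIR/EIR behaviour above can be sketched as a simplified admission check for the two customers sharing the 50 Mb/s trail (a sketch only; the function and service names are illustrative, not a vendor API, and real shapers use token buckets per frame rather than rates):

```python
TRAIL_MBPS = 50  # the single shared EoS infrastructure trail

services = {  # hypothetical customer profiles from the example
    "customer_1": {"cir": 20, "eir": TRAIL_MBPS},
    "customer_2": {"cir": 30, "eir": TRAIL_MBPS},
}

def admitted_rate(name: str, offered: float, others_in_use: float) -> float:
    """Committed rate (CIR) is always honoured; excess traffic may
    burst up to EIR, but only into bandwidth the others leave spare."""
    s = services[name]
    committed = min(offered, s["cir"])
    spare = max(TRAIL_MBPS - others_in_use - committed, 0)
    excess = min(max(offered - committed, 0), s["eir"] - committed, spare)
    return committed + excess

# Customer 2 idle: customer 1 can burst toward the full trail (EIR).
print(admitted_rate("customer_1", offered=45, others_in_use=0))   # 45
# Customer 2 at its CIR of 30: customer 1 is held to its CIR of 20.
print(admitted_rate("customer_1", offered=45, others_in_use=30))  # 20
```

This is the whole point of the right implementation: both SLAs stay intact, yet each service can opportunistically use the full 50 Mb/s.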


Thus my dear companions of Transport Fraternity, we need to remember the most important thing in the first phase of evolution.

“ EoS is a boon when it is considered as an infrastructure of carrying Ethernet services on the TDM back-haul, however it becomes a major liability and a cause of headache when it itself is considered as a Service.”

In very simple words “STOP CREATING SEPARATE EOS TRAIL INFRASTRUCTURE FOR SEPARATE ETHERNET SERVICE REQUIREMENTS AND STOP BEING YOUR OWN EXECUTOR.”

EoS has a very good advantage in your network today if the data traffic volume is small and can be accommodated in the transport, versus building native Ethernet.

1.       EoS is the only infrastructure where you can achieve variable rates of BW in the link. Remember, pipes of 2, 32, 48, 64, 150, 300, 600 Mb/s and various such combinations can be achieved only in EoS. This is because EoS can concatenate the SDH path objects at various levels, such as VC-12, VC-3 and VC-4.


2.       EoS is the only infrastructure where, in addition to the L2 protection schemes and the running of internal L3 protocols, you can also achieve TDM carrier-grade protections like MSP 1+1, MS-SPRing and SNCP/I and SNCP/N.
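The variable pipe sizes in point 1 come from virtual concatenation: a VC-n-Xv group carries X times the payload of one container. A small sketch, using approximate container payload rates:

```python
# Approximate payload rates of the SDH path objects (Mb/s)
CONTAINER_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcat_rate(container: str, x: int) -> float:
    """Approximate rate of a VC-n-Xv virtual concatenation group."""
    return CONTAINER_MBPS[container] * x

for container, x in [("VC-12", 1), ("VC-12", 15), ("VC-3", 1),
                     ("VC-4", 1), ("VC-4", 2), ("VC-4", 4)]:
    print(f"{container}-{x}v = {vcat_rate(container, x):.1f} Mb/s")
# The loop recovers the pipes named in the text: VC-12 gives ~2,
# VC-12-15v ~32.6, VC-3 ~48, VC-4 ~150, VC-4-2v ~300, VC-4-4v ~600.
```

Native Ethernet ports, by contrast, come only in fixed 10/100/1000 steps, which is exactly why EoS wins on granularity.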

For my companions from the Routing and “ALL – IP” Fraternity……

EoS is not an SDH implementation. It is merely a link between two devices, which may be L2, L2.5 or L3, using the SDH infrastructure. It is something like PoS; however, unlike PoS, it actually looks for the Ethernet header in the raw input.

EoS is as capable of carrying all the functions of the “ALL-IP” Transport (and please note this word, TRANSPORT) as any native Ethernet carrier.

Actually, in some cases, for the initial phase of deployments, if used judiciously as mentioned above, it is better, less costly and more efficient than the native variant of Ethernet.


In the next blog post we will see about how to optimize the physical interface also taking some help from the routing fraternity and thus reducing your cost on transport. 

"More you be with the science in the judicious manner, less you will spend on unnecessary events in your network."