Monday, August 18, 2014

CATV distribution and Conditional Multicast

"Looking through the glass always does not help building vision, vision is built when one actually steps out of the door to the open land and experiences it with all the senses that he/she possesses."

Aristotle

My Dear Friends of Transmission Fraternity, 

How are all of you? I hope you have read the last article on Multicast data transmission that I had put forward. The matter that I am going to present in this post is actually a continuation of the same.

1. Recap on the Multicast Traffic:


Multicast traffic is a stream of data whose destination address, on entering a switching device, causes the stream to be flooded to all ports except the one on which it was injected.
As far as the flow is concerned the behaviour is the same as broadcast; however, there are a few things that set multicast traffic apart.
Multicast traffic can be grouped by means of IGMP and controlled by user-based policy, whereas broadcast traffic can only be restricted by means of BSC (Broadcast Storm Control).
Multicast transmission exploits the flooding state of the switch to its advantage and thereby optimizes the resources in the network, envisioning a drop-and-continue architecture of traffic delivery. This is how more and more traffic can be pumped to more and more locations.
The network reservation of resources does not depend on the number of users; it depends on the number of channels/streams that are to be delivered in the multicast delivery model.
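To make the IGMP point concrete, here is a minimal Python sketch (the port numbers and the group address are purely illustrative, not from any real deployment) of how an IGMP-aware switch can constrain a multicast group to the ports that joined it, while broadcast is always flooded:

class AccessSwitch:
    """Minimal sketch: broadcast always floods, but a multicast group can be
    constrained to the ports that joined it (this is what IGMP gives the
    operator to control the streams)."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.members = {}                      # group address -> set of joined ports

    def igmp_join(self, group, port):
        self.members.setdefault(group, set()).add(port)

    def egress_ports(self, dst, in_port):
        if dst == "broadcast":
            return self.ports - {in_port}      # unconditional flood
        return self.members.get(dst, set()) - {in_port}

sw = AccessSwitch(ports=[1, 2, 3, 4])
sw.igmp_join("239.1.1.1", 2)
print(sw.egress_ports("broadcast", in_port=1))   # {2, 3, 4}
print(sw.egress_ports("239.1.1.1", in_port=1))   # {2}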

2. Optimizing more resources on the Multicast delivery scheme:


Previously we saw the very basics of the transmission of multicast traffic in the system. This kind of plain multicast works very well in the upper distribution layer. In a normal CATV delivery system there is an upper distribution layer and then a lower distribution layer, which can also be called the access layer. Generally the upper distribution layer is rich in resources and not much conditional access is deployed there. It is the lower distribution layer that is starved of BW, and that is where a conditional multicast can do wonders in the delivery scheme.

Let us see the actual model of the Multicast delivery scheme in a CATV network in ideal scenarios.

Distribution in a CATV network system
As we can see, a CATV entity is not just one entity. The CATV operator, the person who is actually your cable operator, is not the one who owns the content of the channels. There is a particular content owner who holds the rights to the channels; this is usually a big company. They have a master location from where the content is fed into their main router, and this starts a multicast drop and continue across all the locations.

So there is a content owner, who actually owns the channels, and then there is the upper-layer distribution entity, called the content distributor. Big distribution companies, ISPs or managed CDNs do this at their layer. Then there are sub-operators or Local Cable Operators (LCOs), who have their own infrastructure in certain areas of the city to feed their subscribers directly. The LCO model can also have a sub-LCO entity that owns a single point of distribution to the customer, but such deployments and arrangements are relatively rare.

2.1 Distribution done by the Content Distributor:


As explained before, the content distributor does a plain multicast delivery because the probability of repeatability at that layer is very low. At that layer all the sub-operators demand access to all the channel feeds at the same time, so that they can distribute them to their subscribers whenever required. In this case all the channels must be available in the upper distribution network all the time. So this is a kind of unconditional, plain multicast in the network with a drop-and-continue arrangement. The picture remains the same as shown below.

Drop and Continue Distribution and Upper Distribution Layer
As we can clearly see in the picture, there is proper load sharing in the drop-and-continue architecture present in the upper distribution layer. This ensures proper flow of traffic to all the points with the least processing load on the routing entities.
Assuming there are about 200 channels in the system and each channel has an average BW of 5 Mb/s, this is a total infrastructure of 1 Gb/s that is provisioned. More BW can be provisioned if there are more channels. Precisely how the BW engineering is done in the upper distribution layer will be explained later, as it has to take SD, HD, 4K and audio channels into consideration.
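Just to give a feel of that BW engineering, here is a back-of-the-envelope sketch in Python; the channel counts and per-channel rates below are hypothetical planning figures of my own, not standard values:

def distribution_bw(line_up):
    """Sum of all channel rates: the upper distribution ring must carry
    every channel all the time, whatever the audience."""
    return sum(count * rate for count, rate in line_up.values())

line_up = {                 # hypothetical line-up, not standard figures
    "SD":    (150, 3),      # 150 SD channels at ~3 Mb/s each
    "HD":    (40, 8),       # 40 HD channels at ~8 Mb/s each
    "audio": (10, 0.25),    # radio/audio services
}
print(distribution_bw(line_up), "Mb/s")   # 772.5 Mb/s to be provisioned end to end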

2.2 Distribution done at the Local Cable Operator Layer:


While the distributor may have the luxury of a 1G or 10G BW provisioning to carry all the channels in a plain multicast fashion, the same may not be the case for the access network provider, the local cable operator. The local cable operator has the feed of all the channels in his DSLAM box, from which he multicasts the content to the access. However, there is a difference here. The part played by the local cable operator is a bit more advanced, as he performs a kind of conditional multicast in the system. The local cable operator uses IGMP to do this distribution.
For the theoretical details of IGMP, please read the following links:

IGMP Wikipedia Link
IGMP V2 (RFC 2236)
IGMP V3 (RFC 3376)

2.2.1 Traffic in the Head end:


The head end of the LCO acts as the IGMP Root of the system and takes in all the channels from the distribution layer of the distributor. Let us see the figure for this.

The Local Cable Operator Distribution topology
As we can see, the local cable operator distribution topology takes in all the channel feeds from the upper-layer distribution; however, it does not feed all the channels to the access network at the same time.

2.2.2 Repeatability/Probability factor and conditional distribution:


The local cable operator works in a particular region of the city, so there is quite a possibility that all channels may not be in demand at the same time in the network. Let us take the example of a particular region in the afternoon. At this time the demand may be only for the channels showing movies and some news channels.

Let us say that across the region depicted in our diagram there is a request for CNN, HBO and FTV only, coming from two regions.

Users requesting channels as they press the channel button

As we can see here, the users have requested the channels and these requests travel from the leaf DSLAMs to the root DSLAM as channel requests. The root DSLAM registers the fact that requests have come only for CNN, HBO and FTV, and so it populates the network resources with these three channels only.

Thus the network does not take the load of all the channels but only of the 3 channels that have been requested, so the load decreases to 15 Mb/s from a mammoth 1 Gb/s.

The root also keeps track of the side, or direction, of the network from which the requests are coming, so as to provide the feed only in that direction. In this case the bottom leaf is not demanding any content, so there is no need to populate the bottom leg with content. However, there are demands for three channels from the left and two from the right. So the root sends the streams such that 15 Mb/s flows towards the left part of the ring and 10 Mb/s flows on the right side of the ring. This further optimizes the bandwidth.

Let us have a look at the picture below for a clear understanding.

On the Left there is consumption of 15Mb/s and on the right there is consumption of 10Mb/s

As and when requests for new channels come in, more resources will be filled.
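A minimal sketch of the root DSLAM behaviour described above; the channel names, the directions and the 5 Mb/s per-channel rate are just the figures of this example:

from collections import defaultdict

CHANNEL_RATE = 5   # Mb/s per channel, as assumed throughout this example

class IgmpRoot:
    """Sketch of the root DSLAM: a channel is injected towards a ring
    direction only if at least one leaf in that direction has joined it."""
    def __init__(self):
        self.joins = defaultdict(set)          # direction -> set of channels

    def join(self, direction, channel):
        self.joins[direction].add(channel)

    def load(self, direction):
        return len(self.joins[direction]) * CHANNEL_RATE

root = IgmpRoot()
for ch in ("CNN", "HBO", "FTV"):
    root.join("left", ch)
for ch in ("CNN", "HBO"):                      # any two channels from the right
    root.join("right", ch)
print(root.load("left"), root.load("right"))   # 15 10 (Mb/s), as in the figure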

2.2.3 New Leaf Demanding the same Channel:


In our case above, the leaf at the bottom was not demanding any channel. However, let us understand what happens if a user on the bottom leaf now turns on the set-top box and starts demanding content that is already flowing in the system. Say the user demands HBO, which is already flowing. In this case the HBO content has already been sent to the left-side leaf, so the bottom leaf immediately gets the content from the left-side leaf.

New leaf demanding content that is already flowing in the system
Now, seeing this, a very interesting question may arise: what if the leaf had demanded content that is available from both ends, say FTV? Well, in that case the thing that comes into play is the link cost. A link cost is a complex function of link BW, quality and available resources. The lowest-cost link will carry the traffic in that case. Of course, when there is a failure the scenario will be different.

There may also be another question in all of your minds. Suppose a content stream, say Fox News, is being demanded only by the bottom leaf and is being served by the root; now one of the users on the left-side leaf requests Fox News. What happens? Will the stream be sourced back from the bottom leaf? The answer is no. If the stream that is already flowing to the bottom leaf is transiting the left DSLAM, then the stream is simply dropped at the left DSLAM and continues to the bottom leaf. However, if the path of the stream to the bottom leaf is from the right, then depending on the link cost the stream can be sourced from the bottom towards the left, or directly from the root.
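Since the exact link-cost formula is vendor and protocol specific, the sketch below only illustrates the idea: compute a cost per candidate link from its bandwidth, quality and current utilisation, and let the cheapest link carry the stream. Every number, and the cost function itself, is an assumption for illustration only:

def link_cost(bw_mbps, quality, utilisation):
    """Illustrative cost only; real implementations are vendor/IGP specific.
    Links with more free capacity and better quality come out cheaper."""
    free = bw_mbps * (1 - utilisation)
    return 1000 / (free * quality)

candidates = {                                 # hypothetical links that could feed FTV
    "from-left-leaf":  link_cost(1000, quality=0.99, utilisation=0.40),
    "from-right-leaf": link_cost(1000, quality=0.95, utilisation=0.10),
}
print(min(candidates, key=candidates.get))     # the lowest-cost link carries the stream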

2.3 How does the Cable operator benefit from this? 


Due to this mechanism of conditional distribution using IGMP, the cable operator gets a heavy optimization of the network resources that he owns. He may not need to provision the full 1 Gb/s bandwidth, or he may provision it but use the same bandwidth to also provide Internet connectivity and voice. This is called triple play, which we will see in later posts. Assuming that he also provides broadband on the same resource, separate engineering need not be done for the TV and the broadband. He can easily put the broadband on the same infrastructure, albeit at a lower CoS than the TV. So when less space is occupied by the TV, there is more BW to play with for the broadband. On the same link infrastructure there can also be a Video on Demand service.

The main idea is that as more people watch the same channels over the same access network, more and more BW is optimized for better distribution, and thus OPEX decreases considerably. So, for instance, when a match is on, most of the people will be hooked to the match, there will be more resource savings, and thus better quality of the system and more efficiency in the broadband.
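As a rough illustration of that headroom (the figures are only the ones used in this example), whatever the IGMP-driven TV load leaves unused on the link can be handed to best-effort broadband at a lower CoS:

def broadband_headroom(link_mbps=1000, active_channels=3, per_channel=5):
    """Whatever the IGMP-driven TV load leaves unused on the link can be
    offered to best-effort broadband at a lower CoS."""
    return link_mbps - active_channels * per_channel

print(broadband_headroom())                      # 985 Mb/s left for broadband
print(broadband_headroom(active_channels=200))   # 0 Mb/s if every channel is active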


So friends, from this we understand that conditional multicast in the access network with IGMP (v2/v3) is very beneficial for the optimization of the whole network. Especially in the cable operator sector, where there is very little infrastructure to play with, this comes as a boon with which the operator can graduate to a small-scale ISP.

We will cover more and more details on this thing called as Multicast. This is very interesting. 

Till then Bye and Take care, 

Cheers,

Kalyan

Friday, August 1, 2014

Video Multicast Systems (Introduction) and Drop and Continue logic

"The word Telecom, is no more related to Conversations, it is related to expressions, declarations, user-experience and emotions"

From my Recent Sojourns

My dear friends of the Transmission Fraternity, 

Hello! How are you? Yes, it has been a long time, but better late than never. In the past few months I almost had a kind of writer's block, but it feels really good to come out of that state and to be able to lay out for you the many things I experienced in the past few months. Of course, all of it cannot be covered in one post, but there will be many of them, I assure you.

There are times in the life of an engineer when he wants to rediscover himself; he feels stagnated and wants to enter new domains, learn new things and satisfy his technical hunger. For that very reason I had taken a kind of sabbatical from the world of technical blogging, because it felt like I had only the same things to write about, without much variety.

Today I bring to you one very important part, or rather a section, of recent telecom trends, and that is the video multicast system.

1. Multicast Traffic:
To understand this clearly we need to first get clarity on what exactly multicast is and what it is not. Whenever communication is done both ways, with equal priority and equal adherence in both directions, it is called unicast. Most telephone conversations, Internet sessions and P2P are examples of a unicast kind of system. In the communications field almost 95% of the cases are unicast, because then and only then can there be the very important "communication"; but as I said in the first statement, the trends in telecom are changing and today multicast is a reality.

Let us in a very simple way understand how a multicast traffic flows. 

General Process of Multicast traffic Flow

As you can see, there is one location which has most of the content that needs to be sent, and this content is sent to a group of users who actually want to view it. Something like watching television or listening to the radio. However, one may have a very elementary question: "If this is what multicast is, then what is broadcast?" A legitimate question, and a very nice one too. Even I had the same doubt, because the pattern of traffic flow in broadcast is the very same.

However, there are certain minute differences between broadcast traffic and multicast traffic, and these are:

1. Broadcast is unconditional within a subnet, whereas multicast can be made conditional.
2. Multicast can be requested, whereas broadcast is not requested; it is just received.
3. There can be grouping of destination and source addresses in the case of multicast, but not in broadcast.
4. Multicast can be restricted at the user plane, but broadcast cannot.

All four of these characteristics we will see in four different posts to follow, so that there is no overdose of study in our regular life, which in itself is so busy.

Let us now delve into some of the main uses of Multicast. 

1. IPTV
2. Any one to Many service. 
3. Multipoint Storage. 

2. Video Transmission of 200 Channels:
Of the many uses of multicast, the best is in the case of video. Video transmission through multicast is one of the best ways of handling the heavy nature of video traffic. Imagine you have around 200 channels, each channel of around 5 Mb/s minimum. If we were to do a unicast transmission, this would mean roughly 1 Gb/s per user, and the same amount of reservation would also have to be done in the access/aggregate network. This is not at all commercially feasible; if it were done, television services would cost you a bomb, even more than a business-class flight ticket from Mumbai to Los Angeles.
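A quick back-of-the-envelope comparison of the two reservation models, using the same 200-channel, 5 Mb/s assumption as above (the viewer count is arbitrary):

def reserved_bw(viewers, channels=200, per_channel_mbps=5, multicast=True):
    """With multicast the reservation depends only on the channel line-up;
    with the per-user unicast model above (the whole bouquet pushed to every
    user) it grows with the audience."""
    if multicast:
        return channels * per_channel_mbps
    return viewers * channels * per_channel_mbps

print(reserved_bw(viewers=1, multicast=False))      # 1000 Mb/s -> ~1 Gb/s per user
print(reserved_bw(viewers=10000, multicast=True))   # 1000 Mb/s, whatever the audience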

So what do we do in this case? To understand this, let us follow the figure below.

4 main clauses of Video Multicast

Looking at the clauses above, it is apparent that the content is huge but repetitive per user. So there needs to be a mechanism by which the same content is delivered everywhere: the content is sent to a location and then replicated there.

2.1.Drop and Continue Model
This replication of data is a primary feature of multicast and gives it an edge in the delivery of high-bandwidth services like live video.

To understand more let us see the figure below. 

Drop and Continue Transmission Architecture of Multicast

The figure above shows what is, in the real sense, called the drop-and-continue model in the field of multicast transmission. The bandwidth of the ring is determined not by the number of users but by the total amount of content that has to be transmitted. So 200 channels of 5 Mb/s each make the bandwidth equal to 1000 Mb/s, or 1 Gb/s. The operator has to build a 1 Gb/s ring in the system irrespective of the number of users. The topology can also be a mesh, or whatever else, based on the number of fall-back paths the operator desires.

In this case the operator is user-volume independent, and the bandwidth of the topology/infrastructure remains constant despite a rise in the number of viewers.

2.2 Drop and Continue Logic:

IPv4 multicast frames carry MAC addresses of the form 01:00:5E:XX:XX:XX in the MAC sublayer, and at Layer 3 they use the Class D IP addresses that are reserved for multicast. So each and every channel has a multicast IP address that is sent to the user. In the case of multicast there is no MAC learning, so at Layer 2 the traffic is always in flooding mode, without any restriction.
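For the curious, the Class D address of a channel maps onto its Ethernet multicast MAC in a fixed way: the prefix 01:00:5e plus the low 23 bits of the group IP. A small Python sketch (the group address is just an example):

import ipaddress

def multicast_mac(group_ip):
    """Map an IPv4 Class D group address to its Ethernet multicast MAC:
    the fixed prefix 01:00:5e plus the low 23 bits of the IP address."""
    ip = ipaddress.IPv4Address(group_ip)
    if not ip.is_multicast:
        raise ValueError(group_ip + " is not a Class D address")
    mac = 0x01005E000000 | (int(ip) & 0x7FFFFF)
    return ":".join("{:02x}".format((mac >> s) & 0xFF) for s in range(40, -1, -8))

print(multicast_mac("239.1.1.10"))   # 01:00:5e:01:01:0a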

Keep in mind that had this traffic been broadcast traffic, the restriction of Broadcast Storm Control (BSC) would apply, which is not the case for multicast traffic. Hence, seeing the multicast category of the MAC address, the BSC does not kick in and the traffic is flooded to all the ports in the same VLAN.

As you can see at the drop-and-continue port, once the traffic comes in, it is sent to the access and also replicated along the infrastructure.

So my friends, drop and continue is not magic; it actually exploits the most important "faulty" state of a switch, the flooding state. Exploiting the flooding property of a switching device in a controlled way leads to efficient transmission of multicast traffic.

Till then take care of yourself, and always remember: whenever you plan with science it is the best, but science only comes in its purest form when you decide to shun your ego.

Cheers, 

Kalyan




Sunday, May 4, 2014

Check your stress level friends; Life is not a reversible event.....

" A Profession is only meant to satisfy your technical and financial hunger, making an obsession out of a profession is not desirable."
                                                                                          Recent events......

My dear friends of the transmission fraternity,

After a long hiatus due to some occupational reasons, I take this opportunity to write a post on my blog again. I wanted today to discuss the emerging trends in video transmission and their future, but my thoughts and inputs were disturbed by some very uncalled-for news.

Friends, why is it that sometimes committed telecom executives who are there for the cause are not really rewarded, and not really understood, in the better span of their careers? Why is it that some people who have laid their hands to developing and maintaining a fault-free network are not very concerned about their own lives when it comes to health and fitness?

Readers of my blog, I want to stress one point here and make it very straight: "Machines are for men; men are not for machines." Please do not let the machines take a toll on, and control over, your life.

Last night I heard three such pieces of sad news, in which transmission experts of the age of 30-35 have fallen prey to heart and brain strokes and have had to bid a final adieu to this world at a very tender and productive age. Sad, very sad. But before coming to any advisory conclusions, let us understand why such a thing actually happens.

We may be advised a lot about taking care of our health, but let us face it: the stress levels in the field of telecom are immense and the pressures of the Indian telecom scenario are huge. Over and above that, the Indian telecom industry gives very few options for a person to have a parallel growth path. After visiting countries of the West, such as the US and those of Europe, and understanding their organisation structures, I have come to the conclusion that dramatic changes are required in the working model of an Indian telecom firm.

In India the process of quantifying experience is archaic and obsolete. Experience primarily means a person with a high amount of work-ex, even though he or she might have been sitting on a stone. So a person who has been working for, say, 15 years and has a big brand on his resume will be given higher preference than a person who is, say, 5 years into the industry but has a good knowledge base. This understanding leads to frustration and to the killing of ideas that a fresh mind could contribute towards the development of newer technologies and newer ways of revenue generation in the organization.

I am not saying seniority should not be respected, but there should be a parallel path of growth. Not every person can be a manager and not every person can be a technical expert, so there should be two parallel ladders of growth, as there are in the West.

This frustration often leads to higher stress levels at the age of 30-35, when a person is in fact in the last leg of proving his excellence but is sometimes not allowed to. Stress leads to such diseases.

Heart stroke, brain stroke, burn-out, poor metabolism and a messed-up lifestyle are some of the very common disorders. The person is so obsessed with work that he does not actually understand the risk to his health, and also messes up his family life.

My advice to my friends who are facing these is: go a bit slow. There is no fun in risking your life or health for something that carries only a material value for a small bit of time. If something untoward happens, it is you and your family who will suffer. The industry will, and must, go on at its own pace.

Please, please my friends, do not stretch too much. If a network goes down for some moments it does not lead to mass destruction; it is only a disruption of service, which can be fixed by patient and logical troubleshooting. This is not to say do not be committed, but also be health conscious. If you are at an age and in a position where night events are taking a toll on you, stop. Talk to your seniors; if they understand, well and good, and if they do not, look out for options. It is not the end of the world. After all, your knowledge and commitment are your assets only as long as you are healthy and living.

My sincere condolences to the family members of Mr Vivek Pillai and Mr Sanjay Gohel. I happened to meet Vivek once in a technical discussion; I did not have the opportunity to meet Sanjay. May the Almighty give their families the strength to bear this loss and always be with them. I am sad and bereaved that the telecom world has lost such young and promising talents.

This is a tremendous loss.

We as a fraternity, are always behind their families.

Amen.

Kalyan Mukherjee

Sunday, December 8, 2013

Why, How and Where to move to a Native Ethernet Back-haul?

" Transport evolution from one technology that is being phased out to another technology is a scientific thought process and is not governed by a raw abhorrence of the technology that is phasing out"

                                                 ------ A recent conversation output

My dear friends of the transmission fraternity, 

SDH (Synchronous Digital Hierarchy), a technology that governed the transport networks of operators for decades, is slowly but steadily phasing out. This is plainly due to the fact that services are moving more towards a packetized format rather than being TDM-oriented.

I NEED ADVANCED FEATURES:(Wrong Reason)

However, if this statement is true, one thing is also true: Ethernet is very well able to ride over SDH as EoSDH, with or without MPLS capabilities, making use of the existing back-haul infrastructure. This variant gives all the facilities of a packet-switched network in their full context, meaning that you can have BW sharing, instancing and QoS; the only difference is that the media is SDH based. If you go to a more native variant of Ethernet, then all the facilities of MPLS and Ethernet are likewise available over a more native architecture.

So if the decision to go from a TDM back-haul to a more native Ethernet back-haul is driven by the quest for advanced features like BW sharing, QoS and instancing, then the reason for the move is wrong and not justified. Probably the planning team could not plan the network well and could not exploit the next-generation features in the existing network, so they are doing nothing but adding more overheads to the network expense. Probably the management is also not concerned, because they might have the money, and the shine of a new back-haul curtains the logic to a large extent.


THERE ARE OVERHEADS IN EOSDH: (Wrong Reason)

One of the main reasons, other than advanced features, to move to a more native Ethernet back-haul may be that EoSDH, in its essence, adds a number of overhead bytes for the GFP encapsulation, thus supposedly affecting the throughput. The real equation of the throughput is explained in the figure below.

As we can see in the figure, of the total overheads that are added in the EoS architecture, the contribution of EoSDH GFP with GFP FCS is only 12 bytes. (The bytes marked * in the picture are added irrespective of whether you are going over native Ethernet or over EoSDH.)

The GFP FCS is optional, so the compulsory contribution of EoSDH in totality is only 8 bytes.

Now the question is does this actually affect the throughput of your system? The answer is no.

This is because of simple science. When an Ethernet packet or a stream of Ethernet frames traverses the line, there is an object called the IFG, the inter-frame gap, which may be continuous or may come after a burst of frames. Generally this IFG is 12 bytes, and it is in this gap that the extra overheads are accommodated. So if the traffic is shaped, continuous traffic, then moving it through EoSDH will not make any difference to the throughput, as the overheads are packed into the IFG.

So the overheads of EoSDH do not make any sort of difference to the throughput of the entire Ethernet line. If they did, so many point-to-point ILLs would not be running on the leased networks of service providers, who conventionally carried them over EoSDH only.
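To put numbers on this, here is a small sketch of the per-frame byte count in both cases, using the usual framing figures (8-byte preamble/SFD plus 12-byte IFG on a native Ethernet line, 8-byte GFP header plus optional 4-byte GFP FCS on EoSDH); treat it as an illustration of the argument, not a precise link-budget tool:

def on_path_bytes(frame_len=1000, gfp_fcs=False):
    """Per-frame byte count: native GbE adds an 8-byte preamble/SFD and a
    12-byte inter-frame gap, while GFP-F (EoSDH) strips those and adds an
    8-byte GFP header plus an optional 4-byte GFP FCS."""
    native = frame_len + 8 + 12
    eosdh = frame_len + 8 + (4 if gfp_fcs else 0)
    return native, eosdh

print(on_path_bytes())              # (1020, 1008) - EoSDH carries fewer bytes per frame
print(on_path_bytes(gfp_fcs=True))  # (1020, 1012) - still within the preamble + IFG budget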

SO WHAT MANDATES AN ETHERNET NATIVE BACK-HAUL

Ethernet Native back-haul is mandated by the following functions.

1. Rise in the Ethernet BW requirement vis-a-vis the common TDM BW requirement.
2. Last-mile devices like the BTS and Node B moving from a TDM handoff to an Ethernet handoff.
3. Requirement to reduce the form factor.

All three points can be addressed simultaneously by looking at the structure of a device that carries Ethernet over SDH.

The device that carries Ethernet over SDH actually utilizes dual capacity in the box. It uses a part of the Ethernet fabric and it also utilizes the TDM matrix capacity of the same box. This means that if the requirement is 1 Gb/s of BW, then the actual reservation done in the box is 1 Gb/s of the Ethernet fabric plus 1 Gb/s of the SDH matrix. The figure below explains the same.


So as we can see in the figure, the capacity is used in both the ETH and the TDM matrix. This leads to dual overheads in the box, and as the BW increases there will be more and more requirement to grow both the ETH fabric and the TDM matrix.

So, typically in the aggregation and core regions of transport, where the quantity of bandwidth is high, the proposition of carrying it over EoSDH may prove to be more expensive, as the rise in Ethernet BW also results in a rise in the required TDM matrix, which can be avoided by going native.

The dual increase of the two fabrics also mandates a rise in form factor and power usage, which is an unnecessary, unjustified loading of the OPEX.

Also, as the last-mile devices move more and more towards giving an Ethernet output, there will be more and more reason to take them natively, as this results in less consumption of the form factor of the box and less consumption of power.

Now I need not worry about my TDM matrix carrying my ETH traffic in the device, as the device is optimized to carry it natively as well.

SO HOW DO WE ACTUALLY DEDUCE THE BOX?

A box that carries Ethernet as Ethernet, with all the advanced features, and also has the capability to carry native TDM in its native form, with a TDM matrix limited to the quantity that is required, is the type we should be looking for. Of course, there should also be a facility to take EoSDH and TDMoE as and when required.

As shown in the figure, most of the ETH traffic is carried natively and so is the TDM traffic. There is need-based connectivity for optional EoSDH and TDMoE. So this box provides the full spectrum of transmission and gives the best of both worlds.

Needless to say, when there is such a division across the two domains of Ethernet and TDM, there is also a reduction in the form factor, as now each matrix can be weighed in its individual context and the dependency of one on the other is very small.

NOW THIS ANSWERS THE  WHY AND HOW ; FOR THE WHERE READ ON:

It is not necessary to go native everywhere, as this may lead to a demand for fiber cables everywhere. So one major thing that determines where to actually deploy this kind of box is very crucial.

The operator must look at the BANDWIDTH USAGE / UTILIZATION, as already explained in a previous blog article.

Utilization Based Provisioning

So the decision to go native or not actually depends on how much this utilization can be optimized. If we look clearly, the concentration of BW is more towards the aggregate and the core, so it is much more suitable to go native in the core and the aggregate so as to conduct an Ethernet off-load. The access, where there can be considerable over-provisioning, can continue in the EoSDH flavour until it tends to choke the matrix.

By doing this, the following things happen, which is good for the network.

1. OFF LOAD IN THE CORE.
2. UTILIZATION OF THE SAME DEVICES.
3. NOT MUCH LOAD ON THE CAPEX.
4. SERVICE CONTINUITY.
5. PAY AS YOU GROW.

Hence a more intelligent decision would be to follow the scientific norms of transmission planning and to introduce the native architecture only in the places where it is required.

This will allow gradual packetization without affecting your pocket.

Remember.....

While the company sales guys are actually responsible for the TOPLINE it is the network PLANNING that contributes the most for the realization of a good BOTTOMLINE. 

Till then all of you take care and Follow Science not Trend.

Cheers,

Kalyan

My e-mail, in case you want to mail me

Thursday, December 5, 2013

Provider Bridge and Provider Edge

"The essence of planning a packet network is to understand the flow of traffic first, not only from a level of 40,000 ft but also on ground......."

                                                                                               After meeting various people

My Dear Friends of Transmission Fraternity, 

Packet network planning is all about knowing the flow of traffic, the entire in and out of the traffic, while the transport is being planned. When we talk about a packet network we usually talk in terms of L2/MPLS/L3. L3 is not actually a transport solution but more of a terminal solution for the traffic, so we can consider L2 and MPLS to be the technologies that actually help in the efficient transport of the traffic.

Two basic terminologies come into being when realizing such transport systems.

1. Provider Bridge.
2. Provider Edge.

While these are the two major technologies involved in the realization of the packet network, we first need to understand one word very clearly: "PROVIDER".

So who or what is called as a provider?

A provider entity is any system that helps in the interchange and exchange of any traffic, be it unicast, broadcast or multicast.
This is a simple example and definition of a provider. It means that any network that ships traffic content from one point to another, with or without intelligence, may be referred to as a provider. A provider can be a telco or any network entity in the system that transports traffic.

This provider can act as a Bridge or as an Edge. The terms Bridge and Edge are used in the context of the way the traffic flows through the network elements and the location, or point, at which the MAC addresses are learnt.

L2 PROVIDER BRIDGE NETWORK:

An L2 Provider Bridge network is realized by connecting various L2 switches together in the network. As is known, L2 switches work on the principle of MAC learning and forward packets through the ports according to the way the source MAC addresses have been learnt. Every L2 switch entity is thus a MAC-learning entity in the system.

So if an L2 provider bridge network is selected, then at each and every point the traffic is forwarded after the MAC address is learnt.

Let us see the following example in the picture.


The figure shows how the traffic flow happens in a provider bridge. As we can see, at every transit point the traffic is bridged. Bridging means re-forwarding traffic by means of learning MAC addresses. So if traffic has to enter Node-1 and go out of Node-4 to a particular destination MAC address, the destination device's address must be learnt in all the NEs in the entire network so that the traffic can be bridged.

Each and every flow instance is called a bridge, and since the traffic always has to pass through these bridges as it traverses the network elements, these are called Provider Bridge elements.
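A minimal sketch (port numbers and MACs are made up) of what every provider bridge element has to do, and of why its vFIB grows with the number of end hosts behind the service:

class ProviderBridge:
    """Minimal 802.1 learning bridge: every node on the path learns source
    MACs and looks up destinations, so the vFIB of every transit element
    grows with the number of end hosts behind the service."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.fib = {}                           # mac -> port (the "vFIB")

    def receive(self, src, dst, in_port):
        self.fib[src] = in_port                 # learn the source address
        out = self.fib.get(dst)
        if out is None:
            return self.ports - {in_port}       # unknown destination: flood
        return {out}                            # known destination: bridge it

b = ProviderBridge(ports=[1, 2, 3])
print(b.receive("aa:aa", "bb:bb", in_port=1))   # flood: {2, 3}
print(b.receive("bb:bb", "aa:aa", in_port=2))   # learned: {1}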

Limitations of a Provider Bridge network:


1. The MAC address has to be known at every point, so there cannot be any pure point-to-point service in the network that does not require MAC learning.
2. As the customers at one of the endpoints increase, the vFIB capacity of all the network elements has to be upgraded.
3. The whole system, and all the NEs in the network, have to be upgraded in configuration as the number of users increases.
4. When more NEs are introduced in the transit path, there is a considerable addition to the delay/latency that the traffic experiences.
PROVIDER EDGE NETWORK:

A provider edge network tends to eliminate all the limitations found in the provider bridge network. The provider edge network works on the principle of tunneling traffic from one point to another. So if a packet is to be sent from, say, Node-1 to Node-4, a tunnel is created from Node-1 to Node-4 and the packet is put onto it.

The picture below will make this clear:


As shown in the picture, this is a mechanism where the traffic is tunneled across the intermediate nodes so that the intermediate nodes do not need to know about the MAC addresses at all.

This principle makes the network more scalable and more agnostic to MAC learning. The MAC learning concept is used if and only if there are multiple endpoints in a service. The intermediate points are not part of the service integration but only points where the traffic passes in and out.

Since the tunnel is a point-to-point entity, no realization of MACs is needed in the transit and the traffic is sent end to end.

This can be done without learning the MAC. What actually happens at the transit point is that the traffic enters with one label and goes out with another label.

That is the reason why the tunnel is also called an LSP (Label Switched Path).

The label switched path can have its own protection, as we discussed in the MPLS section.
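To show how little a transit point has to do once the LSP exists, here is a minimal label-switching sketch (labels and port names are invented for illustration):

class TransitLSR:
    """Minimal transit node of an LSP: it only consults a label map
    (in-label -> out-port, out-label) and never inspects customer MACs."""
    def __init__(self):
        self.lfib = {}                              # in_label -> (out_port, out_label)

    def cross_connect(self, in_label, out_port, out_label):
        self.lfib[in_label] = (out_port, out_label)

    def forward(self, in_label, payload):
        out_port, out_label = self.lfib[in_label]   # label swap, no MAC lookup
        return out_port, out_label, payload

lsr = TransitLSR()
lsr.cross_connect(in_label=100, out_port="towards-Node-3", out_label=200)
print(lsr.forward(100, "customer frame"))           # ('towards-Node-3', 200, 'customer frame')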

So what should be remembered by my friends of the Tx Fraternity:

1. Discuss and realize how you want to scale up the network.
2. Understand and think about which design is suitable, PB or PE. It is not that PB should always be rejected; in smaller networks PE can be a major overkill.
3. Know the service, and whether it needs MAC learning or not. Unnecessary bridging can kill processes in the card.


Every process in telecom transmission is a function of "Science" and not a result of "Trend". Remember, you can be a lamb and follow the trend like following the herd, or you can be scientific and build your network so well that it actually sets the trend.

In the war between Science and Trend.... Science has and will always win. 

Till then, 

Cheers, 

Kalyan

Saturday, September 14, 2013

Efficient BW Planning and Provisioning in Data Networks

THE MAIN ISSUE IN TRANSFORMING FROM A CLASSICAL TDM NETWORK TO A MORE COMPREHENSIVE NEXT-GENERATION NETWORK THAT HAS A HOMOGENEOUS MIXTURE OF TDM, PACKET AND OPTICS IS THE INABILITY TO MODIFY AND MOULD ONE'S PERCEPTION....

From my recent experiences (Some bad... Some good...Some "horrible")


My dear friends of the Transmission Fraternity, 


Good day to all of you, and I wish you a very happy weekend ahead. In India this is the time of Ganesh Puja and, like any other colourful festival, it is of very high importance to the social and religious fabric of India. But wait a minute: how does Lord Ganesha figure in our field of telecom? Confused? Well, not for long.

Today let us understand the logic of celebrating the Ganesh festival and its significance to telecom. Planning and provisioning in telecommunications is much about taking the right decision in the most economical manner to optimize the network, so that the delivery and then the sale of the service sits more on the profitable side of the business than being just an obligation to deliver it. This is just like how Lord Ganesha, with his intelligent ways of solving problems, salvages any situation without resorting to much overhead or conflict.

A person who designs the network in such a way that it can generate long-term revenue without much investment of physical resources is a planner, a more scientific one; one who is only doing it for the sake of delivering the service is more of an implementer, or simply a "copy & paste" person. If the network is not bringing profitable revenue, then for sure the planning of the network has been wrong. The more quickly it brings profitable revenue, the more efficient the planning has been. Just like Lord Ganesha's ways, a planning engineer should be the revenue optimizer and the reliever of the technical pains that an organization may foresee in delivering a service.

Let us take an example for understanding this:


There is a network of Wi-Fi and mobility where there are 5 access nodes in the access ring, with one NodeB each and one Wi-Fi access point each. The peak BW of a NodeB is, say, 28 Mb/s and that of the Wi-Fi is around 44 Mb/s. An access network needs to be planned for the aggregation of these 5 nodes. How do we do it?

There may be several ways of implementing this and several ways to deliver the services. Let us see the conventional (less profitable) and the non-conventional (more profitable) ways of delivering this.

CONVENTIONAL WAY:

The conventional way is the pure transmission way, where every bit of BW is important, whether it is being used or not.

So the following calculation is used. 

5 Node Bs of 28 Mb/s = 5x28 = 140Mb/s 
5 APN of 44 Mb/s = 5x44 = 220 Mb/s 

Total = 140 + 220 Mb/s = 360 Mb/s

Conventional Planner of a good nature :

On an SDH back-haul network this would be taken as a full 360 Mb/s of provisioning, which would mean a ring of 360 Mb/s = 8xVC-3 or even 3xVC-4. Hence a ring of 8xVC-3 is established in the entire EoSDH system, with or without MPLS, and the following logical network is realized.


As seen in this picture, each access node has two drops: one for the Wi-Fi and one for the NodeB. The total bandwidth expended in the ring is about 400 Mb/s, or 8xVC-3. This is more than the sum of the peak BW of all the sites.

So the planner here is assuming the following, and this is the mistake:

> In this access ring there will always be peak traffic from all the NodeBs and all the wireless APNs, all the time.
> There will not be any treatment of traffic as to which should be high priority and which should be low priority.
> Both services get dedicated BW in the transport infrastructure, whether they are using it or not.

However, there are some good things the planner has done which classify him as a better data planner with potential.

> The planner has used a small number of WAN ports.
> The planner has used instancing, putting each service in a different SVLAN or instance to avoid any kind of crosstalk.
> There is proper isolation between two NodeBs and between two Wi-Fi APNs in the network.
> There is proper isolation of the NodeB and the APN within the same access node as well.

I am still assuming that the planner has done this; if yes, at least the basics of data planning are clear to the planner.

NON CONVENTIONAL WAY:

The non-conventional way is more data-centric than transport-centric. To understand it, let us recall one thing. We had the BTS and we had the BSC. Each BTS had around 2 to 4 E-1s, which means each BTS had the capacity to run around 120 voice channels at the same time; with some amount of compression this could accommodate 400 simultaneous users. So if an area had a larger number of subscribers, did we increase the carriers? The answer was no.

A thumb-rule calculation was that for every 1000 subscribers you could spend an E-1. This factor was even higher in the case of mobility, as there was a good amount of compression and the probability of a mobile subscriber leaving a site and joining another was very high.

The only thing was that this calculation was done by the engineers at the switching centres, who had an idea of the traffic quantum at a cell site and would pass the figures on to transmission.

This was when you were considering a 2G kind of deployment. However, when the deployment moves to a more packet-based access (where the last-mile device has an Ethernet handoff), this flexibility of BW allocation belongs not to the MSC or the RNC but to the transport. This is because the handoff is now not a fixed multiple of 2 Mb/s but a combined port of 100 Mb/s or 1 Gb/s.

Not all sites will transmit the peak BW at the same time (Concurrence):

The non-conventional planner understands this philosophy of concurrence and tries to reach a rational BW for the access ring. The planner uses this concurrence phenomenon to arrive at a conclusive BW, ensuring that the high-availability services are fully guaranteed and there is a good amount of availability for the lower-class services.

Thus the planner initially applies a sensible ratio and normalizes it to the nearest VC-4 or VC-3.

In this case the planner sees that the total peak BW of the entire service is 360 Mb/s; however, the actual amount of traffic will never be at that level.

Contention and Prioritization (What is important and what is share-able)

Out of this 360 Mb/s, the signalling and synch traffic (mandatory and always flowing) accounts for 20 Mb/s in the entire ring (2 Mb/s per NodeB or APN per site). The rest is largely prioritized traffic.

In 3G there is voice per NodeB of around 4 Mb/s, which gives another 20 Mb/s of committed traffic.

So 20 + 20 = 40 Mb/s of this traffic is totally committed and has to be provided on the line irrespective of anything else.

The remaining 320 Mb/s is data, out of which some is priority data and some is non-priority (best-effort) data.

The figure below explains the entire BW profiling.


This is the figure the planner derives for the entire BW distribution. A high-end packet service will contain many streams of data, some of which are real-time and some non-real-time. The non-real-time streams are the ones not very susceptible to congestion, as there is re-transmission; the only thing affected in such cases is the throughput. Hence these services are given the maximum over-subscription. In this figure the yellow portion is attributed to the non-real-time data; the maximum over-subscription is applied to this portion only.

Let us see in the figure below how the BW is attributed and contented in the network. 


So due to this now the BW is provided in the following manner. 

20Mb/s + 20Mb/s + 160*0.5 + 160*0.1
= 20 + 20 + 80 + 16
= 136 Mb/s 
= 1XVC-4 or 3XVC-3

By this process a total back-haul saving of 5xVC-3 is achieved, which is roughly equal to 250 Mb/s. So the planner is effectively saving 250 Mb/s of physical resources while still delivering a good service of around 360 Mb/s peak on it. As the number of sites increases, the planner can rationally increase the resources.
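The same contention sum in a few lines of Python, so you can play with the ratios yourself (all figures are this example's assumptions, not fixed rules):

def ring_bw(sites=5, signalling_per_service=2, voice_per_nodeb=4,
            rt_share=0.5, be_share=0.1, peak_total=360):
    """Committed signalling + committed voice, then the remaining data split
    half/half into real-time (kept at 50%) and best-effort (taken at 10%)."""
    signalling = sites * 2 * signalling_per_service   # NodeB + APN per site
    voice = sites * voice_per_nodeb
    data = peak_total - signalling - voice            # 320 Mb/s in this example
    return signalling + voice + (data / 2) * rt_share + (data / 2) * be_share

print(ring_bw())   # 136.0 Mb/s -> fits in 1 x VC-4 (or 3 x VC-3)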

So the final figure is. 


Now in this case when the Voice is of low usage the BW is apportioned to the Packet data. 
There is good prioritization. 
There is of- course good instancing. 

In short... the planner is not only thinking about service delivery but also about profitable service delivery. Due to the lower usage of back-haul BW there will be less usage of physical resources, thus increasing the longevity of the network elements and milking more money from them.

Such planners make the company profitable and always want more RPU with less investment.

ARCHAIC AND OLD SCHOOL PLANNERS:

While we have discussed good planners and very good planners, who try to benefit the network and some of whom actually turn the network into a profit engine by increasing its longevity, there are some who are physical-resource guzzlers and have no care about the amount of physical resource they use. They plan the network merely for the sake of service delivery and have no interest in network profitability. They are a pain to the sales personnel in the operator industry, as they totally reduce the profitability of the network and thus bring no substantial earnings from the service delivery. They are happy just pushing the traffic through and getting a ping from one side to the other. The sales persons of such operators should pray to Lord Ganesha regularly, as each and every order marks an overall reduction of profitability in the organization.

For the case above, the old school of planning would realize the network in the following manner.

From each access site they would pull two EoSDH trails, one for the NodeB traffic and one for the Wi-Fi APN. 

This results in a logical topology (though only the good planners would understand the phrase "logical topology") like this.


As shown in the figure above let us now calculate the number of VCs that are being used. 

For one NodeB: 14 VC-12 main and 14 VC-12 for protection, leading to 28 VC-12 per site. So for 5 such NodeBs this is 28x5 = 140 VC-12s = 280 Mb/s.

For one Wi-Fi AP: 1 VC-3 main and 1 VC-3 for protection = 2xVC-3 = 100 Mb/s. So for 5 sites this is 5x100 Mb/s = 500 Mb/s.

Total for all provisioning = 780 Mb/s (and they still think they will make a profit!)

Also let us now look at the number of ports. 

At each access location 2 WAN ports are used, and at the aggregate 2 for each access site; this makes 10 WAN ports at the aggregate as opposed to only 2 in the case of good planning.

So this means 10 WAN ports, and as new access sites are added on the same ring this means more and more WAN ports at the aggregate.

So within a few days the aggregate module is underutilized but its ports are exhausted.

The planning is done as if the planner had some kind of animosity towards the rising profits of the organization and would like to cut them off by any means.


Well, this was an example of the efficient, the very efficient and the not-at-all-efficient ways of deploying the same service.


I leave it up to you, my friends, as to what you want to become. I would also urge the sales people in the operator industry to rethink and revisit the technical aspects of their networks.

A telecom service provider is a business where the revenue (top line) is surely an effort of the sales people, but the profit (bottom line) depends on how good the planners are in the conception and implementation phases.

Remember, making more trails doesn't bring money; running traffic on them does.

May Lord Ganesha Bless us all on that note. 

Cheers!!!!

Kalyan

Sunday, September 8, 2013

Viability of MPLS in a traditional network

My Dear Friends of the Transmission Fraternity,

I am sorry, I had to go into seclusion again for various reasons: a busy schedule and some very urgent professional commitments. But here I am, and today I would like to discuss one of the burning topics that is taking the telecom transmission industry by storm.

MPLS (Multi Protocol Label Switching)

I know this is a kind of deviation from the regular discussions on RSTP that I promised, but trust me, this is important. Why is this important? Well, because most of the reasons being used to use (or misuse) this technology, especially in India, are totally wrong and trashy.

So now, with a very open mind, let us understand the viability of MPLS in a network that is traditionally a transmission network based on TDM. Can you realize MPLS on it?

If you ask this in the market as an operator trying to enter the data segment for the first time, you will get very different answers. Some yes and some no. Some people will tell you that MPLS can never run on a TDM network; some will tell you that yes, if the end devices are routing devices and the TDM acts only as the overlay, you can realize it. Well, to tell you the truth, these are all vague answers to your pertinent queries. In fact, these are not "honest" answers at all.

To substantiate my point, I would like to draw your attention to this Wikipedia link about MPLS. Neutral, short, but enough for an operator or a beginner to understand in order to evolve his/her network.

Wikipedia Link describing MPLS

As correctly stated in that link, MPLS is a media-independent technology, working between the L2 and L3 layers of the OSI model.

So this answers your vital question: MPLS can actually be realized on the boxes that you have already purchased to run the TDM network. It can co-exist with a TDM/ATM/FR network that is traditionally old and can also evolve towards the new networks that are more Ethernet-oriented.

The million dollar question is: how does this happen?

Well, as described in my previous blog posts, if you have read them, there is a concept of EoSDH, which stands for Ethernet over SDH. An EoSDH trail is a clear infrastructure entity, or a carrier at Layer 1, that encapsulates the Ethernet traffic over the SDH media. This encapsulation happens by means of GFP (Generic Framing Procedure). The EoSDH trail, or cross-connect, acts as a simulation of an Ethernet link over the SDH back-haul. Many people, as I said before, mistake this for an SDH trail in totality; however, they are wrong, as the EoSDH trails have 100% of the properties of Ethernet and the basic properties of SDH built in. So it should be treated more like an embedded Ethernet link than a pure SDH trail.

When you do not have the fibers to make a dark-fiber Ethernet link, or you have very little requirement of Ethernet bandwidth, not enough to justify spending a dark fiber on the case, then EoSDH is a big tool to optimize your network. Here SDH is used as a carrier and not as a service, and the actual service lies on the Ethernet plane.

When this EoSDH trail also has the abilities of label encapsulation and traffic engineering, then it becomes a trail that can be used to run MPLS on the existing box with the existing SDH capabilities.

So how is an MPLS-enabled EoS different from a normal EoS?

The infrastructure created by a normal EoS using only Ethernet is an infrastructure subject to the L1 and L2 rules. Under these conditions most of the devices in the Ethernet layer act as a Provider Bridge. A Provider Bridge is an entity that forwards the traffic based on MAC learning, or MAC lookup, at every point, whether it is the start, end or transit point.

So in a provider bridge kind of scenario, all the L2 devices have to remember the MAC tables of the entire topology for each and every service. As the number of devices increases and the services grow, the number of MAC address entries in the forwarding tables also goes up.

This is mitigated by making VPNs (Virtual Private Networks) and VSIs (Virtual Switching Instances) in the network. A VSI is a mechanism that can segregate the services from each other and create a virtual path; however, what the VSI alone cannot do is give a dedicated, traffic-engineered path for each and every service across two points in a ring or a network without MAC learning in the transit nodes.

Below is a picture of how a provider bridge would work with traffic flowing from point A to point D. For simplicity I have taken the path of the traffic as a linear L2; we will talk about protections later.



As seen in this picture, at every transit location there is a need to learn the MAC address. So the traffic from A to D needs to have vFIB learning at A, B, C and D. Similarly, traffic from A to C needs it at A, B and C.

This method is called bridging. At every point the traffic is forwarded, or bridged, on the basis of the source and destination addresses using the MAC table. MAC learning is the essence of learning addresses and forwarding traffic.

Needless to say, as the number of transit nodes increases, so does the number of lookup locations and so does the number of learning instances.

An MPLS-enabled scenario eliminates this. This is shown in the next figure.


As you can see in this figure, there is one more entity, in yellow. This is a logical path defined from A to D. It is called an LSP (Label Switched Path) or, more colloquially, a tunnel. Due to this tunnel, the traffic which was subjected to a VSI/VPN in A and in D between two ports is now only between one access port and the tunnel.

So the VPN consists, on the A side, of the access port in red and the tunnel in yellow, and the same on the D side. This means no switching instances in B and C.

So the MAC learning is only required at points A and D, typically the edges; thus points A and D are now referred to as PE, or Provider Edge, while B and C only look at the labels and merely act as transit points without looking at the MAC.
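A rough scaling comparison of the two models (the host and node counts are arbitrary): in the provider bridge case every node on the path learns every host MAC, while with an LSP only the two edges do:

def fib_entries(hosts, nodes_on_path, provider_edge=False):
    """In a provider bridge network every node on the path learns every host
    MAC; with an LSP only the two edge nodes (the PEs) do."""
    learning_nodes = 2 if provider_edge else nodes_on_path
    return hosts * learning_nodes

print(fib_entries(hosts=5000, nodes_on_path=6))                      # 30000 entries network-wide
print(fib_entries(hosts=5000, nodes_on_path=6, provider_edge=True))  # 10000 entries, edges only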

Advantages of this:

Less latency and more scalability. This also ensures more end-to-end BW integrity, which I hope we will see in our next blog posts.

This also helps to realize a more point-to-point architecture in traffic forwarding, without the investment of physical resources, thus saving time and money.

What do my friends need to remember?

It is not essential to replace the whole installed base that you purchased with your hard-earned money just because you want to include MPLS. Understand the purpose of MPLS before jumping onto it.

MPLS is a technology which is agnostic to device types. The MPLS algorithms and software can be put into any kind of device, be it an L2 switch or an L3 router.

MPLS is a technology that deals with multi-protocol support, so if there is some kind of argument about IPv4 and IPv6, be sure this is not about MPLS, as MPLS does not care which internal protocol is being forwarded. Many people will flash their papers and give you this dope.

At the end of the day it is the ETHERNET SERVICE that actually carries the traffic, so this needs to be given the utmost attention. MPLS is an underlying technology that enhances the delivery of the SERVICE.



 
 
 
 
Till then .... Have Fun....
 
Cheers,
 
Kalyan