Ethernet: what on earth is it doing in a back-haul environment?
Many of my transport friends ask, “Why do we, after all,
need Ethernet in the transport back-haul environment? Why on earth is everything
changing to Ethernet? We were well off with the traditional TDM and were
well versed in it. All we needed to do was provision some trails, create
some cross-connects, and then boom, the switch guy used to do things.”
The transmission planning team used to plan the fiber routes
and the trails, and the NOC used to provision them. For
troubleshooting, well, just give a loop to check. Loop, break... loop, break... loop, break,
and find the problem. So why today do we have to have this complicated thing
called Ethernet?
Well, my dear readers, every child has to grow, and so
will your network. As the customers keep increasing, their demands also
increase. As they see more things happening outside our country, they
want the same things here. This ushers in a requirement for heavy BW and, of
course, a very clever way to engineer it.
TDM classically works on a point-to-point model
without any BW sharing. That is to say, if you have 10 sites in a ring, each
having a drop of 10 Mb/s, then when some site is not using its BW, the
others cannot momentarily use up that same BW. There is no sharing
of BW: the access is hard-coded to 10 Mb/s, and even when BW is needed
at some other place while one site sits idle, it cannot be provided there.
Please look at the figure below for a complete
understanding.
As we can see in this figure, there are 5 locations
in the STM-1 ring that are parented to the aggregate location. All of these
locations are Mux (Multiplexer) locations.
For each location in the traditional TDM deployment, there is a
mapping of 5xVC-12 to each location.
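As a quick sanity check on that mapping (assuming each VC-12 carries one E-1 payload of 2.048 Mb/s), the 5xVC-12 drop works out to roughly the hard-coded 10 Mb/s access mentioned earlier:

```python
E1_PAYLOAD_MBPS = 2.048  # one VC-12 carries one E-1 payload
VC12_PER_SITE = 5

drop_per_site = VC12_PER_SITE * E1_PAYLOAD_MBPS
print(f"Hard-coded drop per site: {drop_per_site:.2f} Mb/s")  # 10.24 Mb/s
```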
So what is the limiting factor?
1. Suppose Site-1 has a heavy requirement, say around 14 Mb/s at some instant, while Site-3 is using only 4 Mb/s. The spare 6 Mb/s from Site-3 cannot be dynamically allocated to Site-1 as a temporary lease; it can only be moved by manual provisioning.
2. As the committed BW requirement increases across the network, the number of physical interfaces also increases to a great extent. Hence, if tomorrow the need is 22 Mb/s per site, then it has to be clubbed together from 11 E-1s, and that too not in a shareable mode. Scalability is actually limited because of this interface limitation.
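The interface arithmetic above can be checked with a quick sketch (using the standard E-1 rate of 2.048 Mb/s):

```python
import math

E1_RATE_MBPS = 2.048  # line rate of one E-1 circuit

def e1s_needed(site_demand_mbps: float) -> int:
    """How many E-1s must be bundled to carry a site's committed BW."""
    return math.ceil(site_demand_mbps / E1_RATE_MBPS)

print(e1s_needed(22))  # -> 11 E-1s for a 22 Mb/s drop
print(e1s_needed(50))  # the interface count just keeps climbing with demand
```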
Due to these two major factors, there was a need for
Ethernet in the network. There needed to be a device connected to
these first-mile wireless locations that is actually able to handle huge BW.
And this, my friends, is at present possible only with an Ethernet device. Hence the access interface had to be an Ethernet
interface, which can scale up to 10 Gb/s baseband. This addresses the second
of the two requirements above.
What about BW sharing,
then? How does Ethernet help in doing this?
Well, the basis of data services is the fact that not
everyone is using the same BW at the same time. So at any point in time the entire
capacity of the ring can be floated across. In Ethernet there is no hard
coding of BW provisioning like there is in TDM. What we call a trail in TDM
actually boils down to a service in Ethernet.
However, this service has a special characteristic: it
can actually do dynamic BW sharing. That is to say, the entire
BW of the ring is actually available to all.
A typical Ethernet configuration is shown in the figure
below.
In this figure, as you can see, there is no hard-coded
VC-12 mapping for the traffic. There is definitely a service entity, but it is
not a trail. It is called an Ethernet service, and it has two
parameters here:
CIR: Committed Information Rate, the
amount of BW that the site is always guaranteed to get, irrespective
of whatever happens.
EIR: Excess Information Rate, the amount of
extra BW that the site can achieve, provided the resources in the network are
not otherwise being utilized.
This means that if we create services like this for all
5 sites, each of them may get up to 100 Mb/s at peak. However, in case all the sites are
using BW at the same time, each is still guaranteed at least 10 Mb/s. So there is no need for frequent BW grooming.
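A minimal sketch of how such CIR/EIR sharing behaves (the numbers and the simple greedy sharing order are assumptions for illustration, not any vendor's actual scheduler):

```python
RING_CAPACITY = 100  # Mb/s, the whole ring's shared capacity

def allocate(demands, cir, eir, capacity=RING_CAPACITY):
    """Per-site allocations under a simple CIR/EIR sharing model."""
    # Step 1: every site gets min(demand, CIR) -- the guaranteed part.
    alloc = [min(d, cir) for d in demands]
    spare = capacity - sum(alloc)
    # Step 2: hand spare capacity to sites still wanting more,
    # never exceeding CIR + EIR per site.
    for i, d in enumerate(demands):
        extra = min(d - alloc[i], cir + eir - alloc[i], spare)
        if extra > 0:
            alloc[i] += extra
            spare -= extra
    return alloc

# One busy site bursts while the others idle:
print(allocate([90, 2, 2, 2, 2], cir=10, eir=90))  # -> [90, 2, 2, 2, 2]
# When everyone peaks, each site still keeps its 10 Mb/s CIR:
print(allocate([100] * 5, cir=10, eir=90))  # -> [60, 10, 10, 10, 10]
```

In the second call the leftover 50 Mb/s goes greedily to the first site; a real scheduler would share the excess more fairly, but the CIR guarantee is the point being illustrated.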
In high-end data requirements, one thing is to be taken as
a postulate: “Not everyone is going to peak at the same time.”
This basic assumption actually forms the basis of
deploying Ethernet in back-hauls that are more data-rich, vis-a-vis 3G, LTE,
HSIA, EvDO, retail broadband, etc.
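The postulate can even be put in rough numbers (the 30% busy figure is purely an assumed illustration): if each of the 5 sites peaks independently, the chance of all of them peaking together is tiny.

```python
# Assumed for illustration: each site is at peak load 30% of the time,
# independently of the others.
p_peak = 0.30
sites = 5

p_all_peak = p_peak ** sites
print(f"P(all {sites} sites peak together) = {p_all_peak:.4%}")  # 0.2430%
```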
The main thing to remember:
The main thing to remember about classical TDM is that
one deals with multiplexers. The work of a multiplexer is
actually to add and drop traffic from lower tributaries to higher rates in a
framed manner. Hence, TDM provisioning of transport is not essentially
concerned with the utilization of the traffic. So in a classical TDM
environment, the parameters most essential for the transmission guys
concern the health of the path with respect to errors and faults.
However, with the onset of data services and Ethernet
deployment, the multiplexing environment shifts to a switching environment.
The transport guy is now also responsible for switching traffic, which was
previously the responsibility of the switch guy. This is because the elements in
the transport network are no longer multiplexers; they are actually switches, which measure,
meter, and also show the transmission guys the utilization of the BW.
This helps the transmission guy, especially the planner,
to actually monitor the traffic and provision as per requirement, thus
judiciously saving resources while delivering high-BW services at the same time.
This is the single most important thing for a TDM
transmission guy to understand while he and his company are moving towards an
Ethernet back-haul.
So friends, do not be paranoid about this change... just accept it as a new technology.