Sunday, December 8, 2013

Why, How and Where to move to a Native Ethernet Back-haul?

" Transport evolution from one technology that is being phased out to another technology is a scientific thought process and is not governed by a raw abhorrence of the technology that is phasing out"

                                                 ------ A recent conversation output

My dear friends of the transmission fraternity, 

SDH (Synchronous Digital Hierarchy), the technology that has governed operators' transport networks for more than three decades, is slowly but steadily being phased out. This is plainly because services are moving towards a packetized format rather than remaining TDM oriented.

I NEED ADVANCED FEATURES: (Wrong Reason)

However, while this statement is true, it is equally true that Ethernet can ride very well over SDH as EoSDH, with or without MPLS capabilities, making use of the existing back-haul infrastructure. This variant gives all the facilities of a packet switched network in their full context, meaning you can have BW sharing, instancing and QoS; the only difference is that the underlying media is SDH. And if you go to a more native variant of Ethernet, all the facilities of MPLS and Ethernet are equally available over that more native architecture.

So if the decision to move from a TDM back-haul to a more native Ethernet back-haul is actually a quest for advanced features like BW sharing, QoS and instancing, then the reason for the movement is wrong and not justified. Probably the planning team could not plan the network well and could not exploit the next-generation features of the existing network, so they are doing nothing but adding more overheads to the network expense. Probably the management is not concerned either, because they may have the money, and the shine of a new back-haul curtains the logic to a large extent.


THERE ARE OVERHEADS IN EOSDH: (Wrong Reason)

One of the main reasons, other than the advanced features, to move to a more native Ethernet back-haul may be that EoSDH in essence carries a lot of overheads that have to be added for the GFP encapsulation, thus affecting the throughput. The real throughput equation is explained in the figure below.

As we can see in the figure, of the total overheads added in the EoS architecture, the contribution of the EoSDH GFP encapsulation with the GFP FCS is only 12 bytes. (The bytes marked * in the picture are added irrespective of whether you are going on native Ethernet or on EoSDH.)

The GFP FCS is optional, so the compulsory contribution of EoSDH in totality is only 8 bytes.

Now the question is: does this actually affect the throughput of your system? The answer is no.

This is simple science. When an Ethernet packet, or a stream of Ethernet frames, traverses the line, there is a gap between frames called the IFG (Inter-Frame Gap), which may occur between every frame or after a burst of frames. This IFG is generally 12 bytes, and it is here that the extra overheads are accommodated. So for shaped, continuous traffic, movement through EoSDH makes no difference to the throughput, as the overheads are packed into the IFG.

So the overheads of EoSDH make no difference whatsoever to the throughput of the Ethernet line. If they did, so many point-to-point ILLs would not be running on the leased networks of service providers, who have conventionally carried them over EoSDH.
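To make the arithmetic concrete, here is a minimal sketch of it in Python. The overhead figures are the standard ones (7-byte preamble, 1-byte SFD, 12-byte IFG, 8 bytes of mandatory GFP-F headers, an optional 4-byte GFP FCS); the function names are mine, purely for illustration.

```python
# A minimal sketch of the overhead arithmetic above. The constants are
# the standard figures; the function names are illustrative only.

PREAMBLE, SFD, IFG = 7, 1, 12   # bytes surrounding every frame on the wire
GFP_HEADERS, GFP_FCS = 8, 4     # GFP-F core + payload headers, optional FCS

def native_line_bytes(frame_bytes):
    """Bytes one frame occupies on a native Ethernet line."""
    return PREAMBLE + SFD + frame_bytes + IFG

def eosdh_extra_bytes(with_fcs=False):
    """Extra bytes EoSDH (GFP-F) adds per frame."""
    return GFP_HEADERS + (GFP_FCS if with_fcs else 0)

for frame in (64, 512, 1518):
    extra = eosdh_extra_bytes()
    # The mandatory 8 B of GFP fit inside the 12 B IFG, so a shaped,
    # continuous stream loses no throughput when carried over EoSDH.
    print(f"{frame:>4} B frame: native line {native_line_bytes(frame)} B, "
          f"EoSDH adds {extra} B (fits in the IFG: {extra <= IFG})")
```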

SO WHAT MANDATES A NATIVE ETHERNET BACK-HAUL?

A native Ethernet back-haul is mandated by the following factors:

1. A rise in the Ethernet BW requirement vis-à-vis the common TDM BW requirement.
2. Last-mile devices like the BTS and Node B moving from a TDM handoff to an Ethernet handoff.
3. The requirement to reduce the form factor.

All three points can be addressed simultaneously by looking at the structure of a device that carries Ethernet over SDH.

A device carrying Ethernet over SDH actually utilizes dual capacity in the box. It uses a part of the Ethernet fabric, and it also consumes TDM matrix capacity in the same box. This means that if the requirement is 1 Gb/s of BW, the actual reservation made in the box is 1 Gb/s of the Ethernet fabric plus 1 Gb/s of the SDH matrix. The figure below explains this.


As we can see in the figure, capacity is consumed in both the ETH fabric and the TDM matrix. This places dual overheads on the box, and whenever the BW increases, both the ETH fabric and the TDM matrix have to be grown with it.
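As a back-of-the-envelope check, here is a small Python sketch of that dual reservation; the mode names and the 1 Gb/s demand are hypothetical, chosen only to illustrate the point.

```python
# A sketch of the dual reservation an EoSDH box makes; illustrative only.

def fabric_reservation(eth_demand_gbps, mode):
    """Capacity reserved in each fabric for a given Ethernet demand."""
    if mode == "eosdh":
        # EoSDH books the demand twice: once in the Ethernet fabric and
        # once again in the TDM (SDH) matrix that transports it.
        return {"eth_fabric": eth_demand_gbps, "tdm_matrix": eth_demand_gbps}
    # Native Ethernet touches the Ethernet fabric only.
    return {"eth_fabric": eth_demand_gbps, "tdm_matrix": 0.0}

print(fabric_reservation(1.0, "eosdh"))   # {'eth_fabric': 1.0, 'tdm_matrix': 1.0}
print(fabric_reservation(1.0, "native"))  # {'eth_fabric': 1.0, 'tdm_matrix': 0.0}
```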

So in the aggregation and core regions of transport, where the quantity of bandwidth is typically high, the proposition of carrying it on EoSDH may prove more expensive, as the rise in Ethernet BW also results in a rise in the required TDM matrix, which can be avoided by going native.

The dual growth of the two fabrics also mandates a rise in form factor and power usage, an unnecessary loading on the OPEX that is not justified.

Also, as the last-mile devices move more and more towards giving an Ethernet output, there will be a growing requirement to take them natively, as this results in less consumption of the box's form factor and less consumption of power.

I then need not worry about my TDM matrix carrying my ETH traffic, as the device is optimized to carry it natively as well.

SO HOW DO WE ACTUALLY DEDUCE THE BOX?

A box that carries Ethernet as Ethernet with all the advanced features, and that can also carry TDM in its native form with a TDM matrix limited to the quantity actually required, is the type we should be looking for. Of course, there should also be a facility to take EoSDH and TDMoE as and when required.

As shown in the figure, most of the ETH traffic is carried natively, and so is the TDM traffic. There is need-based connectivity for optional EoSDH and TDMoE. So this box provides the full spectrum of transmission and gives the best of both worlds.

Needless to say, when there is such a division across the two domains of Ethernet and TDM, there is also a reduction in the form factor, as each matrix can now be sized in its individual context and one's dependency on the other is very small.

NOW THIS ANSWERS THE WHY AND HOW; FOR THE WHERE, READ ON:

It is not necessary to go native everywhere, as this may create a demand for fiber cables everywhere. So determining where to actually go for this kind of box is crucial.

The operator must look at the BANDWIDTH USAGE/UTILIZATION, as explained in a previous blog article:

Utilization Based Provisioning

So the decision to go native or not actually depends on how far this utilization can be optimized. If we look clearly, the concentration of BW is more towards the aggregate and the core, so it is much more suitable to go native on the core and the aggregate so as to conduct an Ethernet off-load. The access, where there can be considerable over-provisioning, can continue in the EoSDH flavor until it tends to choke the matrix.
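If one wanted to mechanize that judgment, a rough decision rule might look like the Python sketch below. The 60% threshold is my planning assumption, not a standard figure; tune it to your own matrix and fabric limits.

```python
# An illustrative decision rule, not a standard formula. The threshold
# is a planning assumption; adjust it to your own network.

NATIVE_THRESHOLD = 0.6  # hypothetical utilization beyond which EoSDH chokes

def backhaul_choice(layer, eth_gbps, matrix_capacity_gbps):
    utilization = eth_gbps / matrix_capacity_gbps
    verdict = ("go native (off-load the matrix)"
               if utilization >= NATIVE_THRESHOLD else "stay on EoSDH")
    return f"{layer:>9}: {utilization:.0%} utilized -> {verdict}"

print(backhaul_choice("core",      8.0, 10.0))  # BW concentrates here
print(backhaul_choice("aggregate", 6.5, 10.0))
print(backhaul_choice("access",    1.2, 10.0))  # over-provisioned, keep EoSDH
```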

By doing this, the following things happen, all of which are good for the network:

1. OFF LOAD IN THE CORE.
2. UTILIZATION OF THE SAME DEVICES.
3. NOT MUCH LOAD ON THE CAPEX.
4. SERVICE CONTINUITY.
5. PAY AS YOU GROW.

Hence a more intelligent decision would be to follow the scientific norms of transmission planning and introduce the native architecture only in the places where it is required.

This will allow gradual packetization without affecting your pocket.

Remember.....

While the company's sales guys are responsible for the TOPLINE, it is network PLANNING that contributes the most to the realization of a good BOTTOMLINE.

Till then, all of you take care, and follow Science, not Trend.

Cheers,

Kalyan

My e-mail, in case you want to mail me

Thursday, December 5, 2013

Provider Bridge and Provider Edge

"The essence of planning a packet network is to understand the flow of traffic first, not only from a level of 40,000 ft but also on ground......."

                                                 ------ After meeting various people

My Dear Friends of Transmission Fraternity, 

Packet network planning is all about knowing the flow of traffic, the entire in and out of it, while the transport is being planned. When we talk about a packet network we usually talk in terms of L2/MPLS/L3. L3 is not really a transport solution but more of a terminal solution for the traffic, so we can consider L2 and MPLS to be the technologies that help in the efficient transport of traffic.

Two basic terminologies come into the being in realizing such transport systems.

1. Provider Bridge.
2. Provider Edge.

While these are the two major technologies involved in the realization of a packet network, we first need to understand one word very clearly: "PROVIDER".

So who or what is called a provider?

A provider entity is any system that helps in the interchange and exchange of traffic, be it unicast, broadcast or multicast.
This is a simple definition of a provider. It means that any network shipping traffic from one point to another, with or without intelligence, may be referred to as a provider. A provider can be a telco, or it can be any network entity in the system that transports traffic.

This provider can act as a Bridge or as an Edge. The terms Bridge and Edge are used in the context of the way the traffic flows through the network elements and the location, or point, at which the MAC addresses are learnt.

L2 PROVIDER BRIDGE NETWORK:

An L2 Provider Bridge network is realized by connecting various L2 switches together in the network. As is known, L2 switches work on the principle of MAC learning and forward packets out of the ports on which the source MAC addresses were learnt. Every L2 switch is thus a MAC-learning entity in the system.

So if an L2 provider bridge network is selected, then at each and every point the traffic is forwarded only after the MAC address is learnt.

Let us see the following example in the picture.


The figure shows how the traffic flow happens in a provider bridge. As we can see, at every transit point the traffic is bridged. Bridging means re-forwarding traffic by means of learnt MAC addresses. So if traffic has to enter Node-1 and go out of Node-4 towards a particular destination MAC address, that destination address has to be learnt by all the NEs in the entire network so that the traffic can be bridged.

Each such flow instance is called a bridge, and since the traffic always has to pass through these bridges as it traverses the network elements, these are called Provider Bridge elements.
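For those who like to see the mechanism spelt out, here is a minimal Python sketch of the MAC learning every bridge node performs; the class and the names are illustrative, not any vendor's API.

```python
# A toy provider bridge: learn the source MAC, bridge if the destination
# is known, flood otherwise. Names here are purely illustrative.

class ProviderBridge:
    def __init__(self, name):
        self.name = name
        self.fib = {}                        # learnt MAC -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.fib[src_mac] = in_port          # learn the source MAC
        out_port = self.fib.get(dst_mac)
        if out_port is None:                 # unknown destination: flood
            return f"{self.name}: flood (destination {dst_mac} not learnt)"
        return f"{self.name}: bridge {dst_mac} out of port {out_port}"

# A frame from A to B has to be learnt at every transit node, Node-1..Node-4.
nodes = [ProviderBridge(f"Node-{i}") for i in range(1, 5)]
for node in nodes:
    print(node.receive("MAC-A", "MAC-B", in_port=1))
print(nodes[0].receive("MAC-B", "MAC-A", in_port=2))  # reply: A is known now
```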

Limitations of a Provider Bridge network:


  1. The MAC address has to be known at every point, so there cannot be a truly point-to-point service in the network that does not require MAC learning. 
  2. As and when the customers at any one endpoint increase, the vFIB capacity of all the network elements has to be upgraded (see the sketch after this list). 
  3. The whole system, and all the NEs in the network, have to be upgraded in configuration as and when the number of users increases. 
  4. When more NEs are introduced in the transit, there is a considerable addition to the latency of the traffic flow.
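A rough Python illustration of limitation 2; all the numbers are made up for the example.

```python
# In a provider bridge network, the FIB of EVERY node must hold every
# endpoint MAC it forwards towards. Hypothetical numbers below.

nes, endpoints, customers_per_endpoint = 10, 4, 500
macs_to_learn = endpoints * customers_per_endpoint
print(f"each of the {nes} NEs must hold ~{macs_to_learn} MAC entries")
# Adding customers at any one endpoint forces a FIB upgrade everywhere.
```
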
PROVIDER EDGE NETWORK:

A Provider Edge network tends to eliminate all the limitations found in the Provider Bridge network. It works on the principle of tunneling traffic from one point to another. So if a packet is to be sent from, say, Node-1 to Node-4, a tunnel is created from Node-1 to Node-4 and the packet is put onto it.

The picture below will make this clear:


As shown in the picture, this is a mechanism where the traffic is tunneled across the intermediate nodes, so the intermediate nodes do not need to know about the MAC addresses at all.

This principle makes the network more scalable and agnostic to MAC learning. The MAC learning concept is used if and only if there are multiple endpoints in a service. The intermediate points are not part of the service itself, but only points through which the traffic passes in and out.

Since the tunnel is a point-to-point entity, no MAC needs to be realized in the system, and the traffic is sent end to end.

This is done without any learning of MACs. What actually happens at a transit point is that the traffic enters with one label and goes out with another label.

That is the reason why the tunnel is also called an LSP (Label Switched Path).
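A minimal Python sketch of that label swap at a transit node follows; the label values and the table layout are illustrative, not any particular MPLS stack's behaviour.

```python
# Each transit node holds only: incoming label -> (outgoing label, port).
# No MAC table is consulted anywhere along the LSP.

LABEL_MAP = {
    "Node-2": {100: (200, "port-to-Node-3")},
    "Node-3": {200: (300, "port-to-Node-4")},
}

label = 100                                  # pushed at the ingress, Node-1
for node in ("Node-2", "Node-3"):
    label, out_port = LABEL_MAP[node][label]
    print(f"{node}: swap -> label {label}, forward via {out_port}")
# The egress, Node-4, pops the last label and hands the frame to the service.
```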

The label switched path can have its own protection, as we discussed in the MPLS section.

So what should be remembered by my friends of the Tx Fraternity:

1. Discuss and realize how you want to scale up the network. 
2. Understand and think about which design is suitable, PB or PE. It is not that PB should always be rejected; in smaller networks PE can be a major overkill. 
3. Know the service and whether it needs MAC learning or not. Unnecessary bridging can kill processes in the card. 


Every process in telecom transmission is a function of "Science" and not a result of "Trend". Remember, you can be a lamb and follow the trend like the rest of the herd, or you can be scientific and build your network so well that it actually sets the Trend. 

In the war between Science and Trend... Science has always won, and always will. 

Till then, 

Cheers, 

Kalyan