Thursday, October 11, 2012

Understanding Cross-talk Prevention in EoS Circuits



The article I am writing today was actually prompted by a question raised by Javin Shah. Javin had asked me on Facebook a very interesting question pertaining to the last blog-post.

“In the recommended configuration of using only one EoS across two different elements for different services, what would happen if there is a problem or looping in one of the services? Would it not loop my other service too? In such a case, why should I not go with the separate EoS trail approach?”

This is one of the major worries that makes a transmission engineer prefer separate EoS trails for separate services rather than consolidating them in one NNI. This is the point where the user is actually treating the EoS as a service and not as infrastructure.

When I say the word “infrastructure”, let us first understand what infrastructure means in terms of telecom transmission provisioning.

Infrastructure is actually an entity on which (and this preposition is very important) the actual service runs. So if the service is a VC-12 trail, the infrastructure for the VC-12 trail is the channelized VC-4, and the infrastructure for the channelized VC-4 is the optical fiber link.

Please see the figure below for a ready reference. 


As we can see in the figure, the actual service (the entity carrying the traffic) is the VC-12 service trail from one point to another. However, this service is based on the channelized STM-1 on which it rides, i.e. the terminated VC-4, and the terminated VC-4 is based on the optical fiber link, which may be STM-1/4/16 or 64.


A thing to note over here is that:

One Optical Fiber link may contain many Channelized STM-1s
One Channelized STM-1 may contain many VC-12 Services.

So the Optical fiber is the first layer of infrastructure and the Channelized STM-1 is the second layer of infrastructure.
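This layering can be sketched in a few lines of illustrative Python (not vendor code; the capacities used are the standard SDH figures of 63 VC-12s per channelized VC-4 and one VC-4 per STM-1 of line rate):

```python
# Illustrative sketch of the infrastructure layering: the fiber link is
# the first layer, the channelized STM-1s (VC-4s) are the second layer,
# and the VC-12 service trails ride on top of both.

FIBER_VC4_CAPACITY = {"STM-1": 1, "STM-4": 4, "STM-16": 16, "STM-64": 64}
VC12_PER_VC4 = 63  # standard 3 x 7 x 3 TUG structure

def vc12_capacity(fiber_rate: str) -> int:
    """Total VC-12 service trails one fiber link can host."""
    return FIBER_VC4_CAPACITY[fiber_rate] * VC12_PER_VC4

# One STM-16 fiber link (first layer) holds 16 channelized VC-4s
# (second layer), on which up to 1008 individual VC-12 services ride.
print(vc12_capacity("STM-16"))  # 1008
```

Notice that adding another VC-12 service never requires adding another fiber or another channelized STM-1 until the existing capacity is exhausted; that is the essence of infrastructure.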

Now, if a user desires to make two trails of VC-12 service, it is not required to have another set of optical fiber or another channelized STM-1. This means the infrastructure is common to the two VC-12 trails. However, the traffic of one VC-12 trail does not inter-mix with that of the other, and the service ill-effects are not carried over either. Each VC-12 trail behaves as an individual service.

That is to say, the services on the two trails are segregated and do not interfere with each other, whether positively or negatively.

The same applies to Ethernet services on EoS. However, here the definitions change slightly, as the service layer is shifted one layer up. Remember, we are doing this at Layer 2. For our Layer-2 services or VPNs, the following picture shows the mapping of the infrastructure and the service.



K-L-M IN SDH IS SAME AS VLAN IN ETHERNET (FOR OPERATIONAL PURPOSES):

In SDH/TDM services the K-L-M indicator is the service differentiator; similarly, in Ethernet services the service delimiter is the VLAN. Just as in SDH a new service trail is built for a new K-L-M, in Ethernet we have a different VPN for a different VLAN.

Hence, the VLAN forms the basis of the demarcation of the service.

HOW IS VLAN DIFFERENT FROM THE K-L-M:

Having understood the VLAN as a service delimiter, let us see how it is different from the K-L-M. The VLAN is a tag put on the data payload so that it can be identified and carried transparently through the packet network in a VPN, without any kind of interference. The link below describes the conceptual properties of the VLAN.


In the case of VLAN there are also priorities, so that we can have multiple streams in one VLAN with separate properties. Prioritization is something that happens on Ethernet networks and is absent from SDH (it will be explained later).

A K-L-M is a logical indicator in the physical layer, whereas the VLAN is an instance separation of different streams.

However, for operational purposes, just as on one fiber we can map different services to different K-L-Ms, we can do the same for Ethernet services, mapping them to different VLANs on one EoS infrastructure.



WHAT IS TO BE UNDERSTOOD FURTHER:

Just as there is no cross-talk between the K-L-Ms of two different trails on the same fiber link, two different VLANs do not share information with each other even if they are on the same EoS trail.

Hence, when one service is affected or looped, it is only that service which will face a problem and not the other, because the VPN/VLAN is different.
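As a toy illustration of this isolation (purely conceptual Python; the VLAN IDs 100 and 200 and the VPN names are arbitrary examples), note that the demultiplexing key at the far end of the EoS trail is the VLAN tag itself, so a frame of one service can never land in the other:

```python
# Toy sketch of why two VLANs on one EoS trail cannot cross-talk:
# each tagged frame is handed only to the VPN that owns its VLAN.

def demux(frames, services):
    """Deliver each (vlan, payload) frame to the service owning the VLAN."""
    delivered = {vpn: [] for vpn in services.values()}
    for vlan, payload in frames:          # one shared EoS trail
        delivered[services[vlan]].append(payload)
    return delivered

services = {100: "VPN-A", 200: "VPN-B"}
trail = [(100, "data-a1"), (200, "data-b1"), (100, "data-a2")]
out = demux(trail, services)
print(out)  # {'VPN-A': ['data-a1', 'data-a2'], 'VPN-B': ['data-b1']}
```

Even if VPN-B floods the trail with looped frames, they still carry VLAN 200 and can only ever be delivered to VPN-B's side.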

This is explained in the figure below. 


SO WHAT SHOULD MY TRANSPORT FRATERNITY REMEMBER:

1.       The service layer is to be identified for each and every service. A new service in Ethernet is not a new trail; the trail can remain the same, while the service is identified by the VPN. This is the basic reason why the end user doesn't mention a port number or K-L-M in such cases. They mention a VLAN.
2.       Data of one Service never interferes with the data of another service.
3.       If there is a malfunction/looping/broadcast in one of the segments of a service, the other service is never impacted.
4.       The user should remember that a VLAN can be reused in a different segment, just as a K-L-M can be reused.


SO MY FRIENDS, REMOVE YOUR APPREHENSIONS, GET OUT OF THE TYPICAL TRAIL-PROVISIONING COCOON AND IDENTIFY WHAT TO PROVISION WHEN…………







Tuesday, October 2, 2012

VCG, LCAS and the Pass-Through SDH concept


“There is a member failure in my VCG..... the link is going down!”

We may often come across such a problem. I am saying this taking a cue from the previous post, and also from the kind of cases the operations teams come across.

My friends from the transport group who have recently plunged into this field (EoS) should be conversant with three things here.

1.       VCG: stands for Virtual Concatenation Group. This sits only on the card or module that is responsible for the encapsulation of Ethernet into SDH and that holds the GFP or GEoS object.

2.       LCAS: Link Capacity Adjustment Scheme. This deals with dynamic payload control, ensuring that the entire link does not go down when only one member of the VCG goes down. The link below describes LCAS in detail, and it is from the ITU-T.


3.       Pass-through elements: these are elements in the intermediate section of the link that deal only with the cross-connection between two different VCs. They do not contribute to any kind of data processing; they only contribute to the channelization of the path for the data traffic.
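To make point 2 concrete, here is a toy Python sketch of the LCAS behaviour. This models only the effect, not the G.7042 protocol state machine, and the ~150 Mb/s payload per VC-4 member is an approximation for illustration:

```python
# With LCAS, a member failure shrinks the VCG's capacity; without it,
# the whole group (and hence the Ethernet link) collapses.

VC4_MBPS = 150  # approximate payload per VC-4 member

class VCG:
    def __init__(self, members: int, lcas: bool):
        self.up = members
        self.lcas = lcas

    def member_failure(self):
        if self.lcas:
            self.up -= 1   # shed one member, keep the link alive
        else:
            self.up = 0    # whole group goes down

    @property
    def bandwidth(self):
        return self.up * VC4_MBPS

g = VCG(members=4, lcas=True)
g.member_failure()
print(g.bandwidth)   # 450 -- degraded but still up

g2 = VCG(members=4, lcas=False)
g2.member_failure()
print(g2.bandwidth)  # 0 -- the dreaded "link is going down"
```

This is exactly why the complaint quoted at the top of this post points to an LCAS problem at the endpoints, not to the pass-through elements.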

A typical configuration of the link, in the unprotected domain is shown as follows.




A thing to note in this link is that the drop points of the SNCP connection are the data card EoS objects, and LCAS is also enabled at the data card at the end multiplexer. The work of the pass-through object in this kind of connectivity is limited to SDH only.

Also keep one thing in mind: the pass-through object does not contribute to data processing in any way. The VC-4s are all individual VC-4s connected by means of a cross-connect fabric which is purely TDM.

If we look at the implementation logically, then it is perceived as follows.


As we can see in this figure, Ethernet processing, LCAS and virtual concatenation are all done on the terminal data cards, and those cards are also acting as the SNCP drop points. The SNCP switchover also happens on the VC members of the VCG on the DATA card and NOT ON THE SDH card.

Now, why am I saying this? Let us look at a fault in the next figure.


As we can see, the complaint noted was that one of the VCs in the line had an AIS which was hitting one of the members of the EoS group. This was bringing the entire EoS group down (which it should not).

However, the expectation of the person looking at this fault may be as follows (shown in the next figure).


As we can see in this figure, the fault attendant was expecting the connection to switch at the pass-through level, which, for the obvious reasons mentioned above, should not be the expectation. This is because the pass-through element has neither the protection connection nor any information about the grouping.

So the expectation that the switch should take place at the pass through level is a wrong expectation in such kind of configurations.

So then how to resolve this problem?

Ø  First of all, ensure that LCAS is enabled at both locations (end-points), that both are running the same variant, and that they are compliant with each other.
Ø  Check the SNCP connection, because if the SNCP is perfect there should not be any problem in switch-over. The SNCP switch-over should happen at the terminal end, for just the one VC-4, and with LCAS enabled at both locations this should not bring the link down.
Ø  Run the SNCP variant as a non-intrusive SNCP, which will respond also to errors in the VC. Please remember that SNCP works at the VC-4 level whereas LCAS works at the VCG (multiframe) level. And since the pass-through element has neither SNCP nor virtual concatenation, this operation is always done from the endpoints.

Hence, if we see clearly then the actual resolution should be as shown in the figure below.



So what are the things to remember in such configurations?

1.     The traffic is Ethernet encapsulated in SDH, and the encapsulation and termination happen at the endpoints where the EoS object or Virtual Concatenation Group is present. The SNCP connection endpoints and the switching points are also present over there, and nowhere else.
2.     The pass-through element is just like a repeater which patches the VC-4s together. This is a point where we can actually have a J swap, that is, a change of the VC-4 number, like in any other TDM cross-connect.
3.     Member going down should be addressed at the VCG level.
4.     Always keep the LCAS in synchronized mode.
5.     If required check trace of each and every VC-4.
6.     SNCP to be kept in non – intrusive mode.


So in the next post I will share some interesting facts about optimizing the LAN network using L2 devices, and also the concept of VLAN. Till then.....Goodbye....







Saturday, September 29, 2012

EoS in the Initial Phase of Evolution (EoS is an Infrastructure and not a Service!!!)


As we evolve from traditional TDM towards a complete Ethernet back-haul, we have taken an entry-level step: evolving the network by carrying the Ethernet traffic on EoS. We also studied the cost advantages of including EoS as part of this evolution in the initial phase.

A person who has been with transport TDM technology for a long time has something to cherish here: EoS is not a technical shock, and he/she is able to absorb it gradually. However, EoS is not SDH, and that needs to be clearly remembered. They should not be carried away by the thought, “Well, this is just another variant of SDH, so let us do all the things that we have been doing so far.”

This is very overconfident and very incorrect thinking by a transport engineer. This thinking itself leads to problems in the network, and it forces your planners to think of expensive means to grow the network and, most importantly, to outgrow you. I am sure nobody wants such an imbalance in the network or in the organization, so with the change of field we need to understand the change of rules.


So here are some major tips for My Transport Fraternity

1.       EoS is not a service trail. It is meant for the carriage of a service and is not the service itself.

2.       An EoS interface is not an SDH interface. It is similar to a GigE port of a router or a switch, except that in this case the port interface on the card is logical and is cross-connected with the SDH K-L-M. This means every EoS trail that you make relinquishes a port, so you have to understand that every new service is not a new port in the EoS. A new EoS port should be relinquished when, and only when, a link is to be created to another destination that is not “physically or geographically” connected to the ring. I am posting below some pictures of the right approach and the wrong approach to realizing services in the case of EoS. We will take the following example:

     There are two customers taking a drop from the same multiplexer that has a data card. One of these customers has a service commitment of 20 Mb/s and the other a service commitment of 30 Mb/s. Point A and Point B are connected by one SDH protected link, as shown in the figure below.




Let us see the right and the wrong way of implementing them.

Let us look at the WRONG implementation first (which most Indian transport planners do).





What is wrong in this?

While to many transport engineers, planners and NOC engineers this configuration may seem the best that can be created, it is actually the most horrible configuration that one can make.

Ø  First of all, using a different EoS for each customer itself proves that the transmission is not planned and optimized properly.
Ø  This means that for every customer there would be an EoS infrastructure trail entity, and it also means that the card ports will exhaust very soon.
Ø  For the planner, this means that very soon you would constantly be needing new cards. Your management is soon going to take you to task and ask why on earth you keep adding new cards. Trust me.
Ø  More complexity in the topology, because the SDH link is the same but there are two or more EoS links (depending on the number of customers).
Ø  This configuration means that the transport engineer or planner is still looking at the implementation as a pure SDH implementation, which it is not, and not looking at it from the Ethernet perspective. He should be told that the customer is actually looking for an “ETHERNET SERVICE” and doesn't really care how many EoS trails the TX guy has made.
Ø  The implementation is not optimized and in some cases can be disastrous if RSTP is involved (this will be explained in my next blog posts).


Now let us have a look at the RIGHT implementation (which very few Indian transport planners do).






So what is so right about this configuration?

Ø  First of all, the planner has conceived the services as Ethernet services, so the segregation is done at the Ethernet level initially, by means of separate VPNs.
Ø  Secondly, the planner has used only one EoS trail of 50 Mb/s, which can be scaled up as per his/her requirements in the future. He/She is thus looking at the EoS as an infrastructure and not as a service, and is able to optimize the usage of the data card/switch.
Ø  He/She is able to look at the service as a complete Ethernet service that rides over the transport; different services over the same physical topology need not take different infrastructure.
Ø  The 20 Mb/s and the 30 Mb/s are enforced by the rate limiter in the VPN.
Ø  The planning leaves more space for further augmentation of BW if required, and for the addition of another service over the same EoS trail infrastructure.
Ø  The configuration enables bandwidth sharing and EIR up to 50 Mb/s for each service, keeping the committed SLAs intact at 20 Mb/s and 30 Mb/s respectively.
Ø  This guy/gal achieves the entire facilities of Ethernet services without loading his CAPEX, thus impressing his planning bosses and management.
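For the curious, the per-VPN rate limiter mentioned above is typically a token bucket. Below is a minimal, generic Python sketch of the idea; the rates and burst size are illustrative assumptions, not values from any particular vendor's data card:

```python
# A classic token bucket: each VPN on the shared 50 Mb/s EoS trail gets
# its own bucket, refilled at its committed rate. Frames beyond the
# bucket are dropped or marked for EIR treatment.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8       # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.clock = 0.0

    def allow(self, size: int, now: float) -> bool:
        # refill according to elapsed time, capped at the burst size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.clock) * self.rate)
        self.clock = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                   # beyond CIR: drop or mark as excess

vpn_a = TokenBucket(rate_bps=20e6, burst_bytes=15000)  # the 20 Mb/s customer
print(vpn_a.allow(1500, now=0.0))   # True -- first frame fits the burst
```

The 30 Mb/s customer simply gets a second bucket on the same trail; neither bucket knows or cares about the other's traffic.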


Thus my dear companions of Transport Fraternity, we need to remember the most important thing in the first phase of evolution.

“EoS is a boon when it is considered as infrastructure for carrying Ethernet services on the TDM back-haul; however, it becomes a major liability and a cause of headaches when it is itself considered a service.”

In very simple words: “STOP CREATING A SEPARATE EOS TRAIL INFRASTRUCTURE FOR EVERY SEPARATE ETHERNET SERVICE REQUIREMENT AND STOP BEING YOUR OWN EXECUTIONER.”

EoS has a very good advantage in your network now, while the data traffic volume is small and can be accommodated in the transport rather than on native Ethernet.

1.       EoS is the only infrastructure where you can achieve variable rates of BW on the link. Remember, you can achieve pipes of 2, 32, 48, 64, 150, 300, 600 Mb/s and various such combinations only in EoS. This is because EoS can concatenate various levels of SDH path objects: VC-12, VC-3 and VC-4.


2.       EoS is the only infrastructure where, in addition to the L2 protection schemes and the running of L3 internal protocols, you can also achieve TDM carrier-grade protections like MSP 1+1, MS-SPRing and SNCP/I and /N.
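The pipe sizes in point 1 come straight from virtual concatenation arithmetic. A back-of-envelope sketch follows; the payload figures of roughly 2.176, 48.384 and 149.76 Mb/s are the usual VC-12/VC-3/VC-4 approximations:

```python
# Virtual concatenation multiplies a base container payload by the
# number of members, giving the flexible "VC-n-Xv" pipe sizes.

PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcat_rate(container: str, members: int) -> float:
    """Approximate payload of a VC-n-Xv virtually concatenated group."""
    return PAYLOAD_MBPS[container] * members

print(vcat_rate("VC-12", 1))   # ~2 Mb/s
print(vcat_rate("VC-3", 1))    # ~48 Mb/s
print(vcat_rate("VC-12", 15))  # ~32 Mb/s  (VC-12-15v)
print(vcat_rate("VC-4", 2))    # ~300 Mb/s (VC-4-2v)
print(vcat_rate("VC-4", 4))    # ~600 Mb/s (VC-4-4v)
```

No native Ethernet PHY gives you a 32 Mb/s or 300 Mb/s wire; virtual concatenation does.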

For my companions from the Routing and “ALL – IP” Fraternity……

EoS is not an SDH implementation. It is merely a link between two devices, which may be L2, L2.5 or L3, using the SDH infrastructure. It is something like PoS; however, unlike PoS, it looks for the Ethernet header in the raw input.

EoS is as capable of carrying all the functions of the “ALL-IP” transport (and please note this word, TRANSPORT) as any native Ethernet carrier.

Actually, in some cases, in the initial phase of deployment, if used judiciously as mentioned above, it is much better, less costly and more efficient than the native variant of Ethernet.


In the next blog post we will see about how to optimize the physical interface also taking some help from the routing fraternity and thus reducing your cost on transport. 

"More you be with the science in the judicious manner, less you will spend on unnecessary events in your network."






Tuesday, September 18, 2012

Deploying Ethernet…..Technology VS Cost…..AND GFP




In the last blog I talked about the technical requirement of deploying Ethernet. Let us face it: if your access devices like BTS, Node B and eNode B come with an Ethernet interface, then there is no option but to have Ethernet handoff at the access. Needless to say, if the access is Ethernet then the aggregation handoff, or the handoff at the BSC or RNC, would obviously also be Ethernet.

Hemant asked a very pertinent question: how costly would it be to replace a network with Ethernet? To this there is a very cost-effective answer. In the service-provider domain there are two kinds of traffic at present: 70% is 2G or TDM-interface traffic, while 30% is Ethernet traffic. So how cost-effective would it be to transform the network into a full-fledged Ethernet back-haul?

Let us understand one thing: we are talking of only one carrier here, and this carrier is either TDM SDH or Ethernet. The 2G traffic is mostly static and is not bound to increase. This traffic is also hardcoded and follows a strict pattern. It is the Ethernet traffic which is actually growing and which can be optimized. So the call on whether to completely migrate to an Ethernet back-haul will be, and should be, taken on the volume of each nature of traffic.

First of all, let us understand that the bulk of the 2G traffic cannot be shifted overnight to an Ethernet back-haul using CES (Circuit Emulation Services). It is like destroying your house to make a cupboard look beautiful. Hence, prima facie, there needs to be a technology that addresses the problem of carrying Ethernet in the same carrier network, with the same boxes that are already placed in the network.

Enter the need for encapsulation, so that Ethernet can ride over SDH. However, Ethernet is an asynchronous protocol and SDH is a synchronous protocol. So how do we actually make Ethernet ride over SDH? For this very purpose there is a protocol called GFP (Generic Framing Procedure), defined in ITU-T G.7041. Check the link below.



This GFP protocol is the key encapsulation for providing a proper transport mechanism for bursty Ethernet traffic over the synchronous SDH frame.
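To give a feel for what GFP adds, here is a toy Python sketch of the GFP core header. It is simplified for illustration (a real G.7041 frame also carries a payload header with its own HEC, scrambling, etc.); the CRC-16 generator x^16 + x^12 + x^5 + 1 is the one G.7041 uses for the cHEC:

```python
# GFP core header: a 2-byte payload length indicator (PLI) protected by
# a 2-byte CRC (cHEC). The cHEC is what lets the receiver delineate
# frames in a raw byte stream -- no flags or bit stuffing as in HDLC/PoS.

def crc16(data: bytes, poly: int = 0x1021) -> int:
    """Bitwise CRC-16 with generator x^16 + x^12 + x^5 + 1, init 0."""
    reg = 0
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly if reg & 0x8000 else reg << 1) & 0xFFFF
    return reg

def gfp_core_header(payload: bytes) -> bytes:
    pli = len(payload).to_bytes(2, "big")
    chec = crc16(pli).to_bytes(2, "big")
    return pli + chec

eth_frame = b"\x00" * 64                # a minimal Ethernet frame
header = gfp_core_header(eth_frame)
print(len(header))                      # 4-byte core header
print(len(header + eth_frame))          # 68 bytes handed to the SDH path
```

The variable-length, bursty Ethernet frame thus becomes a self-delineating unit that the fixed-rate SDH container can carry; idle periods are filled with GFP idle frames.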


Thus the ideal network would somewhat look like this.




 In this network, as we can see, the essential back-bone is SDH, and both the Ethernet and the TDM home onto the same back-bone.

What does this do for us then?

With the coming of GFP there is now an option to take in the Ethernet input, encapsulate it, and send it over the existing SDH network. This efficiently reduces the entry cost of Ethernet for a service provider whose network is primarily transport, and provides a very good transition path for the network to be converted to pure Ethernet.

Network evolution, as pointed out correctly, is also a function of cost and RoI. If the present revenue is mostly from the TDM services and the revenue per user for the Ethernet services is lower, then it really makes no sense to set up a parallel overlay for the Ethernet, investing a lot of resources and locking up the CAPEX.


This gives a very good entry-level proposition for the service provider to enter the Ethernet service foray and to start Ethernet services efficiently:

1.       Mobile back-hauls on Ethernet.
2.       Retail.
3.       Enterprise.
4.       MSO.
5.       VPN.
6.       ILL.

And many other services.


What does my transport fraternity need to remember?

Ø  Though the network is SDH the essential services are Ethernet and they follow all the rules of Ethernet switching in a WAN environment.
Ø  Each and every data card is a different logical entity in the entire setup. Many people have the bad (VERY BAD) habit of saying “MY ROUTER IS CONNECTED TO THE MUX”. This is a non-technical statement, as routers are never connected to a MUX. The router is always connected to a switching network forming an NBMA (Non-Broadcast Multiple Access) network; only in this case the network infrastructure is SDH.
Ø  Do not treat the data network the way you treat SDH. This is not a video game, but an infrastructure that provides gaming services. In SDH each service is a new trail; however, in data on EoS, each EoS trail is not a service but a server which can contain many services.


I will tell you more about how to efficiently optimize and manage the EoS network along the lines of my last statements in the next posts.

Till then, PLAN with SCIENCE, PROVISION with CARE and  MAINTAIN with LOGIC……….





Sunday, September 16, 2012

Why the need of Ethernet in Back-Haul???


Ethernet, what on earth is it doing in a back-haul environment?


Many of my transport friends feel, “Why do we, after all, need Ethernet in the transport back-haul environment? Why on earth is everything changing to Ethernet? We were well off with the traditional TDM and were well versed with it. All we needed to do was provision some trails, create some cross-connects, and then, boom, the switch guy used to do things.”

The transmission planning used to plan the fiber routes and the trails, and the NOC used to provision the same. In some cases of troubleshooting, well, just give a loop to check. Loop-break.... loop-break.... loop-break and find the problem. So why today do we have to have this complicated thing called Ethernet?

Well, my dear readers, every child has to grow, and so will your network. As the customers keep on increasing, their demands also increase. As they see more things happening outside our country, they want the same things here. This ushers in a requirement for heavy BW and, of course, a very clever way to engineer it.

TDM classically works on a point-to-point model without any BW sharing. That is to say, if you have 10 sites in a ring, each having a drop of 10 Mb/s, then when some site is not using its BW the others cannot momentarily use that BW. Which means there is no sharing of BW: the access is hard-coded to 10 Mb/s, and even though there is a need for BW at some other place while one site is not using it, it cannot be provided.

Please look at the figure below for complete understanding.


As we can see in this figure, there are 5 locations in the STM-1 ring, parented to the aggregate location. All these locations are mux (multiplexer) locations. For each location, using the traditional TDM deployment, there is a mapping of 5×VC-12. So what is the limiting factor?

1.       Suppose Site 1 has a heavy requirement, say around 14 Mb/s, at some instant, while Site 3 is using only 4 Mb/s. Then the spare 6 Mb/s from Site 3 cannot be dynamically allocated to Site 1 as a timely lease; this can only be done by manual provisioning.
2.       As the committed BW increases across the requirement, the number of physical interfaces also increases to a great extent. Hence, if tomorrow the need is 22 Mb/s per site, it has to be clubbed from 11 E1s, and that too not in a shareable mode. Scalability is actually limited, because of this interface limitation.
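The arithmetic behind point 2 can be sketched in two lines, taking the standard E1 rate of 2.048 Mb/s:

```python
# Each hard-coded TDM drop of N Mb/s needs ceil(N / 2.048) E1s,
# every one of them a separate, non-shareable physical interface.
import math

E1_MBPS = 2.048

def e1s_needed(mbps: float) -> int:
    return math.ceil(mbps / E1_MBPS)

print(e1s_needed(10))  # 5  -- today's 10 Mb/s drop (the 5 x VC-12 mapping)
print(e1s_needed(22))  # 11 -- tomorrow's 22 Mb/s drop
```

The interface count grows linearly with the committed BW, and none of that capacity can be pooled between sites.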

Due to these two major factors there was a need for Ethernet in the network. There needed to be a device at these first-mile wireless locations that is able to handle huge BW, and that, my friends, is at present only possible with an Ethernet device. Hence the access interface had to be an Ethernet interface, which can scale up to 10 Gb/s base-band. This satisfies the second of the two requirements above.

What about BW sharing then? How does Ethernet help in doing this?

Well, the basis of data services is the fact that not everyone is using the same BW at the same time. So at any point of time the entire capacity of the ring can be floated across. In Ethernet there is no hard coding of BW provisioning like there is in TDM. What we call a trail in TDM actually boils down to a service in Ethernet. However, this service has a special characteristic: it can have dynamic BW sharing. That is to say, the entire BW of the ring is actually available to all.

Typical Ethernet configuration is shown in the figure below.


In this figure, as you can see, there is no hard-coded VC-12 mapping for the traffic. There is definitely a service entity, but it is not a trail. This is called an Ethernet service, and it has two parameters here:

CIR: Committed Information Rate, meaning the amount of BW that the site is always honored to get, irrespective of whatever happens.

EIR: Excess Information Rate, meaning the amount of extra BW that the site can achieve, provided the resources in the network are not being utilized.

This means that if we make services like this for all 5 sites, each of them may get a 100 Mb/s peak. However, in case all the sites are using BW at the same time, at least 10 Mb/s is guaranteed to each. So there is no need for frequent BW grooming.
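The CIR/EIR behaviour can be sketched in a few lines of Python. This is only the steady-state arithmetic, under the assumption of a 100 Mb/s shared ring and a 10 Mb/s CIR per site; real schedulers enforce this per packet, and the leftover is handed out greedily in this toy version:

```python
# 5 sites share the ring: CIR is honoured first, and whatever capacity
# is left over is given out as EIR to sites that want to burst.

RING = 100  # Mb/s shared ring capacity (assumed for illustration)

def serve(demands: list, cir: int = 10) -> list:
    granted = [min(d, cir) for d in demands]   # CIR honoured first
    spare = RING - sum(granted)
    for i, d in enumerate(demands):            # spare handed out as EIR
        extra = min(d - granted[i], spare)
        granted[i] += extra
        spare -= extra
    return granted

# One busy site, four idle: the busy site may burst to the full ring.
print(serve([100, 0, 0, 0, 0]))        # [100, 0, 0, 0, 0]
# All five demanding at once: every site still gets its 10 Mb/s CIR.
print(serve([100, 20, 20, 20, 20]))    # [60, 10, 10, 10, 10]
```

Contrast this with the TDM case above, where the idle sites' capacity would simply sit unused.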

In high-end data requirements one thing is to be taken as a postulate, and that is: “Not everyone is going to peak at the same time.”

This basic assumption forms the basis of deploying Ethernet in back-hauls that are more data-rich: 3G, LTE, HSIA, EvDO, retail broadband, etc.


The main thing to remember:

The main thing to remember about classical TDM is that one deals with multiplexers. The work of a multiplexer is to add and drop traffic from lower tributaries to higher rates in a framed manner. Hence, TDM transport provisioning is not essentially concerned with the utilization of the traffic. So in a classical TDM environment, the parameters most essential to the transmission guys concern the health of the path with respect to errors and faults.

However, with the onset of data services and Ethernet deployment, the multiplexing environment shifts to a switching environment. The transport guy is also responsible for the switching of traffic, which was previously the responsibility of the switch guy. This is because the elements in the transport are no longer multiplexers; they are actually switches, which measure, meter and also show the transmission guys the utilization of the BW.

This helps the transmission guy, especially the planner, to monitor the traffic and provision as per requirement, thus judiciously saving resources while delivering high-BW services at the same time.

This is the one important thing for a TDM transmission guy to understand while he and his company move towards an Ethernet back-haul.

So friends, do not be paranoid about this change.....Just accept it as a new technology......





Why was this blog created?



It was in the year 2002 that I entered the world of telecom transmission. Ever since that day there has been a sense of evolution every day. Every day I used to wake up and think about what more I had to learn that day, and the next day, and the next. The more I learnt, the more I realized how ignorant I was regarding this oceanic domain of telecom transmission. Today, after 10 years in this field, having worked with various technologies (SDH/Carrier Ethernet/Layer-2/Layer-3), I constantly feel that there is still so much heavy-duty stuff to be learnt. So, my dear readers, this blog is definitely not a lecture class, because I, being ignorant, cannot lecture you about the nuances of technology.



So what is this blog about? Why the hell do I get up one day and start thinking about telling the world (my telecom fraternity in particular) about the nuances of networking and transmission? The point is, I have realized that despite learning so much, it becomes difficult for us, and for me, to retain things. I have heard my elders say that knowledge is the only wealth that multiplies by sharing. So, my dear readers, today, with the blessings of my parents and loved ones, and entirely in the interest of multiplying my knowledge wealth, I am sharing my experiences with you on this blog site.

This blog is not a guideline; it is a derivation of experiences. Experiences that are good and that are bad. Experiences that are mind-blowing, with complex problems having been solved with simple solutions.

Moreover, this blog is a dedication to my Transmission Fraternity, who today are somewhat paranoid due to the transition of transmission technologies. It is my share of the effort to try and evolve everybody, including me, towards the new-age transmission that is based on Ethernet/QoS/Layer-2/MPLS.

Some rules for this blog.

1.       Please do not be rude or derogatory to any technology:  Please understand that a technology is developed after several years of research and technology is for men and men are not for technology.
2.       Please do not make personal comments:  We are engineers. And today on Engineers’ day I would want every engineer to have mutual respect for each other.
3.       Please do not share proprietary information: We all belong to a domain that is highly technical and highly vulnerable and bounded by patents and copyrights. I, for myself, would not want any proprietary comments on this blog-site that will land any one of us in problems. Discuss technology with a free mind.
4.       Respect your predecessors: Remember, a son cannot be older than his father so maintain decorum for seniors.
5.       Do not ignore any juniors: Revolution can start from any phase, so please do not ignore it.
6.       Be logical: Our entire business is based on logic so it is a primary requirement.

So, my dear readers, I will post some interesting facts in my next blogs. I had done this on Orkut (when it existed) on a page called SDH and Optical Networking. However, that was just questions and answers. This is going to be much more than that.