Ethernet Crosses 40 Years and Continues to Bloom

DQI Bureau

The Computer History Museum and the Metro Ethernet Forum (MEF) recently came together in Mountain View, California, to celebrate the fortieth anniversary of the invention of Ethernet. One of the key pillars of Ethernet's success was the ability, in an era of massive centralized computing, to predict that distributed computing would be the future. The inventors believed in a simple yet powerful philosophy, 'Build and they will come', referring to the devices that were yet to arrive.


It was on May 22, 1973, while the US was engrossed in the unfolding Watergate scandal, that Robert 'Bob' Metcalfe shared a memo with a group of enthusiasts at the Palo Alto Research Center (PARC), embarking on a journey to bring about an innovation that would change the networking space, and probably the face of computing, forever.

At Xerox PARC, Bob Metcalfe, along with Dave Boggs, David Redell, John Shoch and Tat Lam, formed the initial team behind this innovation. At a time when only a lucky few could get access to the 'Silent 700' terminal on a time-sharing basis, this team set out to design 2.94 Mb/sec 'high-speed connectivity'. Ethernet was invented in 1973, after the ARPANET was designed in 1969. A few iterations and forty years later, Ethernet plans to deliver 400 Gb/sec speeds.

Ethernet is born:

It makes for interesting reading to find out what was so special about Ethernet and how it was conceived. According to Bob, the ALOHA network of the time could transmit data packets but offered no assurance of delivery. The challenge the innovators faced was to connect hundreds of PCs over a kilometer-long LAN in pursuit of cheap distributed computing; to design applications for MAXC terminals, Telnet, packet access, email, FTP and the laser bitmap printer; and to tame the tangle of wires more popularly known as the 'Rat's Nest'.


The team led by Bob applied 'Manchester encoding' to ensure that the 'clocking' resided in the data signal itself. The new network was also a quantum leap over the existing ALOHA network in that it could actually 'listen' to the wire and avoid collisions of data packets, a feature many referred to as the 'carrier sense' of Ethernet.
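The idea of clocking residing in the data can be sketched in a few lines. The following toy encoder/decoder follows the IEEE 802.3 Manchester convention (a '1' is a low-to-high half-bit pair, a '0' is high-to-low), so every bit carries a mid-bit transition the receiver can recover the clock from. The function names are illustrative, not from any library.

```python
def manchester_encode(bits):
    """Map each bit to a half-bit pair: 1 -> low-high, 0 -> high-low (802.3 convention)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(symbols):
    """Recover bits from half-bit pairs; every valid pair contains a transition."""
    bits = []
    for i in range(0, len(symbols), 2):
        pair = (symbols[i], symbols[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            # (0,0) or (1,1) means no mid-bit transition: idle or corrupted line
            raise ValueError("no mid-bit transition in symbol pair")
    return bits
```

Because an idle wire produces no transitions at all, a station can also use the absence of transitions as a crude form of carrier sensing.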


The evolution of Ethernet was not free from friction, though. Ten years of persistent threat from the IBM-backed Token Ring technology did raise concerns among the team. But then two events turned the tide. The IBM PC arrived on the scene in August 1981, and Ethernet was perfectly suited to it, having been gross overkill for the existing Apple II machine. And as the world started to accept twisted-pair cabling, Ethernet was able to move to the twisted-pair form factor. Token Ring did not make that transition and paid dearly.

Bob's vision of Ethernet:

Bob mapped the multi-directional development of Ethernet: up in speed through successive IEEE standards, out into the WAN, over the air through Wi-Fi, across the 'telechasm' as Carrier Ethernet, and down into silicon, embedded in chips.

Bob also predicted that new areas of disruption would include the healthcare delivery, energy and learning verticals, with explosive data traffic expected in the video, mobile and embedded technology spaces. Massive Open Online Courses (MOOCs), distance healthcare, personal monitoring and perhaps self-driving cars were identified as likely areas of disruption in the near future.


Ethernet in Enterprise Networks:

As per industry estimates, the 2013 worldwide market for enterprise networking equipment is around US$ 42.4 billion. Ethernet (Layer 2-3) switches contribute 47%, or almost US$ 19 billion, and adjacent technologies like WLAN ride on Ethernet as well. In 2013, end-user spend is likely to touch US$ 25 billion.

In terms of speed, 1 Gb/sec remains the top biller, with an estimated US$ 12 billion market in 2013 gradually tapering off to US$ 9 billion by 2017. However, 10 Gb/sec (just above US$ 8 billion in 2013) will overtake 1 Gb/sec in 2015 to reach US$ 12 billion by 2017, and the nascent 40 Gb/sec segment is expected to show strong traction, growing from an estimated US$ 1 billion in 2013 to US$ 5 billion in 2017.

The Ethernet datacenter market (US$ 9 billion) is still significantly smaller than the non-datacenter market, consisting largely of campus Ethernet (US$ 15 billion), though the datacenter piece is growing on a steeper curve. Diverse workloads have evolved to include increased internet usage, virtualized corporate applications, cloud services, backup of data from multiple sites, video conferencing, VoIP, desktop security solutions, video surveillance, interactive networks of sensors, and more.


On the question of BYOD, the panelists invited by the Computer History Museum and MEF agreed that the management plane needs to evolve into a robust platform or interface able to tackle the problem with a simple set of policy guidelines, though it is yet to be decided which vendors will provide this management layer. The BYOD challenge was viewed not only as a mesh of varied devices, ranging from desktop PCs to notebooks, tablets, smartphones and VDIs, but also as that multiplicity combined with throughput rising several times over. The architecture has to vary to serve this ecosystem effectively; the goal is to deliver unified access and a uniform, consistent user experience across different devices.

Ethernet in the datacenter environment threw up some interesting observations. The panelists observed that while traditional datacenter architecture was sufficient for older, predictable workloads such as Exchange and SAP, today's workloads (web-scale memcached, Hadoop, storage applications, server virtualization and the provisioning of VMs, or virtual machines) are extremely unpredictable and call for a very flexible datacenter architecture based on distributed networking. Another observation was that datacenter administration costs are high and rising because administration is people-driven; automation can reduce those costs, cutting a two-week lead time for configuration to less than ten minutes. In this war of bandwidth density and scale, the management layer can play a crucial role in ease of use and easy deployment of networks at large scale.

The next point of discussion hovered around network virtualization. While virtualization of servers, storage, desktop PCs and apps brought huge cost savings, it also recreated the 'Rat's Nest' problem: additional network connectivity and a multitude of connections to servers. Network virtualization would mean dissolving network technology into compute, creating a single big pipe, which translates into no rewiring, just reconfiguring. It will lead to 'network as a service', provided from the datacenter right through to the client, and the boundaries between servers, storage, software and services are likely to blur. The future could well be about open standards with choice and flexibility. It was also observed that the whole ecosystem needs to gear up to deliver 40 Gb/sec, a challenge not limited to silicon, system vendors or infrastructure but extending to optics, NICs, cables, test equipment and more. Ethernet has thus evolved as a brand that has demonstrated resilience over the years because of its adaptability and adherence to open standards.


Ethernet in the Service Provider Industry:

While vicious wars were fought in the past, what emerged from Ethernet was a 'heart of gold': the frame format, which still exists today. That different stakeholders were willing to give up their individual attachments to the physical layer helped immensely in enabling high-speed data communication over networks. Today, telecom carriers are in the business of selling Ethernet as a service. A change in protocol can impact efficiency by as much as 50%, and the stable frame format has helped avoid the losses associated with such changes. Ethernet is integral to business today: it is cost-effective, helps introduce scale, and gives the service provider higher bandwidth at a lower cost per bit on a reliable ecosystem.
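That enduring frame format is simple enough to parse in a few lines: an Ethernet II (DIX) frame starts with a 6-byte destination MAC, a 6-byte source MAC, and a 2-byte EtherType identifying the payload protocol. The following sketch shows just that header layout; the function name and sample frame bytes are my own, not from any standard library.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split an Ethernet II frame into its header fields and payload."""
    if len(frame) < 14:
        raise ValueError("frame shorter than the 14-byte Ethernet header")
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])  # big-endian 16-bit
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src),
            "ethertype": ethertype, "payload": frame[14:]}

# A hypothetical IPv4 frame (EtherType 0x0800) to a broadcast address
hdr = parse_ethernet_header(
    bytes.fromhex("ffffffffffff")        # destination: broadcast
    + bytes.fromhex("00163e112233")      # source MAC (made up)
    + b"\x08\x00"                        # EtherType: IPv4
    + b"payload bytes")
```

Because the EtherType field lets any payload protocol ride inside the same frame, the format has survived every change of physical layer underneath it.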


Ethernet 2.0 offers multiple classes of service, manageability and performance-assured services. It also offers a standard set of protocols for carriers to interconnect with each other. From a customer perspective, at this point only a select few important applications are put on Ethernet.

Many believe that apart from providing high speed (up to 10 Gb/sec), Ethernet also helps provide differentiated levels of service, which allow providers to serve changing bandwidth demand dynamically. All these factors are contributing to increased adoption of native Ethernet. Basic legacy infrastructure, however, has created a roadblock to Ethernet adoption in countries like India and South Africa. India has 40 million copper-line connections, which have proved to be a huge infrastructure bottleneck, whereas the existing copper lines were enough to provide 'first mile' connectivity in Britain and France. Many operators are embracing LTE, which actually offers additional Ethernet features. Incidentally, MEF, led by Nan Chen, plays a very important role in certifying standards of services offered by providers, and the recent launch of its CloudEthernet Forum is another example of close partnership with the industry in solving these problems as a whole.

With regard to SDN (software-defined networking), it is believed that SDN provides a native control plane, which has hitherto been missing in Ethernet. It will provide a single unified control plane from the server back-end all the way through. SDN is likely to flatten networks, impacting service providers in the process.


Future of Ethernet:

Network virtualization is a potential area where the network is still deprived of the immense benefits that virtualization brings to the table. By separating network services from the physical layer, virtualization allows networks to be created and managed much like VMs (virtual machines). In such a scenario, network provisioning becomes automated, bringing more flexibility and a more mobile infrastructure to the cloud.
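One concrete way this separation from the physical layer is achieved is overlay encapsulation such as VXLAN (RFC 7348), where the original Ethernet frame is wrapped in an 8-byte header carrying a 24-bit Virtual Network Identifier (VNI) and carried over UDP, so thousands of isolated virtual networks can share one physical fabric. A minimal sketch of just the header, with a made-up VNI value:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348 for a given VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    flags = 0x08  # 'I' flag set: the VNI field is valid
    # Word 1: flags (8 bits) + 24 reserved bits; word 2: VNI (24 bits) + 8 reserved bits
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(5001)  # 5001 is an arbitrary example tenant network ID
```

Provisioning a new virtual network then amounts to choosing a fresh VNI in software, with no rewiring of the physical network underneath.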


The next movement in the networking equipment space could well be towards content-centric networking technologies: instead of today's machine-address-specific forwarding, distribution could be based on content.

This possibility completely changes the potential of a hardware-defined network. All of a sudden, an independent software market can emerge in the networking equipment segment, and consequently the 'center of value' will shift from hardware to software.

In this scenario, the vertical stack of offerings might disintegrate into separate layers of hardware, control plane and applications, resulting in increased visibility. The power of OpenFlow was underlined in this scenario, which would require APIs to become content-rich.
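The separation of control plane and hardware that OpenFlow popularized can be illustrated with a toy match-action flow table: the control plane installs prioritized rules, and the data plane merely looks them up, escalating unmatched packets back to the controller. The names and structure below are illustrative only, not the OpenFlow wire protocol.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict      # fields a packet must carry, e.g. {"dst_port": 80}
    action: str      # e.g. "forward:2" or "drop"
    priority: int = 0

@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule):
        """Control-plane side: add a rule, highest priority first."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet: dict) -> str:
        """Data-plane side: return the action of the first matching rule."""
        for r in self.rules:
            if all(packet.get(k) == v for k, v in r.match.items()):
                return r.action
        return "send_to_controller"  # table miss escalates to the controller

table = FlowTable()
table.install(FlowRule({"dst_port": 80}, "forward:2", priority=10))
table.install(FlowRule({}, "drop", priority=1))  # wildcard default rule
```

Swapping the application logic above the table, without touching the forwarding hardware below it, is exactly the layer separation the panel described.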

‘Silicon photonics' is also likely to gain importance in bringing the optical interface closer to the chip, thus leading to greater power savings.


The Great SDN story:

The role of SDN could well be similar to that played by Google's Android OS in the mobile phone market; branded solution providers can take a leaf out of the mobile phone story, where players embraced an open-source solution like Android. SDN is also capable of providing multiple control planes to Ethernet and is believed to be central to convergence. Going forward, there will be a period of transition before SDN finally takes off. The use case for SDN can be mapped to multi-tenancy in a network virtualization environment, from layers 4 to 7, in a secured wired or wireless environment, so that over the long term security is delivered at an economical cost. SDN can be visualized as an enabler that aligns the network with business priorities ranging from security to compliance to traffic engineering to virtualization.


Factors like technologies combining the 'Internet of Things' with real-world physical networks to transmit data from biometric devices or implants, the convergence of communication technologies, and the 'consumerization of IT' all point to the future challenge of transmitting the avalanche of user data generated every day. Ethernet is now a brand symbolizing open standards: constantly evolving to remain relevant and interoperable, increasingly omnipresent, and a potential US$ 100 billion industry. Ethernet will continue to be affordable, encouraging adoption en masse.