As the industry sifts through the software-defined networking (SDN) hype of the last few years, real use cases have emerged for greater business agility through software control of the networking stack. Many of these use cases were initially driven by the needs of the largest-scale public cloud data center buildouts. As CIOs look at their own private cloud deployments, many of the same public cloud networking principles apply to private cloud data center networking environments.
Cloud-based data center architectures continue to grow as consumer, enterprise mobility, and ‘Anything as a Service’ offerings evolve. This ‘new normal’ is consolidating more and more of the world’s compute and storage onto a small handful of cloud providers, creating previously unheard-of levels of scale and capacity. These environments have driven the adoption of new environmental, operational, and financial principles. At the infrastructure level, simple and massive scale drives cost efficiencies that can be passed on to end customers, but only if the infrastructure itself operates at a new level of cost efficiency. Growing the operations staff in proportion to scale does not work; instead, these environments rely heavily on automation to keep operational costs minimal.
One of the initial infrastructure focus areas for cloud optimization was the compute tier. The server scale problem was addressed with two primary components: adoption of commodity compute hardware and increased automated control of the compute software. On the hardware side, the broad availability of x86 server architectures provided a more standard, open, and cost-effective approach. Similarly, on the software side, open Linux platforms with native programmatic interfaces made deep software control practical. As a result, the massive server deployments in the cloud could be managed more efficiently through an automated framework.
These same principles and tools used for compute efficiency at scale could be applied to networking infrastructure as well, in particular to ‘top of rack’ switches that are rolled out with the server racks themselves. But the networking stack was a few years behind the server stack and would need to evolve.
Networking Had To Change
Traditionally, networking software for Ethernet switching has focused on building value through advanced feature set development. This advanced functionality was often implemented through vendor-specific solutions, with limited interoperability and limited flexibility as network architectures evolved. To be sure, this approach served many general-purpose data center networks well during the 2000-2010 decade. However, cloud network architects were designing for a new rate of growth and the level of control needed for automation, and they could not rely on black-box, closed networking architectures. It was then that cloud networking trends started to coalesce into a movement similar to what had happened with cloud-based compute infrastructure. In many cases, the industry could not provide data center switching solutions that met either the cost points or the tighter software control the cloud required. The approaches of ‘open networking’, ‘software defined networking’, ‘white box’, and in-house switches were initially conceived as ways to meet these new requirements.
Traditional networking vendors have had to reassess their approach to meet these cloud-scale needs.
“Cloud network architects were designing for a new rate of growth and the level of control needed for automation, and they could not rely on black-box, closed networking architectures”
Making Networks More Automated
The cloud architects re-applied many of the same server automation principles to data center switching. First, the generally available ‘merchant silicon’ approach improved in both functionality and cost structure. The cloud architects were able to leverage these well-understood hardware architectures to build on simpler, more standard cloud network design principles. This also helped reduce the CapEx of building out the networking infrastructure.
Second, the networking software stack needed to change from a closed, black-box approach to something closer to the open Linux device model. As with the compute infrastructure, the goal was to gain more control of the networking devices to allow deeper automation and greater efficiency of the network. This required programmability at all levels of the networking stack, from the device operating system up through network-wide management interfaces.
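To make the idea of the open Linux device model concrete, here is a minimal sketch of what switch-port automation can look like once a switch exposes standard Linux tooling. The port names, addresses, and MTU are illustrative assumptions, not taken from any specific vendor; the commands are standard iproute2. Generating the commands separately from running them keeps the sketch testable without root access.

```python
# Sketch: when a switch behaves like a Linux server, port configuration
# becomes a matter of generating and running standard iproute2 commands.
# Interface names and addresses below are hypothetical examples.

import subprocess

def interface_commands(iface: str, address: str, mtu: int = 9214):
    """Build the iproute2 commands that bring up one switch port."""
    return [
        ["ip", "link", "set", "dev", iface, "up"],
        ["ip", "link", "set", "dev", iface, "mtu", str(mtu)],
        ["ip", "addr", "add", address, "dev", iface],
    ]

def apply(commands, dry_run=True):
    """Apply commands; dry_run prints them instead of executing."""
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Automating a rack's worth of ports is a plain loop; the same
    # pattern scales out under any orchestration or config-management tool.
    for i, iface in enumerate(["swp1", "swp2", "swp3"], start=1):
        apply(interface_commands(iface, f"10.0.{i}.1/31"))
```

The point is less the specific commands than the model: plain text in, plain commands out, so any scripting or orchestration framework the operator already uses for servers works unchanged on the switch.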
With these capabilities available, cloud operators could begin moving toward the desired level of automation for their networking infrastructure.
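One common shape this automation takes is desired-state reconciliation: declare the configuration you want, read what the device actually has, and apply only the difference. The sketch below shows that pattern for port-to-VLAN assignments; the port names and VLAN numbers are hypothetical, and a real system would pull the actual state from the device rather than a literal dictionary.

```python
# Sketch of desired-state reconciliation, a common cloud automation
# pattern: diff intended configuration against observed device state
# and compute only the delta. Port/VLAN data below is hypothetical.

def reconcile(desired: dict, actual: dict):
    """Return (to_add, to_change, to_remove) port->VLAN operations."""
    to_add = {p: v for p, v in desired.items() if p not in actual}
    to_change = {p: v for p, v in desired.items()
                 if p in actual and actual[p] != v}
    to_remove = [p for p in actual if p not in desired]
    return to_add, to_change, to_remove

if __name__ == "__main__":
    desired = {"swp1": 100, "swp2": 200, "swp3": 200}
    actual = {"swp1": 100, "swp2": 150, "swp4": 300}
    add, change, remove = reconcile(desired, actual)
    print("add:", add, "change:", change, "remove:", remove)
```

Because only the delta is applied, the same reconciliation loop can run continuously across thousands of devices without a proportional growth in operations staff, which is the economic point of the cloud model.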
Private Clouds Can Benefit, Too
Today, it is not just the mega-scale cloud providers that are building these new cloud-like architectures. Even at smaller scale, SDN-forward businesses aim to gain business agility and efficiency by applying these cloud principles to data center networking in environments of all sizes.
A cloud-ready networking solution can provide real benefits even in more traditional networking environments.
An open and programmable networking stack provides the flexibility for a wide variety of efficiency improvements. Ultimately, this approach needs to be combined with a feature-rich solution that has a proven track record of reliability.
The cloud approach has already had a significant impact on data center infrastructure. As improvements from the compute side were applied to networking, the data center switch started to look and feel more like a server, gaining the efficiencies that allowed server environments to scale to the requirements of the cloud. While championed in the largest public clouds, these networking efficiencies are applicable to the broader data center environment. As CIOs look for further business agility when evaluating their own cloud strategies, they should take a closer look at these cloud networking approaches.