7 Trends of the Data Center Network

No, this is NOT a prediction, nor will these trends happen overnight. However, I see data centers moving toward a new architecture, which I believe will prevail over the next couple of years.

Trend 1: Multi-path is in, Tree is out

The data center network is migrating from the traditional tree structure to a new multi-path architecture. The shift is driven by two demands. One is the rapid expansion of data center scale, which no single core switch can keep up with. The other is data center virtualization, which requires dynamic resource assignment; the blocking scheme of the traditional tree structure simply cannot cope with the fluid operation of data centers.
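To make the contrast concrete, here is a minimal Python sketch of the bandwidth argument. The port speeds and counts are illustrative assumptions, not figures from any particular product:

    # Cross-rack bandwidth: tree vs. two-layer multi-path fabric.
    # All port speeds and counts below are illustrative assumptions.

    def tree_cross_bandwidth(uplinks_per_rack=1, uplink_gbps=10.0):
        """In a tree, each rack reaches other racks only through its
        uplink(s) toward a single core switch."""
        return uplinks_per_rack * uplink_gbps

    def fabric_cross_bandwidth(spines=8, link_gbps=10.0):
        """In a multi-path fabric, a rack has one uplink to every spine,
        so cross-rack bandwidth grows with the number of spines."""
        return spines * link_gbps

    print(tree_cross_bandwidth())    # 10.0 Gbps per rack, through the core
    print(fabric_cross_bandwidth())  # 80.0 Gbps per rack, across 8 spines

Growing the tree means replacing the core with a bigger box; growing the fabric just means adding another spine switch.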

Trend 2: TOR switch is in, Big chassis switch is out

Given that the data center is moving toward a multi-path architecture, it makes much more sense to use TOR (Top of Rack) switches instead of big switches. Big switches used to be a hard requirement in a tree architecture, but they are no longer a good investment in a multi-path environment. Most big chassis switches are expensive, proprietary, often blocking, and a single point of failure. They can be replaced by a mesh of feature-rich, high-performance TOR switches.

Trend 3: Two-layered hierarchy is in, three-layered is out

In a tree architecture, most data centers have followed Cisco’s design guidelines and segregated the network into three layers: core, aggregation, and access. Each layer is designed with a different blocking ratio and carries a different function.

In a meshed data center, the network is no longer limited by blocking performance. A two-layered hierarchy gives much more flexibility and enables an add-bandwidth-as-you-go model.
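As a rough illustration of that flexibility, the Python sketch below computes the oversubscription ratio of one edge switch as uplinks (one per spine) are added. The port counts (48 x 10GE down, 40GE up) are hypothetical:

    # "Add bandwidth as you go": adding uplinks, one per spine switch,
    # lowers the oversubscription ratio of an edge switch.
    # Port counts are hypothetical, for illustration only.

    def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
        """Server-facing capacity divided by fabric-facing capacity."""
        return (server_ports * server_gbps) / (uplinks * uplink_gbps)

    for spines in (2, 4, 6):
        ratio = oversubscription(48, 10, uplinks=spines, uplink_gbps=40)
        print(f"{spines} uplinks -> {ratio:.1f}:1 oversubscription")
    # 2 uplinks -> 6.0:1, 4 uplinks -> 3.0:1, 6 uplinks -> 2.0:1

In a three-layered tree, the same upgrade would mean re-engineering the blocking ratio at every layer.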

Trend 4: Merchant Silicon is in, Proprietary Silicon is out

As data centers migrate from big switches to TOR switches, merchant silicon will become the mainstream. There will be much less incentive for data centers to pay the high price tag of switches built with proprietary silicon; off-the-shelf merchant silicon has already outperformed proprietary silicon. The prevalence of TOR switches in data centers will drive proprietary silicon into a niche play.

Professor Amin Vahdat of UCSD foresaw this as early as 2009 and has published several papers on the trend.

Trend 5: Open is in, closed is out

The data center network infrastructure badly needs innovation, which will demand open hardware and software platforms. While Cisco will continue to use network architecture to lock in customers, the demand for innovation will foster a TOR market of open software and hardware.

Trend 6: Scale in IP or Ethernet, but not hybrid

In a legacy tree structure, data centers used to partition IP at the aggregation layer and scale Ethernet at the access layer. This needs to change in a two-layered design in order to preserve the scalability of a multi-path architecture.

Data centers can choose either multi-path Ethernet, such as TRILL, or multi-path IP, such as ECMP, to scale the network. Ethernet is easy to manage, plug-and-play, and supports virtual machine migration (such as vMotion). IP, on the other hand, gives better domain protection, clearer segregation, and better control of traffic. Depending on their needs, data centers can choose different protocols to scale. However, while we will still see both IP and Ethernet used in a data center, they will not be as tightly entangled as they used to be.
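As a rough sketch of how multi-path IP spreads load, here is an illustrative ECMP-style hash in Python. Real switches hash in hardware (typically a CRC, not MD5), and the addresses and names below are made up:

    # ECMP in miniature: hash a flow's 5-tuple, pick one of N equal-cost
    # next hops. The same flow always maps to the same path, so packets
    # within a TCP flow stay in order. MD5 stands in for a hardware hash.
    import hashlib

    def ecmp_next_hop(src_ip, dst_ip, proto, sport, dport, next_hops):
        key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
        digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
        return next_hops[digest % len(next_hops)]

    spines = ["spine1", "spine2", "spine3", "spine4"]
    print(ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 49152, 80, spines))
    print(ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 49152, 80, spines))  # same flow, same path
    print(ecmp_next_hop("10.0.1.5", "10.0.9.7", "tcp", 49153, 80, spines))  # new flow, may differ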

Trend 7: Central logical topology management is in, Distributed is out

For many years, the network has been designed to cope with changes in physical connectivity. Each network switch runs heavy protocols to probe and discover the topology in real time. This is unnecessary in a data center environment, where all the cabling is fixed and seldom changes.

The new data center badly needs a network that keeps up with the dynamic configuration of logical connections, which is beyond the scope of existing protocol sets. At the same time, the scale of new data centers makes it rather difficult for individual switches to discover and maintain the logical topology.

With these new requirements, the demand for central logical topology management is increasing. We have seen strong interest in OpenFlow technology for exactly this reason, and I will not be surprised to see other products targeting the logical connection problem.
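To show what central topology management looks like in miniature, here is a hedged Python sketch in the spirit of OpenFlow: one controller holds the whole fabric map and computes paths, so individual switches do not run their own discovery protocols. The Controller class and the link names are hypothetical, for illustration only:

    # A toy central controller: it owns the topology map and computes
    # paths; switches would only receive the resulting forwarding entries.
    from collections import deque

    class Controller:
        def __init__(self, links):
            self.adj = {}
            for a, b in links:
                self.adj.setdefault(a, []).append(b)
                self.adj.setdefault(b, []).append(a)

        def shortest_path(self, src, dst):
            """BFS over the centrally held map; no per-switch discovery."""
            prev, queue = {src: None}, deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return path[::-1]
                for nxt in self.adj.get(node, []):
                    if nxt not in prev:
                        prev[nxt] = node
                        queue.append(nxt)

    ctrl = Controller([("tor1", "spine1"), ("tor1", "spine2"),
                       ("spine1", "tor2"), ("spine2", "tor2")])
    print(ctrl.shortest_path("tor1", "tor2"))  # e.g. ['tor1', 'spine1', 'tor2']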

What is NOT a trend?

While it is always interesting to discuss the trends, it is just as interesting to discuss what is not a trend. This might be just my bias; if so, I would love to be corrected.

  • FCoE – I just don’t see FCoE happening in the data center. Those who use Fibre Channel feel comfortable running two separate networks, and going FCoE does not save them much equipment cost, since they still need special-purpose CNAs on servers, which account for the majority of the cost. As far as we can see, FC will keep its niche market, even though we might see generic Ethernet-based storage start taking off after 10GE.
  • 10GbT replacing SFP+ – while we believe 10GbT will eventually happen, it is not clear whether it will kill fiber. It is widely believed that 10GbT will be the last generation of twisted-pair copper and that there will be no 40GbT or 100GbT. Many data centers have already migrated to fiber for their network infrastructure.
  • 40GE as uplinks – this will happen, but not in a big way (and yes, Pronto will have a 40GE aggregation switch product). Data centers need an architecture that breaks the uplink barrier with multi-path 10G links. 40GE might be a way to work around cabling issues, but it won’t be the solution for uplinks.