Pica8 + Pronto = Open Network Platform

I am glad to announce that we have finally completed the merger of Pica8 and Pronto. As of February 1, 2012, the merged company is named PICA8 Inc., and Pronto becomes PICA8’s brand name.

A simple way to explain this is to use Cisco as an example: Cisco is the company, Catalyst is the brand name, and IOS is the software. In PICA8’s case, PICA8 is the company, Pronto is the brand name, and XORPlus is the software.

We are excited about the combination of the open software and the open platform, and look forward to changing the network industry in the next couple of years.

For more information about PICA8, please visit our web site at http://www.pica8.com.

Integrating FlowVisor into the Switch

Ivan Pepelnjak describes the following four OpenFlow deployment models in his November 2, 2011 ipSpace blog post:

  1. Native OpenFlow
  2. Native OpenFlow with extensions
  3. Ships in the night
  4. Integrated OpenFlow

Here, we propose an architecture that allows users to configure the OpenFlow switch to operate as any of the above four models. The key to achieving this flexibility is to integrate FlowVisor into the OpenFlow switch. We call this design the FlowVisor switch in this post.

FlowVisor was originally designed as a special-purpose OpenFlow controller that acts as a transparent proxy between OpenFlow switches and multiple OpenFlow controllers. In this proposed architecture, FlowVisor virtualizes the network for each controller by creating a “slice” of the network resources and delegating control of the slice to that controller. The rules for each slice are defined in the controller’s Slice Policy.
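
To make the slicing concrete, here is a minimal sketch of how a slice and its flowspace are typically defined with FlowVisor’s fvctl tool. The slice name, controller address, and VLAN value are examples, and the exact syntax varies between FlowVisor releases.

# create a slice whose traffic is handled by a remote controller (address is an example)
fvctl createSlice alice tcp:10.0.0.10:6633 alice@example.com

# delegate all traffic on VLAN 100 to the "alice" slice (4 = full read/write permission)
fvctl addFlowSpace all 100 dl_vlan=100 "Slice:alice=4"

In the proposed FlowVisor switch, the same kind of policies would be defined locally as the Slice Policies shown in the figure below.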

The following figure provides the logical view of this architecture.

Figure: Integrating FlowVisor into the switch

In the above figure, Controller 0 is a special controller that resides in the FlowVisor switch. Controller 0 and its corresponding Slice Policy play the major role in defining which model the FlowVisor switch operates in.

Controller 1 to Controller n are remote controllers that reside on remote servers. Native OpenFlow performs the OpenFlow switch functions, and the Traditional Control Plane performs the traditional L2/L3 switch functions.

Here are the models the FlowVisor switch can operate in:

  1. It is a traditional switch if there are no remote controllers and Controller 0 does nothing.
  2. It is the “Native OpenFlow” model if Controller 0 instructs the Traditional Control Plane to enter pass-through mode. In this mode, the Traditional Control Plane forwards incoming packets to Native OpenFlow directly.
  3. It is the “Native OpenFlow with extensions” model if Controller 0 instructs the Traditional Control Plane to support L2 and L2.5 protocols (such as TRILL, LACP, LLDP, …).
  4. It is the “Ships in the night” model if there are no overlaps between Slice Policy 0 and the other Slice Policies.
  5. It is the “Integrated OpenFlow” model if there are overlaps between Slice Policy 0 and the other Slice Policies.

If this architecture makes sense for future development, we will continue working on the design of the communication between Native OpenFlow and the Traditional Control Plane.

Implementing MPLS through OpenFlow

There has been high interest in implementing MPLS over OpenFlow. In the past, at least three groups have approached us about building test beds and special implementations to address this topic. While most of these projects are still under development, we have started to see the potential of these creative projects.

To get a high-level view of why this project is particularly interesting, I will refer to the project description the Stanford team posted on why they want to develop OpenFlow MPLS:

MPLS networks have evolved over the last 10-15 years to become critically important for ISPs. They provide two key services: traffic engineering in IP networks and L2 or L3 enterprise VPNs. However as carriers deploy MPLS networks, they find that (a) even though the MPLS data plane was meant to be simple, vendors end up supporting MPLS as an additional feature on complex, energy hogging, expensive core routers; and (b) the IP/MPLS control plane has become exceedingly complex with a wide variety of protocols tightly intertwined with the associated data-plane mechanisms.

We propose a new approach to MPLS that uses the standard MPLS data-plane with a simpler and extensible control-plane based on OpenFlow and SDN. There are significant advantages in using this approach. The control-plane is greatly simplified and is de-coupled from a simple data-plane. And we can still provide all the services that MPLS networks provide today. More importantly we can do much more: we can globally optimize the services; make them more dynamic; or create new services by simply programming networking applications on top of the SDN Controller.

Projects

The idea of implementing MPLS through OpenFlow has been around for a while. The Ericsson team (Howard Green, Mart Haitjema, Peyman Kazemian, and James Kempf) put together a prototype back in 2009, when OpenFlow 1.0 was still in draft.

Their work can be found in OpenFlowMPLS.

Open Source LSR (courtesy of Scott Whyte and Google)

In 2010, Scott Whyte of Google presented the OpenLSR project at NANOG.

The project used a NetFPGA card to implement a data plane of 4x1GE ports. At the protocol control layer, it implemented MPLS in the Linux kernel and added LDP on top of Quagga’s OSPF.

Recent Development

We recently saw a new implementation, Open-MPLS, at the OpenFlow site, which enables MPLS-TE with NOX (the controller), Open vSwitch (the data plane), and Mininet (to emulate and visualize the network).

This project is particularly interesting to Pronto because we have ported OVS to the Pronto 3290 and 3780, and both NOX and Mininet are open source. This allows researchers and visionaries to collaborate on the innovation. In 2010, we worked with Stanford to implement the MPLS features in our Indigo version, but did not release the code because it would deviate from the public Indigo code. Given that OVS now has the MPLS data structures defined, we are considering plugging our Indigo MPLS driver into the OVS code. This could create a stable and maintainable OpenFlow MPLS implementation for people to share.
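
To give a feel for the end goal, here is a rough sketch of what label push and pop could look like from ovs-ofctl once the MPLS actions are fully wired in. The match and action names below come from later OVS releases (and may require negotiating a newer OpenFlow protocol version); they are not supported in the build discussed here, and the ports and label value are just examples.

# push label 16 onto IP traffic entering port 1, then forward out port 2
ovs-ofctl add-flow br0 "ip,in_port=1,actions=push_mpls:0x8847,set_field:16->mpls_label,output:2"

# pop the label from labeled traffic entering port 2 and deliver it as plain IP
ovs-ofctl add-flow br0 "mpls,in_port=2,mpls_label=16,actions=pop_mpls:0x0800,output:1"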

If you are thinking of implementing MPLS over OpenFlow, drop us a note. We would love to collaborate.

Running OVS on Pronto

Pronto has supported Open vSwitch (OVS) since version 1.2. We have received several inquiries about how to test OVS with OpenFlow controllers, such as NOX.

While every test environment is unique and might require a different configuration, we want to provide a quick-start guide to give users an easier ramp-up.

Before you start your trial, you first need to decide whether you want to run OVS in standalone mode or with a controller.

Standalone OVS

This is easy. Boot up Pronto until you see the following prompt.

………….
File system OK
net.netfilter.nf_conntrack_acct = 1
net.ipv6.conf.all.forwarding = 1
3 Sep 23:26:45 ntpdate[900]: no servers can be used, exiting
System initiating…Please wait…
Please choose which to start: Pica8 XorPlus, OpenFlow, or System shell:
(Will choose default entry if no input in 10 seconds.)
[1] Pica8 XorPlus * default
[2] OpenFlow
[3] Open vSwitch
[4] System shell
[5] Boot menu editor
Enter your choice (1,2,3,4,5):3

Choose 3 at the prompt, and Pronto will launch the standalone OVS for you.
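
Once OVS is up, a quick sanity check is to list the bridges and ports it knows about. This assumes the image points ovs-vsctl at the local database socket by default; if not, pass the --db option as shown in the next section.

# list the bridges and ports in the local OVS database
ovs-vsctl show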

Running OVS with Controllers

Before you start the trial, you need to prepare a couple of things:

  • Prepare your controller, such as NOX, on a server (not on Pronto switches)
  • Prepare your Pronto switch by
    • Drop into Linux system shell (because we need to set up the environment)
    • Configure the IP address and gateway of the management port (which talks to the controller)
    • Configure the OVS environment
    • Launch the OVS process

Configure Controller

We assume users can find their controller information on the Internet. If you don’t know where to start, you can try NOX (http://noxrepo.org/wp/).
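
For reference, a stock NOX build is typically started from its build directory as below. The choice of application depends on your experiment; switch is NOX’s basic L2 learning application.

# listen for switch connections on TCP port 6633 and run the L2 learning app
./nox_core -v -i ptcp:6633 switch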

Drop Into Linux Shell

This is easy. Boot up Pronto until you see the following prompt.

………….
File system OK
net.netfilter.nf_conntrack_acct = 1
net.ipv6.conf.all.forwarding = 1
3 Sep 23:26:45 ntpdate[900]: no servers can be used, exiting
System initiating…Please wait…
Please choose which to start: Pica8 XorPlus, OpenFlow, or System shell:
(Will choose default entry if no input in 10 seconds.)
[1] Pica8 XorPlus * default
[2] OpenFlow
[3] Open vSwitch
[4] System shell
[5] Boot menu editor
Enter your choice (1,2,3,4,5):4

Choose 4 at the prompt, and Pronto will drop you into the Linux shell.

Configure the IP address

You can configure the management IP in two ways: with udhcpc or manually. We recommend udhcpc if the management port is connected to a network with a DHCP server.

# udhcpc      

Or configure the IP address manually (the address is just an example):

# ifconfig eth0 10.10.50.53 netmask 255.255.255.0

After the IP address is configured, you need to add the default gateway:

# route add -net default gw 10.10.50.1

You can verify the configuration with ifconfig:

# ifconfig

eth0   Link encap:Ethernet  HWaddr 00:E0:0C:00:00:FD
inet addr:10.10.50.53  Bcast:10.10.50.255  Mask:255.255.255.0
inet6 addr: fe80::2e0:cff:fe00:fd/64 Scope:Link
UP BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:60 (60.0 B)  TX bytes:238 (238.0 B)
Base address:0x4000

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Configure OVS Environment

#cd ovs/bin
#ovsdb-tool create /ovs/ovs-vswitchd.conf.db /ovs/bin/vswitch.ovsschema
ovsdb-tool: I/O error: create: /ovs/ovs-vswitchd.conf.db failed (File exists)
(The error above simply means the database already exists; it is safe to ignore.)
#ovsdb-server /ovs/ovs-vswitchd.conf.db --remote=ptcp:6633:10.10.50.53 &

Then launch the OVS process

#ovs-vswitchd tcp:10.10.50.53:6633 --pidfile=pica8 -- &

Now, more tuning.

#ovs-vsctl --db=tcp:10.10.50.53:6633 add-br br0
#ovs-vsctl --db=tcp:10.10.50.53:6633 set bridge br0 datapath_type=pronto
#ovs-vsctl --db=tcp:10.10.50.53:6633 add-port br0 ge-1/1/34 -- set Interface ge-1/1/34 type=pronto
#ovs-vsctl --db=tcp:10.10.50.53:6633 add-port br0 ge-1/1/35 -- set Interface ge-1/1/35 type=pronto
#ovs-vsctl --db=tcp:10.10.50.53:6633 add-port br0 ge-1/1/36 -- set Interface ge-1/1/36 type=pronto

Connect OVS to Controller

#ovs-vsctl --db=tcp:10.10.50.53:6633 set-controller br0 tcp:10.10.50.50:6636

where tcp:10.10.50.50:6636 is the controller’s address.
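
To confirm the setting took effect, you can read the controller target back from the database and ask the bridge for its OpenFlow view of the ports:

#ovs-vsctl --db=tcp:10.10.50.53:6633 get-controller br0
#ovs-ofctl show br0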

Other Useful Commands

Dump the flow information of br0 with:

#ovs-ofctl dump-flows br0

Add a flow into br0 with:

#ovs-ofctl add-flow br0 in_port=41,actions=output:43
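
Two more standard ovs-ofctl commands often come in handy during a trial. Delete all flows from br0 with:

#ovs-ofctl del-flows br0

Dump per-port packet and byte counters with:

#ovs-ofctl dump-ports br0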

Discount of Switch Products

Jim Duffy wrote an interesting report on the 76% discount Cisco gave Purdue University on the Nexus 7000.

While it is no secret that incumbents usually give hefty discounts on their switch products, this particular case is still quite interesting for two reasons.

  1. The Nexus 7000 is Cisco’s relatively new flagship product. We know it is not unusual for the incumbents to give heavy discounts on their “old generation” products, such as the Cisco 3K or 4K lines, but it is interesting that Cisco is also giving such a heavy discount on its mainstream products.
  2. The volume of the Purdue deal is not particularly big. If Cisco is giving 76% to win Purdue’s business, one has to wonder how much of a discount Cisco would give to win Microsoft or Facebook.

For years, the networking incumbents have been inflating their list prices in order to give heavy discounts to win deals. In parallel, the incumbents usually inflate their annual support-and-maintenance fees to make up for the losses from the price discounts.

This inflate-then-discount pricing scheme might have worked in the past, when switch products were vertically integrated and encapsulated as black boxes. However, as we move toward a more commoditized networking world, the pricing of network products should become more transparent, and end users should start to enjoy the benefits of open competition.

If you are using Cisco or HP and believe you got a good discount, search the Internet and you will be surprised (or upset) by the deal you actually got. If you are thinking of getting second-hand Cisco or HP switches, negotiate hard. 🙂


Fusion of legacy L2/L3 and OpenFlow

So far, we have released Pronto’s Xorplus L2/L3 stack and the OVS stack. With the current implementation, users have to choose which stack to run when the switch boots up. While this is good enough for some users, we constantly receive requests for a fusion version where users can configure the network with legacy L2/L3 protocols and dynamically use OpenFlow to direct the traffic.

Since Xorplus is already running on Pronto and OVS is already ported and released, implementing the fused stack is not a big challenge. The major challenge is the operational model between the OpenFlow and L2/L3 stacks.

Since we are proud to provide an open network platform, we have decided to publish our proposal here and solicit input from the community. If you have any feedback, feel free to leave a comment or send your proposed changes to support@prontosys.net.

Reference Design

HP and NEC have released software that can operate in this type of environment. From HP’s setup guide and NEC’s setup instructions, we can tell both use VLANs to configure OpenFlow.

The pseudo configuration procedure for both switches is:

  • create a VLAN
  • name the VLAN (e.g. OpenFlow)
  • add ports to this VLAN
  • set the OpenFlow controller
  • enable OpenFlow on this VLAN
  • use a show command to display the status of the OpenFlow instance
  • allow the LLDP protocol to run on the OpenFlow ports (this can be done through configuration or via instructions from the controller)

Users can set up multiple OpenFlow VLANs on each switch, and each VLAN can be connected to a different controller. It is even possible to add a port to multiple OpenFlow VLANs, each managed by a different controller.

[Note] Does this really work? Theoretically it should, but in real implementations OpenFlow uses a different MAC learning model (manual insertion) from the legacy L2/L3 network (auto learning). Having the two modes running on the same port might be difficult to implement.

Pronto Design

While we like the idea of using VLANs to partition the legacy switching ports from the OpenFlow ports, we also see the need for OpenFlow ports to share the data plane with the legacy switch ports. This requires the legacy ports to use the same “tables” (L2 MAC, L3 FIB, or ACL) as the OpenFlow ports.

In this case, we will provide two types of OpenFlow configuration: one is per-port OpenFlow configuration, and the other is per-VLAN OpenFlow configuration.

1. Port (including port channel) configuration.

We will add an attribute, OpenFlow, to the port configuration. When it is enabled, the port operates in OpenFlow mode and uses the port’s VLAN configuration.

When configured into OpenFlow mode, the port should, by default, drop all packets until flow entries are inserted by the controller. The spanning tree protocol should be automatically disabled.

By default, the OpenFlow port should run on the default VLAN. This means frames from this port can be forwarded to other ports on the same switch, even if those ports are “non-OpenFlow” ports.
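
As a purely hypothetical sketch of what this per-port configuration could look like in the Xorplus CLI (the openflow attribute does not exist yet; the keywords below are placeholders for discussion, and the port and controller address are examples):

# hypothetical syntax, for discussion only
set interface gigabit-ethernet ge-1/1/1 openflow enable
set interface gigabit-ethernet ge-1/1/1 openflow controller tcp:10.10.50.50:6633
commit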

2. Per VLAN configuration.

Just like the HP and NEC switches, it helps to have two separate virtual switches: one running legacy protocols while the other(s) run OpenFlow. In the per-VLAN OpenFlow configuration, each OpenFlow VLAN can be configured with its own controller. As we mentioned earlier, chip behaviors might differ and the implementation might be tricky.
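
A similarly hypothetical per-VLAN sketch, again with placeholder keywords, where each OpenFlow VLAN points at its own controller:

# hypothetical syntax, for discussion only
set vlans vlan100 vlan-id 100
set vlans vlan100 openflow controller tcp:10.10.50.50:6633
set vlans vlan200 vlan-id 200
set vlans vlan200 openflow controller tcp:10.10.50.60:6633
commit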

OpenFlow Configuration

OVS has a well-integrated config server to handle its configuration. In Pronto’s integration, we want to keep that configuration as part of OVS, instead of integrating it into the Xorplus configuration database.

Troubleshooting OpenFlow

OVS is built on top of Linux and leverages Linux network tools, such as tcpdump, for troubleshooting. Since Xorplus is also built on top of Linux, all these tools should still be available to OVS.

Sustaining Protocols

While OpenFlow does not require most of the legacy L2/L3 switch protocols, some protocols are still useful on the OpenFlow ports. For example, the LLDP protocol can still provide link status to port management. In these cases, we want to keep those protocols available on the OpenFlow ports.

Feedback Appreciated

We would love to hear about your application cases and your feedback on this design. More specifically, answers to the following questions would really help.

1. Do we need per-VLAN configuration? Per-port configuration seems to fulfill the requirements.

[Note] Based on the feedback from Jim Chen of Northwestern University, Matt Davy of Indiana University, and Srini Seetharaman of Stanford, per-VLAN OpenFlow configuration (with multiple OpenFlow instances) is particularly useful in an environment where network control is distributed between several parties.

2. Besides LLDP, do we need to pass any other protocol through OpenFlow?

[Note] LLDP seems to be the only required protocol for now. We are not sure about LACP yet.

Switch to Break Away From Vertical Integration?

We discussed a similar subject in my earlier post, Network Issues in Data Centers. Let’s take a slightly different angle on the problem this time.

Vertical Integration

A little background: vertical integration is the degree to which a firm owns its upstream suppliers and its downstream buyers. Switching products and technology have been vertically integrated for decades. Networking incumbents, like Cisco and Juniper, have been designing their own ASICs, embedded systems, and protocols, and tightly integrating these in-house components into their products.

Even though most of these products claim to be “interoperable” and “standards compliant,” administrators have been advised to stay with a single-vendor solution for compatibility reasons in day-to-day operation. This vertical integration strategy, while providing a certain level of warranty over product and service quality, puts network users at many disadvantages. For example,

  • Lock-in leads to unreasonable pricing – has anyone ever compared the price of a Cisco-certified SFP+ transceiver to any market leader’s products?
  • Slow innovation – how many networking research papers have evolved into real products in the past 10 years?
  • Slow response to customers’ problems and requests – ever tried asking Cisco to add something simple, say an agent to report certain unusual events?

Break Away?

Switch users, especially data centers, have been questioning for the past several years whether vertical integration still makes sense. Visionaries, such as James Hamilton of Amazon, have been advocating a break away from vertical integration and the creation of a new programming model that enables innovation in the network industry and avoids single-vendor lock-in.

The same requests have been raised on many other occasions. For example, one of the Open Compute workgroups discussed the possibility of bringing open architecture to the data center networks that support the Open Compute platforms. In several Open Networking Foundation (ONF) discussions, we have heard similar comments that commoditized switching platforms are fueling the demand for Software Defined Networking (SDN).

So, why do people think it is time to break away from vertically integrated network solutions? Obviously, Moore’s law has played a critical role behind the scenes. Not only are off-the-shelf switch chips getting faster and smaller, the chip vendors are also getting faster at adding new features. Not only were new networking startups such as Arista, Nicira, and Big Switch Networks all software centric, but even big incumbents like Cisco and Juniper have started to use off-the-shelf chips in their top-performance ToR switches. This indicates the chips have been commoditized and the need for proprietary chips has diminished.

Why Isn’t It Happening?

In theory, if the market is changing from vertical integration to a horizontal structure, we should see hardware prices drop significantly and software start to diversify. Why isn’t it happening? Why aren’t we seeing lower-cost 10GE platforms? Why aren’t we seeing tens of startups promoting their niche software?

In a way, it is actually happening, just not as fast as we would have expected. The cost of 10GE has been dropping fast over the past two years, from $500 a port to around $250 a port, mainly because of the competition among Broadcom, Marvell, and Fulcrum. We expect the price to drop even further as more software options become available.

What Can Accelerate the Change?

The most critical step of creating a horizontal platform is to define the interface between different horizontal layers.

As many have hoped, OpenFlow might be one option to provide a unified interface to various chipsets (or platforms). However, while we might see OpenFlow start to solve niche (but potentially large-scale) problems soon, it will likely take years before we can use OpenFlow as the ONLY interface to control the switch chips.

How about asking switch chip vendors to open source their Software Development Kits (SDKs) so software vendors and users can easily develop and share code? Well, based on our talks with these chip vendors, they have big concerns about opening their SDKs, mainly because a lot of “trade secrets” are embedded in these SDK packages.

What about the Pica8 driver? Well, we hope so. We are willing to open the API and even the software stacks so people can use them as a foundation to develop their own code. However, there are not that many developers familiar with switch programming, and it takes time to train them. On the user side, it will also take a couple of years for Xorplus to build reference accounts and gain users’ confidence.

In the meantime, we also believe there is room to improve our switch platforms to make them more developer friendly. For example, using an x86 CPU or creating native compilers on Pronto might help people avoid the learning curve of cross compilation. Creating virtual Ethernet devices for each data port might help people virtualize the switch chips. If you have any ideas that can help accelerate the transition, we would love to hear them.

Announcing OVS Support

Pronto Systems has announced that it is adding Open vSwitch (OVS) to its Open Switch Software Architecture (OSSA) suite.

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license.  It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols.

Pronto has completed the integration of OVS 1.1.1 into the OSSA 1.3 image. After booting Pronto OSSA 1.3, users will see the prompt:

Please choose which to start:
(Will choose default entry if no input in 5 seconds.)
[1] Pica8 XorPlus * default
[2] OpenFlow
[3] Open vSwitch
[4] System shell
[5] Change the default setting
Enter your choice (1-5):

The ported OVS code uses the Pica8 Driver to configure and control the switch chips. While the Pica8 Driver is encapsulated behind the Pica8 API and released as a binary, the ported OVS stack on top of the driver is completely open source. Users can request the OVS source code from Pronto Systems.

Why Does it Matter?

Open vSwitch (OVS) is one of the most popular open source implementations of OpenFlow 1.0. OVS is the default virtual switch of Citrix XenServer (project Boston). It is also integrated into OpenStack and the OpenNebula project. The code has also been ported to a few ASIC-based switches, but unfortunately none of those ports is open source.

By porting OVS to OSSA, we provide both the source and a production-ready binary to researchers, developers, and administrators. This, we hope, will help the R&D community continue enhancing the features and performance of OVS switches.

Release Note

Here is the release note of the OVS port.

In Xorplus release 1.3, Open vSwitch (OVS) is ported from openvswitch-7fb563b.tar.gz of the master OVS branch.

This OVS module can be remotely controlled through an ovsdb-server running on a different Linux platform (users need to make sure the version of ovsdb-server is compatible with this OVS branch). When using ovsdb-server to control OVS on Pronto switches, users need to configure the netdev and datapath types as “pronto”.

For instance,

ovs-vsctl --db=tcp:10.10.53.53:6636 add-br br0

ovs-vsctl --db=tcp:10.10.53.53:6636 set bridge br0 datapath_type=pronto

ovs-vsctl --db=tcp:10.10.53.53:6636 add-port br0 ge-1/1/2 -- set Interface ge-1/1/2 type=pronto

ovs-vsctl --db=tcp:10.10.53.53:6636 add-port br0 ge-1/1/3 -- set Interface ge-1/1/3 type=pronto

Known issues of Pronto OVS in this release:

1. The following actions are not yet supported:

ODP_ACTION_ATTR_SET_NW_SRC,
ODP_ACTION_ATTR_SET_NW_DST,
ODP_ACTION_ATTR_SET_TP_SRC,
ODP_ACTION_ATTR_SET_TP_DST,
ODP_ACTION_ATTR_SET_TUNNEL,
ODP_ACTION_ATTR_SET_PRIORITY,
ODP_ACTION_ATTR_POP_PRIORITY,
ODP_ACTION_ATTR_DROP_SPOOFED_ARP.

2. QoS and queue configuration for ports, and flow matching on tunnel ID, are not implemented.

Source code distribution

The source code of the Pronto OVS is available through email request. Please send your request to support@pica8.com.

Notes On OpenFlow

OpenFlow has attracted significant attention in the past three months. Part of the reason is the announcement of the ONF. The OpenFlow Lab at Interop also helped push up the interest level. Inevitably, many doubts have also surfaced about whether OpenFlow deserves such a high level of interest.

While OpenFlow is not the only focus of our Open Switching Software Architecture, we see it as a technology with real potential to change how we control and configure networks in the future. We would like to share our view on this technology.

Let’s be clear. Can OpenFlow solve all network problems?

Not likely, at least not in the short term.

OpenFlow is a control interface (or API) to program the switch data plane. It does not define a new frame type (yet) or a new way for switches to distribute traffic. This means OpenFlow is still limited by whatever data plane capabilities existing Ethernet can support. For example, with OpenFlow, Ethernet will still lose packets when congestion occurs.

OpenFlow is just a control protocol, so what is new?

OpenFlow is in fact more like an API to program the switch data plane directly.

In traditional network administration, users configure protocols through the CLI (or other configuration interfaces), and the protocols then figure out how to control the data plane. With OpenFlow, administrators or software can control the data plane directly through the OpenFlow interface.

Programming the data plane without a protocol’s help used to be a no-no. So, what has changed?

Most L2/L3 switching technology was designed to respond to changes in topology. Protocols are designed to handle the addition, removal, and failure of nodes in the network. While this requirement is still quite important to many networks, some operators are starting to realize that these protocols add tremendous complexity to network management.

Take the data center as an example. In today’s data centers, switches and cables are part of the infrastructure, which is well planned and laid out long before the actual servers are installed. In this type of network, the topology is fixed, i.e., there is no need for STP to figure out the “best path” or OSPF to find the “shortest path”. All traffic and routes can be pre-determined.

In this case, if network administrators or software can directly program the data plane, either statically or dynamically, it will be much simpler than trying to distribute the traffic through ECMP or TRILL implementations from multiple vendors (or even different product lines from the same vendor).

So, what if a failure occurs in the network? Well, since the topology is pre-determined, the error handling can be pre-programmed as well. There is really no need for individual switches to figure out an alternative route after the failure happens, especially when the failure is local to a handful of servers and most of the switches should not even care.
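
As a minimal sketch of such pre-programmed error handling with stock OVS tools (ports and priorities are examples): the backup entry sits in the flow table from day one, and whoever detects the link failure, the controller or a local script, simply removes the primary entry.

# primary path: traffic from port 1 goes out port 2
ovs-ofctl add-flow br0 "priority=200,in_port=1,actions=output:2"

# backup path, pre-installed at a lower priority
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:3"

# when port 2 fails, delete the primary entry and the backup takes over
ovs-ofctl --strict del-flows br0 "priority=200,in_port=1"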

Practically, where can people use OpenFlow?

While still at the R&D stage, OpenFlow is showing some interesting signs of solving real problems.

One particularly promising area is building dynamic virtual networks in data centers. With GRE or similar tunneling technology, OpenFlow can build a virtual network on top of the fixed cabling. This allows data center administrators to dynamically associate resources through the virtual network. Randy Bias of Cloudscaling has published an insightful white paper on this architecture.
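
On stock OVS, attaching a GRE tunnel endpoint to a bridge is a one-liner, which is part of what makes this overlay model attractive (the remote IP is an example; whether a given hardware datapath can offload GRE encapsulation is a separate question):

# add a port that tunnels overlay traffic to a remote endpoint over GRE
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.10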

Another interesting area is building an MPLS network through OpenFlow. Ericsson Research has built a wiki to explain how this can be done.

An interesting coincidence in the two examples is that both try to use OpenFlow to program traffic that tunnels through the existing network. It definitely helps to bridge the trial gap if OpenFlow does not require users to replace all existing equipment.

Finally, is OpenFlow just hype?

It does not feel like hype yet.

While one group of people is over-excited about this new technology and another group is eager to dismiss it, most developers and network specialists are still investigating how OpenFlow is going to evolve.

Some people predict it will only evolve into a niche. Well, compared to Ethernet, haven’t new switching technologies, such as FCoE and TRILL, all eventually become niches?

Network Load Balance with OpenFlow

James Hamilton had a post on Network Load Balancing with OpenFlow. In this post, James reiterates the vision of migrating networking to open and commoditized platforms, a vision we not only share but work hard to make happen.

Open, multi-layer hardware and software stacks encourage innovation and rapidly drive down costs. The server world is clear evidence of what is possible when such an ecosystem emerges.

This post also gives an interesting example of using OpenFlow to load balance a multi-campus network.

… Essentially they distribute the load balancing functionality throughout the network. What’s unusual here is that the ideas could be tested and tried over a 9 campus, North American wide network with only 500 lines of code. With conventional network protocol stacks, this research work would have been impossible in that vendors don’t open up protocol stacks. And, even if they did, it would have been complex and very time consuming.

I find an interesting point in James’s conclusion: without OpenFlow, it would have been difficult (complex and time consuming) to try out the load balancing idea.

The question about the value of OpenFlow is often raised as a point of attack on this new technology. People often quote Professor Scott Shenker, a founding member of the ONF: “[OpenFlow] doesn’t let you do anything you couldn’t do on a network before.” Some say OpenFlow is nothing new and brings no value to the networking world. This is only half of the truth. Take the load balancing case as an example: that feature can be done on selected Cisco switches, if you hire a network specialist for a couple of weeks, if not months, to try out the various switches and configurations.

With the example of implementing a load balancing network in less than 500 lines of code, we see the potential of the OpenFlow protocol.

  1. New features can be built and tuned through software, without manual configuration. This gives hope that a scalable network can be self-managed and tuned by software.
  2. By allowing researchers and engineers to try out new ideas on a production network, OpenFlow can roll out network innovations much faster than a traditional network environment.

This implies that someday system administrators might be able to use software, through the OpenFlow protocol, to take over tasks that today can only be carried out by network specialists. The system administrator can then focus on more critical things, like optimizing the efficiency of the whole data center or planning for future growth, rather than trying to trace down packets on the wire.