Open Flow Info
OpenFlow/SDN is emerging as one of the most promising and disruptive networking technologies of recent years. It has the potential to enable network innovation and create choice, and thus help realize new capabilities and address persistent problems with networking. It also promises to give network operators more control of their infrastructure, allowing customization and optimization and thereby reducing overall capital and operational costs. Industry is embracing SDN: network operators plan to build their infrastructure using this technology, and incumbent vendors as well as startups are developing a range of products for different market segments, including data center, service provider, and enterprise.
What is OpenFlow/SDN?
SDN is a new approach to networking and its key attributes include: separation of data and control planes; a uniform vendor-agnostic interface called OpenFlow between control and data planes; a logically centralized control plane; and slicing and virtualization of the underlying network. The logically centralized control plane is realized using a network operating system that constructs and presents a logical map of the entire network to services or control applications implemented on top of it. With SDN, a researcher or network administrator can introduce a new capability by writing a simple software program that manipulates the logical map of a slice of the network. The rest is taken care of by the network operating system.
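To make the idea concrete, the sketch below shows roughly what such a control program can look like when written against Ryu, one of several open-source OpenFlow controller frameworks (the choice of Ryu here is illustrative, not something prescribed by the text above). The app installs a table-miss rule on each switch and then floods every packet it receives, turning the network into a simple hub; a real application would replace the flood logic with its own policy, but the overall shape of the API is the same.

# Minimal OpenFlow 1.3 "hub" application for the Ryu controller framework.
# Illustrative sketch only: run with `ryu-manager hub_app.py` against an
# OpenFlow 1.3 switch (e.g. Open vSwitch or Mininet).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class HubApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss rule: anything the switch does not recognize goes
        # to the controller, where this application decides what to do.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Trivial "policy": flood the packet out every port except the one
        # it arrived on. A real app would consult its network map here.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=msg.match['in_port'],
                                        actions=actions, data=data))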
Promise of OpenFlow/SDN
Enabling innovation and creating choice has been a proven recipe for realizing the best solutions in the marketplace, and that is exactly what SDN is attempting to do with networking. SDN enables innovation with its network operating system and network virtualization. The network operating system provides a central vantage point and a well-defined API that make it easy for the operator or a third party to create new network management and control applications. The application developer essentially operates on a local network graph, or an even simpler abstraction of the network, and does not need to worry about all the complexity of distributed control of the network. Network slicing and virtualization make it easier to experiment with new capabilities in isolated slices of the network without impacting other parts of the network. Network virtualization, like server virtualization, also helps significantly with efficient use of network resources across multiple customers and services.
SDN enables choice with the separation of data and control planes and a vendor-agnostic interface (OpenFlow) between the two, a well-defined API for the network operating system, and network virtualization. The OpenFlow interface allows a network operator to mix and match devices from different vendors and make independent choices of control-plane and data-plane vendors. The well-defined API for the network operating system means third parties can develop and sell network control and management applications, creating more choice for network operators. Finally, network virtualization allows a network operator to use different, customized control-plane solutions for different virtual networks and thus not become dependent on a single vendor. The network operating system API combined with slicing and virtualization also makes it possible for researchers to experiment with their research ideas on a slice of a production network, within or across campuses, without impacting it, offering researchers a much larger realistic infrastructure than has been possible before. Short videos of example research experiments enabled by OpenFlow/SDN can be found at http://www.openflow.org/videos/.
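As a rough, hedged illustration of the slicing idea (production deployments of the time used FlowVisor to partition a network; that tool is not shown here), the helper below confines one experiment's traffic to its own VLAN and steers it toward an experimental path, leaving all other traffic to the production rules. It assumes a Ryu datapath handle `dp` as in the earlier sketch; the VLAN ID and port number are made-up examples.

# Hypothetical slice-isolation helper (illustration only), reusing the
# Ryu OpenFlow 1.3 objects from the previous sketch.
def isolate_slice(dp, vlan_id, experiment_port, priority=100):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    # Match only frames tagged with this slice's VLAN; OFPVID_PRESENT
    # marks the VLAN tag as present in OpenFlow 1.3 matches.
    match = parser.OFPMatch(vlan_vid=(ofp.OFPVID_PRESENT | vlan_id))
    # Steer the slice's traffic toward the experimental path; production
    # traffic never matches this rule and is untouched.
    actions = [parser.OFPActionOutput(experiment_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                  match=match, instructions=inst))

# Example: keep everything tagged VLAN 10 inside the research slice.
# isolate_slice(dp, vlan_id=10, experiment_port=4)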
Owners of data center, enterprise, and service provider networks realize the promise of SDN and have been actively pursuing it. Some of the world's largest network providers, including Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo!, have created the Open Networking Foundation (ONF), publicly launched in March 2011, to standardize and promote SDN interfaces and protocols including OpenFlow. Over 40 companies are now members of the Open Networking Foundation and 15 vendors demonstrated interoperable OpenFlow/SDN offerings at Interop 2011 in Las Vegas. Several vendors have announced products for both switches and controllers and many more are expected to do so during the coming 12 months.
More Information
Open Networking Foundation
OpenFlow and SDN
Watch videos to learn more about OpenFlow/Software-Defined Networking
David Ward | April 16, 2012 at 5:30 am PST
This is the question I continue to ask myself as I look back at my career at various companies in multiple industries. As I look back, I remind myself of the industry-changing trends that we've gone through in the past few decades: the rise (and fall) of the mainframe, the PC, numerous different networking protocols and technologies, and various standards that come and go. On top of all this I recall dozens of system architectures and hundreds of programming languages. And these days: open source software, Si-photonics, mega/giga/tera-bit interfaces, smart phones and tablets, big data and real-time analytics, cloud computing, everything fully virtualized. Let's pause here to think about the game changers. The architectures, processes and ideas that once pushed industries forward seemed to eventually disappear into the next big thing. Distributed Object Technology (RFC), Loosely Coupled Technology and Architectures (SOA). Agile, or is it Dev/Ops? As you can see, there are major differences here. Each technology trend brings tremendous value and is of critical importance, but, as with so many of these examples, there is a fundamental difference: many of these trends evolve and merge into a much bigger vision. It's also present in how we view SDN and how we are including it in what we're building at Cisco. Once you take that step back and can separate the game-changing from the industry-shifting movements, you can fundamentally break down why we believe that SDN is a piece of a larger puzzle. Software-defined networks are a great game-changer, likely to help many companies provide new services; it is definitely something we see as beneficial and are working on. However, when I dig into what we're truly working on and the ripples that we (and a few of our customers) think this will have, we see market evolution. Something that affects not only how people build technology or products or scale and operate them, but something that makes people look at their business differently. It allows them to re-think revenue and business models. How people collaborate. It creates new business channels and models on top of that. That's where we are. And as we talk to more service providers, we find that they think so too.
Let me share with you my perspective.
Two Fundamental Platforms and Underlying Complexity
The importance of the diversity of functionality and programming interfaces necessary to make networking hardware and software platforms work better together cannot be overstated. But to make them really work better together, we must start thinking multi-dimensionally. The need for real-time, duplex communication between the hardware and software platforms and the application development platforms must be pushed further. It's absolutely necessary for this to happen, as the intelligence in the network is far underutilized. But that's not enough. How do the hardware and software platforms connect? How is the value delivered in a usable way to the market segments that need it? Programmability, coupled with key protocols, APIs and new technologies, enables real-time, full-duplex transfer of state between the application and management/service planes of the network. This isn't only about separating and programming two planes (which is largely how SDN is defined today); it's about full-duplex access to all of the planes of the network. You can only get this to work if the network programming interfaces (NPIs) are dynamic and capable of extracting data out of the network as well as programming features and state. The elements across and between the layers of the network need to be able to talk to each other in ways that allow them to signal application requests faster and with location-based information, enabling new functionality for developers that can be integrated into development platforms. Real-time, full-duplex interaction is key to how these programmable interfaces must work with the application and the network, and this is where SDN needs to get to. I see a workflow based on the concept of a real-time feedback loop (a simplified sketch of such a loop appears below): data is mined from the network; analytics engines transform that data into actual, usable information that can be consumed at multiple levels of the infrastructure; orchestration mechanisms carry it to policy servers to create a unified policy (in the generic sense); and that policy is programmed into the network using an active set of NPIs. Policy here can be configuration, features, VDI, hypervisor, security profiles, or content access; that is, we are referring to policy in the most general sense.
Multi-Dimensional Networks
Imagine an environment that is constantly managing not just the state of the network but also analyzing the applications' network requirements and state and calculating policies that are programmed across every network plane. This vision reflects exactly what providers are already attempting to build in their networks, and it is only realizable when the application developer's information and ability to program state are multi-dimensional rather than uni-dimensional. If we introduce this continuous feedback loop, coupled with the multi-dimensional benefits, it becomes the key that unlocks the real value that comes from SDN. Better and more intelligent applications can be created by integrating data derived from the network. The critical understanding is that multi-dimensional environments add additional elements to SDN. Deciding what data is the right data, and how to transform and package it into information that can be consumed and utilized, is critical. Not everyone consumes technology the same way, and it's rapidly evolving. Much of what we're focusing on includes this in particular. In order to innovate and scale into the future, while also providing customers with investment protection, we must expand our thinking to include the data transformation, delivery, and consumption models as well as the evolving architectures of existing platforms.
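The sketch below is a deliberately simplified rendering of that feedback loop, not a description of any Cisco product or API: every function, device name and threshold in it is a hypothetical placeholder. It exists only to show the shape of the cycle described above: mine state, transform it into information, derive a policy, and program the policy back through an NPI.

# Hypothetical, illustrative feedback loop: telemetry -> analytics ->
# policy -> programming. No real controller or vendor API is used here.
import time

def collect_state(devices):
    # Pull raw telemetry (flows, link utilization, subscriber/session state).
    # Stub data stands in for whatever mining protocol is actually in use.
    return {d: {"link_utilization": 0.9 if d == "edge-1" else 0.3} for d in devices}

def analyze(raw):
    # "Analytics engine": turn raw data into consumable information.
    return {d: {"congested": s["link_utilization"] > 0.8} for d, s in raw.items()}

def derive_policy(info):
    # Policy in the most general sense: config, QoS, security, paths.
    return [{"device": d, "action": "reroute"} for d, v in info.items() if v["congested"]]

def program(policies):
    # Push the unified policy back into the network via an NPI (placeholder).
    for p in policies:
        print("programming %s on %s" % (p["action"], p["device"]))

if __name__ == "__main__":
    devices = ["edge-1", "edge-2", "core-1"]
    for _ in range(3):          # the loop would run continuously in practice
        program(derive_policy(analyze(collect_state(devices))))
        time.sleep(1)           # real systems would be event-driven, not polled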
Harnessing this kind of power for development will foster innovation and yield new services, but at what level? And how will we drive real business benefit versus just short-term improvements?
Challenging the Current Business Model
I hear a lot of talk about the business problems that SDN will help address, and you can read about a variety of use cases for SDN. Many use cases concentrate on extracting data and programming data (or information, once transformed), but they're missing the real problem. SDN is about helping customers and users find ways to use their network and operational resources efficiently and, to start, increase revenue from services. If we base our offerings solely on lowering infrastructure cost and gaining a couple of points on total cost of ownership, we change small things. Pennies in a jar, so to speak. Better and more lucrative business models are needed not only for providers but for enterprises as well. At Cisco, we know that there's something better we can do for our customers, partners, and end-users. The problems we see customers facing, and the use cases on the market today, quite emphatically demonstrate that the solution we need to provide is bigger than just separating the control and forwarding planes in the network. This is a critical first step, but there needs to be more than just layer separations and abstractions. Additionally, SDN is not solely about a single feature or a single network function. Most often, virtualization is the capability that SDN is promised to bring, and it is treated as applicable primarily in the data center. In my view it's much more than that. The programmability that we have at our fingertips can be used in a multi-tenant data center, but it isn't limited to just that function. We can have full-duplex programming of ephemeral state at all sorts of network perimeter points: hypervisors, DC security domains, Layer 3 VPN edge points, mobile and wireline subscriber termination points, secure VPN gateways, WLAN access points, CDN head-ends, provider peering points. Anywhere you can derive state, you'll want to program a policy or feature or be able to modify the customer experience, and that's also where you'll get the best data to analyze.
The Gold Mine
The data held in the network is an untapped gold mine that can improve many things: real-time information and location-based offerings, new service insertion points, intelligent applications that can re-route themselves based on network data, and more. All of this leads to better user experiences and the potential monetization of these services. Mining subscriber, session and application state only helps if this feedback loop between a policy engine and the network works to mutual advantage. Optimizing the network not only reduces cost by using available capacity, enabling new protection/restoration strategies and agile, flexible OSS performance and service-enablement capabilities, but also opens up innovation in areas that have needed attention for a long time. These are primarily centered on breaking the shackles of current provisioning protocols like RADIUS, COPS, PCMM, GDOI and CAPWAP, and mining protocols like NETFLOW and IPDR, as well as gateways. My current mission, working with our Cisco technical talent, is about enabling both network optimization and service monetization to help customers bend back the cost curve and find value by increasing the agility of service offerings and improving the experience. That's today.
For tomorrow, it needs to be about actual consumability, usability and smart, intuitive, usable data that delivers new business models based on abstracting network intelligence and presenting it to various existing application development environments. Imagine having, embedded in HTML5, the ability to optimize networks more easily, faster, more effectively, and at much greater scale. Better usability and real-time visibility shift the focus away from tasks that should be automated and from deployment and scalability issues. This leaves more time to focus on strategy, planning, creating new business and monetization opportunities, and actually delivering them to generate new revenue channels and real money versus just value. Value today is about TCO, optimization and bottom lines. But what about enablement and strategies for better business? What about revenue? Well, THIS is the real challenge, and the industry needs more time to work on all this. In the end, that's how I define success for myself, our team, our business, and our industry at large. What should be clear is that in no way do I see SDN or programmable interfaces as a technology that commoditizes networking equipment. Instead, I see it unleashing all the potential we've built into these systems and augmenting the current Internet. Network information is power: power generated by data transformation done well, such as on-demand, flip-the-switch, self-service experience levers, and applications that know where you are and re-route your application to the closest point of contact to get the information to you (and your friends) faster than before. This is a small subset of the problems we're solving today.
Multi-Dimensional Approach
The multi-layered approach that I'm advocating is a bit like viewing a forest as an ecosystem. Walking into the woods and looking only at the bark of a single tree gives you a limited view of the health of the forest. One has to look at all the elements that contribute to creating the forest. A forester must look at all these elements simultaneously, assess changes and inputs, history and emerging growth, to understand and evaluate the overall health of the forest. This requires analysis that looks not only at a single aspect of the system but also at how all the elements interact with each other. Our view at Cisco uses this ecosystem analogy for how we see the interaction of all the elements and variables on the Internet. The system allows the network to provide information and state to applications, enhance sessions, or enable restoration scenarios in ways that were previously unachievable. The network view must be both centralized and distributed at the same time. A centralized view of the topology, with real-time updates of the multi-layered network, coupled with the existing distributed control plane, gives the ability to find and use otherwise trapped capacity. Therefore, the foundation of the Internet built upon distributed algorithms doesn't get reinvented; instead, it is augmented. We can now discover and view the end-to-end performance, delay, jitter, etc. of the network and place bandwidth and services on optimal paths. Without distributed routing we lose the highest-performance resiliency and protection/restoration; without the centralized view we lose the ability for an application developer to write to the network rather than control one node. The key is just this: SDN must work with the entire network as a system and not be limited to a pairwise architecture. By giving developers the ability to see and, in real time, get the weather report of the network, they can enable service agility that allows them to build better and more intelligent applications. Session-aware applications work with every network layer and integrate with policy engines and a fluid network infrastructure that is in constant touch with those applications. This approach can drive operational costs down while laying the foundation for monetization and new business models, when delivered in usable ways that are actually consumable by people building for the network layers.
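A toy illustration of what that centralized, real-time topology view adds on top of the distributed control plane: if an application can see measured link utilization across the whole network, it can place traffic on paths a hop-count-based protocol would never pick, which is one way "trapped" capacity gets used. The topology, the utilization numbers and the use of the networkx library are all invented for this sketch; nothing here describes a Cisco implementation.

# Toy path placement over a global topology view (illustrative only).
# Requires the networkx library (pip install networkx).
import networkx as nx

g = nx.Graph()
# Link cost = measured utilization fed in from telemetry (made-up values).
g.add_edge("sea", "chi", weight=0.9)   # direct hop, but congested
g.add_edge("sea", "den", weight=0.2)
g.add_edge("den", "chi", weight=0.3)
g.add_edge("chi", "nyc", weight=0.4)

# Hop count alone would choose sea -> chi -> nyc; the centralized view
# routes around the hot link and uses the otherwise idle capacity.
path = nx.shortest_path(g, "sea", "nyc", weight="weight")
print(path)   # ['sea', 'den', 'chi', 'nyc']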
A Development Community Working Together
Going back to the beginning of my post, I hope you now see the opportunity and value of both game-changers and industry shifts. When you allow yourself to take that step back, you can see that what we're thinking about is more than just SDN. We want your businesses to be always-on and using network intelligence in ways we know are possible. We want proactive versus reactive. We want real time. We want it now, and we want to change it all tomorrow. Programmable networking and enabling application development platforms with network intelligence run strong within me. On a personal note, I've been working in the industry and inside Cisco to build the foundations and technologies that enable hardware and software platforms to work together. To be clear, this is not a product. It's a solution. And you don't have to rip out your existing network to enjoy the benefits; we will make it evolutionary. Cisco wants to give you a solution that solves multiple real problems in new ways and allows our customers, our partners, our users and developers to do things better, faster and more effectively, and in the way you want to do them. I'd love to hear your thoughts as well. Where do you see the future of networking? I have much to discuss, and I'll be presenting Cisco's viewpoint at the Open Networking Summit in Santa Clara this week. Join us in shaping this next shift: hear the presentation, meet my colleagues, and let's continue to push our thinking further to enable what an intelligent network can deliver.
Tags: cloud, cloud_computing, SDN, Service Provider, tco
There is a joke among some software-defined networking and OpenFlow insiders here at the Open Networking Summit in San Jose: the Open Networking Foundation (ONF) will not be like the IETF -- it will not develop thousands of complicated protocols and standards based on one company's proprietary method; it will leave as much open for development as possible. Dan Pitt, ONF executive director, doesn't crack this joke, but he does make it clear that the foundation is about much more than developing the OpenFlow standard. The ONF is just as focused on fostering the kind of open application-development community for networking that can be found in other parts of the open-source software world. This is a massive cultural shift for a proprietary, ASIC- and hardware-oriented networking market.
It's not that Pitt is anti-standard. With software-defined networking, the control plane is separated from the physical network and can separately control every flow on the network, depending on the needs of the applications that reside in the upper layers. In this scenario, the only place where a standard must come into play is in the language the controller uses to translate information from the applications to the underlying physical and virtual switches. That's where the OpenFlow standard comes in, Pitt says. Above that, in the application layer, anything goes.
So this week at the Open Networking Summit, there's lots of talk about these potential applications doing everything from prioritizing video and unified-communications traffic to controlling mobile access and managing intricate security strategies that differ for every tenant on a network. Actual product releases are still scarce here at the summit, and there's some anxiety about the lack of OpenFlow-friendly switches available (especially Cisco's lack of commitment) to ultimately support the ecosystem of applications. Yet, at this year's summit, conversation has evolved from defining OpenFlow and SDN to laying out actual use cases and how the technology can address customer need, Pitt said. The next step will be product releases.
One thing is clear: there's no lack of excitement around the Open Networking Foundation and OpenFlow. In the year since the ONF was founded, membership has grown to 66, from 17, and members include every major networking vendor, Cisco among them, as well as Google, Yahoo, Facebook and VMware. The Open Networking Summit, with upward of 800 attendees, is completely sold out. Pitt talked to SearchNetworking about ONF and OpenFlow's general direction.
Many companies at the Open Networking Summit will talk software-defined networking controllers and applications, but these won't work without OpenFlow-friendly switches. Where are we in that process?
Dan Pitt: We had a plugfest last month in which we had at least a dozen switch code bases, four controller code bases and FlowVisor. What we're seeing is that every company we talk to is doing something in terms of product development or quiet trials to be prepared for this market. What I am hearing from the vendors is that their customers are asking them what their capabilities are for OpenFlow and software-defined networking. Everybody is trying to figure out how to do it, how to get out there first and which customer sectors to attack.
Is there concern that key players, namely Cisco, will not go OpenFlow? If that's the case, will that hinder advancement for the whole ecosystem?
Pitt: I can't speak for what Cisco's plans are. Our experience at ONF is that Cisco is an outstanding technical contributor and technical advisor. At this point, every participant is trying to understand where this is going and how to make it customer-satisfying technology. There are a variety of approaches to software-defined networking, including some proprietary ones. Some of these predate OpenFlow as a popular standard for the communication between the control plane and the forwarding plane. One thing about all of the incumbents is that they know what their customers' problems are, and they've been constrained by the culture of distributed RFC standards and then having to accommodate their proprietary operating systems and their shipping schedules. Now they can just write the software that controls the network directly to meet those customer needs. That said, when it comes to open software, it's anybody's ballgame. There will be lots of players, and to the benefit of the customers, there will be competition to see who can provide this software and this customization aspect fastest.
Last year we heard about the basics of separating the control plane in SDN. But now I am hearing more about specific applications emerging. Can you talk about what these applications will be?
Pitt: With software-defined networking, you can write software to make the network do exactly what you want it to do. So we have this logically centralized control plane that we call a controller, which is just a software function. The controller conveys to the switches what they should do with traffic when it comes in. So, for example, if you want to multicast today, you have all these protocols to configure these trees, and it's so complex that nobody uses it. If you want to multicast a flow with OpenFlow and SDN, the controller loads the flow pattern into the switches, so when a packet comes in, it sends it downstream to the switches and ports from there. There's no need for the network to configure itself. It's just direct programming of the routing algorithm. You can have a module above the controller that dictates access control [using user-based policy], or traffic engineering, or security, or compliance. Those are modules that influence calculation of the paths through the network. And what's more important is that software in the control plane can be written by anybody. An operator can write their own software, a vendor can supply their own software, and independent software vendors who do nothing but software can supply these as products. Enterprises can hire their own staff or contractors to write this software. I think there will be a big market for networking apps once we have common, agreed APIs [application programming interfaces].
Application development communities have not been part of the networking culture. Will networking pros easily make that cultural shift?
Pitt: This is part of what's fundamentally different and exciting about networking. It's taking networking into the realm of software, and it will be a big cultural change.
We have been hardware- and protocol-oriented, and now we're going to be software- and API-oriented. We will create standards like we're doing for OpenFlow, but that doesn't mean that all we do is create standards. We are going to standardize as little as necessary. [In the past] the market has arrived at conventions before a smoke-filled room of a standards committee. We are encouraging [a move away from that] into software culture.
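To make Pitt's multicast example a bit more tangible, here is a hedged sketch of "direct programming" of a distribution tree: instead of running multicast routing protocols, a controller application installs, on each switch along the tree, one rule that replicates a matching flow to a list of downstream ports. The code reuses the Ryu OpenFlow 1.3 objects from the earlier sketches; the group address and port numbers are invented examples, and a production design might use OpenFlow group tables instead.

# Illustrative only: replicate one multicast group's traffic to several
# ports using a single OpenFlow 1.3 rule (Ryu datapath handle `dp`).
def install_multicast(dp, group_ip, out_ports, priority=200):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    # Match IPv4 packets addressed to the multicast group.
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=group_ip)
    # One output action per downstream branch of the tree at this switch;
    # the switch copies the packet out each listed port.
    actions = [parser.OFPActionOutput(p) for p in out_ports]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                  match=match, instructions=inst))

# Example: on this switch, send group 239.1.1.1 out ports 2, 3 and 5.
# install_multicast(dp, "239.1.1.1", out_ports=[2, 3, 5])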
What's different at Open Networking Summit 2012, compared to a year ago?
Pitt: A year ago everyone was excited by the abstract notion of this stuff. This week we will see a lot of progress about making this real.
What does "making it real" mean at this point? Does that mean product announcements?
Pitt: That means people finding controllers and switches. We're seeing implementations in use of merchant silicon, ASICs, network processors and purely in software. The basic thing about controllers is what do they contain? What are the boundaries of a controller? Nobody knows. Big Switch has already released an open-source basic controller, which translates logical directives into flow tables and switches. We'll see more of how these pieces come together to be offered commercially. Then I am looking forward to hearing how the carriers are adopting it and deploying it.
I know there is a difference in adoption need (and therefore uptake) in SDN in the enterprise vs. service providers. For service providers, there is a pain point, so they need SDN. But for the enterprise, that may not happen for a while. What will change that?
Pitt: We talk to people in the financial services community, and they are under these compliance mandates where they have to separate investment banking from commercial banking and their money from their customers' money, so they are running completely different infrastructures that are very expensive. They are looking at this as a way of having a common physical infrastructure with all those cost benefits. They can now have separate logical overlays that are auditable. When you are talking to enterprises, you have to consider different sizes of enterprises. The larger enterprises are going to [do this] first. Smaller and medium companies are going to look for cloud providers to build and manage their plumbing for them. It's just not part of their core competence.
The Open Networking Research Center (ONRC) was announced last week. What will the relationship be between the ONRC and the Open Networking Foundation?
Pitt: We don't know the nature of a formal relationship yet, but we are intending to be partners. They are on the research side, and we are on the commercialization side, so we have complementary roles.
Verizon's vast network involves an equally massive collection of network equipment installed in the field, where the technology behind the company's service offerings and network intelligence reside. Through SDN, Elby says Verizon will be able to centralize its intelligence from a high number of remote locations to its more accessible data centers. The software layer that would grant control to its network intelligence would enable Verizon to communicate and manage its network elements more flexibly, "so that we can explicitly define paths that might be based not just on traditional routing protocols, but will be based on things like service awareness or subscriber awareness or state of the network in terms of congestion or capabilities." OpenFlow, consequently, will be critical to making this transition not just for Verizon, but any other organization looking to benefit from SDN. "Software defined networking itself is a framework and it's very disruptive. It's sort of a radical change in how we do networking," Elby says. "But once you decide to do that, unless you think you're going to get all your software or all your components from one vendor, you need some standardization, some ways where your software can speak to any switch or router." That's why Verizon, as well as fellow board members Yahoo, Google, Facebook, Microsoft, Deutsche Telekom and NTT, "jumped at" OpenFlow after the initial research work done at Stanford grabbed their attention, Elby says.
By standardizing SDN, Elby says the ONF can "take OpenFlow from its initial academic inception and harden it up, expand it into something that can really be operational in our environments." Currently, OpenFlow appears to be at the intersection of research and deployment. A few vendors, namely Big Switch Networks, NEC, IBM, and HP, have introduced their offerings to get in on this soon-to-be burgeoning market early, while Cisco is rumored to be quietly developing its own strategy.
At ONS, a bevy of other vendors plan to introduce their own contributions to bringing SDN capabilities to life in the enterprise. Dell will focus on ease-of-use and "cloud orchestration" techniques that could facilitate management of a software-defined network. Vello Systems will showcase its offerings for preventing latency in networks for both enterprises and service providers. Extreme Networks will present options for workflow, provisioning and management in campus networks and data centers. The list goes on and on. From here, the ONF will continue to analyze the potential for SDN as the wheels of OpenFlow meet the road of enterprise and service providers' networks. Since February 2011, the foundation has already approved two versions of the OpenFlow protocol, having published Version 1.2 just two months ago. Other research initiatives, such as the Open Networking Research Center, have reportedly recruited the assistance of founding researchers at Stanford University and the University of California, Berkeley. Growing support for OpenFlow and SDN suggests that the standard has turned a corner toward large-scale deployment, Elby says. "I really think the end of last year and beginning of this year we've really sort of crossed that line where companies realize that this is the real roadmap, the real product," Elby says. "It's not just something that's on the research side anymore." Colin Neagle covers Microsoft security and network management for Network World.