Network Virtualization in CONFINE

The CONFINE project intends to provide network overlays for each experiment.

What is Network Virtualization?

Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network.

A networking environment supports network virtualization if it allows coexistence of multiple virtual networks on the same physical substrate. Each virtual network (VN) in a network virtualization environment (NVE) is a collection of virtual nodes and virtual links. Essentially, a virtual network is a subset of the underlying physical network resources. Network virtualization involves platform virtualization, often combined with resource virtualization.

Virtualization technologies

Virtual Local Area Network

A virtual local area network (VLAN) is a group of logically networked hosts with a single broadcast domain regardless of their physical connectivity. All frames in a VLAN bear a VLAN ID in the MAC header, and VLAN-enabled switches use both the destination MAC address and the VLAN ID to forward frames. Since VLANs are based on logical instead of physical connections, network administration, management, and reconfiguration of VLANs are simpler than those of their physical counterparts. In addition, VLANs provide elevated levels of isolation.
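As a concrete illustration (not taken from any particular switch implementation), the short Python sketch below shows how a VLAN-aware device could read the 802.1Q tag that carries the VLAN ID; the sample frame bytes are made up for the example.

    import struct

    def parse_vlan_tag(frame: bytes):
        """Return (vlan_id, priority) if the Ethernet frame carries an
        802.1Q tag, or None for untagged frames."""
        # Bytes 0-5: destination MAC, 6-11: source MAC,
        # 12-13: EtherType (0x8100 indicates an 802.1Q tag follows).
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x8100:
            return None
        # Bytes 14-15: Tag Control Information (3-bit PCP, 1-bit DEI, 12-bit VID).
        tci = struct.unpack("!H", frame[14:16])[0]
        priority = tci >> 13
        vlan_id = tci & 0x0FFF
        return vlan_id, priority

    # Example: a hand-built tagged frame with VLAN ID 42 and priority 5.
    frame = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, (5 << 13) | 42)
    print(parse_vlan_tag(frame))  # -> (42, 5)

A VLAN-enabled switch would then forward the frame based on the destination MAC address together with the extracted VLAN ID.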

Virtual Private Network

A virtual private network (VPN) is a dedicated network connecting multiple sites using private and secured tunnels over shared or public communication networks like the Internet. In most cases, VPNs connect geographically distributed sites of a single corporate enterprise. Each VPN site contains one or more customer edge (CE) devices that are attached to one or more provider edge (PE) routers.

Active and Programmable Networks

Active and programmable networks research was motivated by the need to create, deploy, and manage novel services on the fly in response to user demands. In addition to programmability, these approaches also promote the concept of isolated environments, allowing multiple parties to run possibly conflicting code on the same network elements without causing network instability.

Overlay Networks

An overlay network is a virtual network that creates a virtual topology on top of the physical topology of another network. Nodes in an overlay network are connected through virtual links, each of which corresponds to a path in the underlying network. Overlays are typically implemented in the application layer, though implementations at lower layers of the network stack also exist. Overlays are not geographically restricted, and compared with changing the underlying network they are flexible, adaptable, and easy to deploy. As a result, overlay networks have long been used to deploy new features and fixes in the Internet. A multitude of overlay designs have been proposed in recent years to address diverse issues, including ensuring the performance and availability of Internet routing, enabling multicast, providing QoS guarantees, protecting against denial-of-service attacks, and supporting content distribution, file sharing, and even storage systems. Overlays have also been used as testbeds to design and evaluate new architectures. In addition, the highly popular and widely used peer-to-peer networks are themselves application-layer overlays.
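To make the relationship between virtual links and underlying paths concrete, here is a minimal, self-contained Python sketch (the topology and node names are invented for the example) in which a single overlay link corresponds to a multi-hop path in the physical network.

    from collections import deque

    # Physical (underlay) topology as an adjacency list.
    underlay = {
        "A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"],
    }

    def underlay_path(src, dst):
        """Shortest physical path between two nodes (simple BFS)."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in underlay[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    # The overlay sees a single virtual link A-D; in the underlay that
    # link is realized by the multi-hop path A-B-C-D.
    virtual_links = [("A", "D")]
    for u, v in virtual_links:
        print(f"virtual link {u}-{v} -> physical path {underlay_path(u, v)}")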

Recent Applications

At the testbed level, network resources have been homogenized towards the control plane by integrating the OpenFlow protocol (OFP) on the federated testbed nodes, which provides a uniform way of establishing paths between network nodes of different technologies and of managing node resources in the context of user experiments.

OpenFlow

OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network. In simpler terms, OpenFlow allows the path of network packets through a network of switches to be determined by software running on one or more external controllers (often deployed redundantly, e.g. a primary and a backup). This separation of the control plane from the forwarding plane allows for more sophisticated traffic management than is feasible using access control lists (ACLs) and routing protocols. Its inventors consider OpenFlow an enabler of Software Defined Networking.

An OpenFlow Switch consists of at least three parts:

(1) A Flow Table, with an action associated with each flow entry, to tell the switch how to process the flow,

(2) A Secure Channel that connects the switch to a remote control process (called the controller), allowing commands and packets to be sent between the controller and the switch using
(3) The OpenFlow Protocol, which provides an open and standard way for a controller to communicate with a switch. By specifying a standard interface (the OpenFlow Protocol) through which entries in the Flow Table can be defined externally, the OpenFlow Switch avoids the need for researchers to program the switch.

Dedicated OpenFlow switches. A dedicated OpenFlow Switch is a dumb datapath element that forwards packets between ports, as defined by a remote control process.

Actions:

1. Forward this flow’s packets to a given port (or ports). This allows packets to be routed through the network. In most switches this is expected to take place at line-rate.
2. Encapsulate and forward this flow’s packets to a controller. The packet is delivered to the Secure Channel, where it is encapsulated and sent to a controller. This is typically used for the first packet in a new flow, so the controller can decide whether the flow should be added to the Flow Table; in some experiments, it could be used to forward all packets to a controller for processing.
3. Drop this flow’s packets. It can be used for security, to curb denial of service attacks, or to reduce spurious broadcast discovery traffic from end-hosts.
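The following toy Python sketch (not real OpenFlow code; field names and values are purely illustrative) shows how a Flow Table pairs matches with these three actions, with a table miss falling back to action 2, i.e. sending the packet to the controller.

    # A toy flow table: each entry maps a match on header fields to one
    # of the three actions described above.
    FORWARD, TO_CONTROLLER, DROP = "forward", "to_controller", "drop"

    flow_table = [
        # (match dict, action, argument)
        ({"in_port": 1, "dst_mac": "00:00:00:00:00:02"}, FORWARD, 2),
        ({"dst_port": 23}, DROP, None),          # e.g. block telnet traffic
    ]

    def process_packet(packet):
        """Look the packet up in the flow table; on a miss, send it to
        the controller (action 2) so it can decide whether to add a flow."""
        for match, action, arg in flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action, arg
        return TO_CONTROLLER, None

    print(process_packet({"in_port": 1, "dst_mac": "00:00:00:00:00:02"}))
    print(process_packet({"in_port": 3, "dst_port": 23}))
    print(process_packet({"in_port": 3, "dst_port": 80}))  # table miss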

OpenFlow-enabled switches. Some commercial switches, routers and access points will be enhanced with the OpenFlow feature by adding the Flow Table, Secure Channel and OpenFlow Protocol.

Controllers. A controller adds and removes flow-entries from the Flow Table on behalf of experiments. For example, a static controller might be a simple application running on a PC to statically establish flows to interconnect a set of test computers for the duration of an experiment.
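As a rough sketch of such a static controller (the switch interface here is a stand-in, not a real OpenFlow library), the following Python code pre-installs flow entries interconnecting a fixed set of test hosts and removes them when the experiment ends.

    class FakeSwitch:
        """Stand-in for a real switch's Flow Table interface."""
        def __init__(self):
            self.table = []
        def install(self, entry):
            self.table.append(entry)
        def remove(self, entry):
            self.table.remove(entry)

    class StaticController:
        """Pre-installs flow entries for the duration of an experiment."""
        def __init__(self, switch):
            self.switch = switch
            self.installed = []
        def start_experiment(self, host_ports):
            # host_ports maps each test computer's MAC address to its switch port.
            for mac, port in host_ports.items():
                entry = ({"dst_mac": mac}, "forward", port)
                self.switch.install(entry)
                self.installed.append(entry)
        def stop_experiment(self):
            for entry in self.installed:
                self.switch.remove(entry)
            self.installed.clear()

    switch = FakeSwitch()
    controller = StaticController(switch)
    controller.start_experiment({"00:00:00:00:00:01": 1, "00:00:00:00:00:02": 2})
    print(switch.table)       # two entries, one per test host
    controller.stop_experiment()
    print(switch.table)       # -> []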

Use Cases

 Add properties to the network:

  1. Packets belonging to users other than the owner should be routed using a standard, tested routing protocol running in a switch or router from a “name-brand” vendor.
  2. The owner should only be able to add flow entries for her own traffic, or for any traffic her network administrator has allowed her to control.

Property 1 is achieved by OpenFlow-enabled switches. Consider an experimenter, Amy: the default action for all packets that do not come from Amy’s PC could be to forward them through the normal processing pipeline, while Amy’s own packets would be forwarded directly to the outgoing port, without being processed by the normal pipeline.

Property 2 depends on the controller. The controller should be seen as a platform that enables researchers to implement various experiments, and the restrictions of Property 2 can be achieved with the appropriate use of permissions or other ways to limit the powers of individual researchers to control flow entries. The exact nature of these permission-like mechanisms will depend on how the controller is implemented; a variety of controllers can be expected to emerge. As one concrete realization, the authors of the OpenFlow paper developed a controller called NOX as a follow-on to their Ethane work [3].
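Continuing the toy flow-table format from the earlier sketch (Amy’s MAC address, the port number, and the “normal_pipeline” marker are all invented for illustration), the two properties could be expressed as follows: Amy’s packets match an entry she installed, while everything else falls through to the switch’s normal processing pipeline.

    AMY_MAC = "00:16:17:aa:bb:cc"

    amy_flow_table = [
        # Amy's own packets skip the normal pipeline and go straight out port 4.
        ({"src_mac": AMY_MAC}, "forward", 4),
    ]
    DEFAULT_ACTION = ("normal_pipeline", None)  # Property 1: everyone else

    def handle(packet):
        for match, action, arg in amy_flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action, arg
        return DEFAULT_ACTION

    print(handle({"src_mac": AMY_MAC}))               # -> ('forward', 4)
    print(handle({"src_mac": "00:00:00:00:00:09"}))   # -> ('normal_pipeline', None)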

Open vSwitch

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple physical servers similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V.

Use Cases

  • Use Case 1: Isolating VM traffic using VLANs

This setup consists of two physical networks. The first is the Data Network, an Ethernet network for VM data traffic that carries VLAN-tagged traffic between VMs; the physical switch(es) must be capable of forwarding VLAN-tagged traffic, and the physical switch ports should be VLAN trunks. The second is the Management Network. This network is not strictly required, but it is a simple way to give the physical host an IP address for remote access, since an IP address cannot be assigned directly to eth0.

Two hosts are involved, both running Open vSwitch, and each has two NICs (network interface controllers). On each host, eth0 is connected to the Data Network and no IP address is assigned to it; eth1 is connected to the Management Network (if necessary) and has an IP address used to reach the physical host for management. There are four VMs in the system, two running on Host1 and two on Host2. Each VM has a single interface that appears as a Linux device on the physical host, and the VMs are isolated from one another using VLANs on the Data Network.
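A minimal sketch of the Host1 side of this configuration, assuming the VM interfaces appear on the host as tap0 and tap1 and that ovs-vsctl is available (the same commands would be repeated on Host2 for its two VMs); it must run as root.

    import subprocess

    def sh(cmd):
        """Run a shell command, echoing it first."""
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Bridge the physical NIC used for the Data Network (no IP on eth0).
    sh("ovs-vsctl add-br br0")
    sh("ovs-vsctl add-port br0 eth0")

    # Attach each VM's tap interface as an access port on its VLAN:
    # VM1 on VLAN 10, VM2 on VLAN 20, so their traffic stays isolated.
    sh("ovs-vsctl add-port br0 tap0 tag=10")
    sh("ovs-vsctl add-port br0 tap1 tag=20")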

  • Use Case 2: Monitoring VM traffic using sFlow

Again, two physical networks and two physical hosts are used in the monitoring system. The differences are that the Data Network is optional this time: it can be replaced by connecting all VMs to a bridge that is not connected to a physical interface. However, the Management Network must exist, as it is used to send sFlow data from the agent to the remote collector. Of the two hosts, “Host1” is the same as in the previous use case, while the second host, the “Monitoring Host”, can be any computer that can run an sFlow collector. For this cookbook entry we use sFlowTrend, a free sFlow collector that is a simple cross-platform Java download; other sFlow collectors should work equally well. On the Monitoring Host, eth0 is connected to the Management Network and has an IP address that can reach Host1.

This system monitors traffic sent to/from VM1 and VM2 on the Data Network using an sFlow collector.
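A sketch of the corresponding Open vSwitch configuration on Host1, assuming eth1 is the management NIC acting as the sFlow agent, the collector listens at 10.0.0.50:6343, and typical sampling settings; all of these values are placeholders.

    import subprocess

    # Placeholder values for this sketch: br0 carries the VM traffic,
    # eth1 is the management NIC, and the collector is the Monitoring
    # Host (e.g. running sFlowTrend).
    BRIDGE, AGENT_IF = "br0", "eth1"
    COLLECTOR = "10.0.0.50:6343"

    # Create an sFlow record and attach it to the bridge so that Open
    # vSwitch exports sampled packets and interface counters to the
    # remote collector over the Management Network.
    cmd = [
        "ovs-vsctl", "--", "--id=@sflow", "create", "sflow",
        f"agent={AGENT_IF}", f'targets="{COLLECTOR}"',
        "sampling=64", "polling=10", "header=128",
        "--", "set", "bridge", BRIDGE, "sflow=@sflow",
    ]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)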

  • Use Case 3: Rate-limiting VMs using QoS policing

In this system only one physical network is needed, the Data Network. Of the two hosts, instead of the “Monitoring Host” there is a “Measurement Host”, which can be any host capable of measuring throughput from a VM. For this cookbook entry we use netperf, a free tool for testing the rate at which one host can send to another. The result is that the system rate-limits the traffic sent by each VM to 1 Mbps.
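A sketch of how the 1 Mbps limit could be applied on Host1 using Open vSwitch ingress policing, assuming the VM interfaces are tap0 and tap1 (rates are in kbps, bursts in kb); run as root.

    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Police traffic sent by each VM to roughly 1 Mbps by limiting what
    # Open vSwitch accepts on the VM's tap interface.
    for vif in ("tap0", "tap1"):
        sh(f"ovs-vsctl set interface {vif} ingress_policing_rate=1000")
        sh(f"ovs-vsctl set interface {vif} ingress_policing_burst=100")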

Linux Container (LXC)

LXC (Linux Containers) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host. LXC does not provide a virtual machine, but rather provides a virtual environment that has its own process and network space.

Moreover, LXC builds up from chroot to implement complete virtual systems, adding resource management and isolation mechanisms to Linux’s existing process management infrastructure. It implements resource management via “process control groups”; resource isolation via new flags to the clone system call; and several additional isolation mechanisms such as the “-o newinstance” flag to the devpts file system.
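As a rough illustration of the namespace isolation behind these flags (this is not LXC’s own code), the Python sketch below calls the same unshare(2) primitive directly; it must be run as root on Linux.

    import ctypes, os, subprocess

    # Isolation flags passed to clone(2)/unshare(2); values are from
    # <linux/sched.h>.
    CLONE_NEWNS  = 0x00020000   # separate mount namespace
    CLONE_NEWUTS = 0x04000000   # separate hostname
    CLONE_NEWNET = 0x40000000   # separate network stack

    libc = ctypes.CDLL(None, use_errno=True)

    # Detach this process into its own network and UTS namespaces, the
    # same primitive LXC builds on (requires root).
    if libc.unshare(CLONE_NEWNET | CLONE_NEWUTS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    # Inside the new network namespace only a loopback device exists.
    subprocess.run(["ip", "link", "show"])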

Linux Containers take a completely different approach from system virtualization technologies such as KVM and Xen, which started by booting separate virtual systems on emulated hardware and then attempted to lower their overhead via paravirtualization and related mechanisms. Instead of retrofitting efficiency onto full isolation, LXC started with an efficient mechanism (existing Linux process management) and added isolation, resulting in a system virtualization mechanism as scalable and portable as chroot, capable of simultaneously supporting thousands of emulated systems on a single server while also providing lightweight virtualization options for routers and smartphones.

Use case: use a modern Linux system to run applications inside Linux Containers (LXC) that communicate over simulated ns-3 networks.

ns-3 is a discrete-event network simulator for Internet systems, targeted primarily for research and educational use. ns-3 is free software, licensed under the GNU GPLv2 license, and is publicly available for research, development, and use.

In this setup, two containers run on top of the host OS, each running an application (“Your App”). As far as these applications are concerned, they are running in their own Linux systems, each with its own network stack and talking to a net device named “eth0.”

These net devices are actually connected to Linux bridges that form the connection to the host OS. There is a tap device also connected to each of these bridges. These tap devices bring the packets flowing through them into user space where ns-3 can get hold of them. A special ns-3 NetDevice attaches to the network tap and passes packets on to an ns-3 “ghost node.”

The ns-3 ghost node acts as a proxy for the container in the ns-3 simulation. Packets that come in through the network tap are forwarded out the corresponding CSMA net device; and packets that come in through the CSMA net device are forwarded out the network tap. This gives the illusion to the container (and its application) that it is connected to the ns-3 CSMA network.
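A sketch of the host-side plumbing for one such container, using invented device names (tap-left, br-left) and assuming iproute2 is available; the container’s eth0 end would be attached to the same bridge by the LXC configuration, and ns-3 would then open the tap device (e.g. via its TapBridge model) so that packets from the container reach the simulated CSMA network. Run as root.

    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Create the bridge shared with the container and the tap device
    # that ns-3 will open, then plug the tap into the bridge.
    sh("ip link add name br-left type bridge")
    sh("ip tuntap add dev tap-left mode tap")
    sh("ip link set dev tap-left master br-left")
    sh("ip link set dev tap-left up")
    sh("ip link set dev br-left up")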

 

References:

1. Network Virtualization: State of the Art and Research Challenges, N.M. Mosharaf Kabir Chowdhury and Raouf Boutaba, University of Waterloo

2. A survey of network virtualization, N.M. Mosharaf Kabir Chowdhury, Raouf Boutaba

3. OpenFlow: Enabling Innovation in Campus Networks, Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, Jonathan Turner

4. http://openvswitch.org/

5. http://lxc.sourceforge.net/
