Just as servers have moved to virtual systems, our networks have also become virtualized. In this video, you’ll learn about network function virtualization (NFV), hypervisors, vSwitches, and virtual network interface cards (vNICs).
The way we manage data has changed rapidly over the last few years, and the way we deploy networks has changed along with it. We used to have server farms made up of individual physical servers. Let’s say that we had a server farm with over 100 individual servers, all working and connected together with the network. An enterprise network usually connects all of these devices using multiple VLANs, redundant connections, and high-speed links. But with virtual servers, we’ve realized that we don’t need 100 separate physical servers.
Instead, we could create 100 virtual servers located within one single physical device. This brings up a question: if we’re collapsing all of these servers into one physical device, what happens to our network? If we’re replacing physical servers with virtual servers, then we’re also replacing our physical network with a virtual network.
This is called network function virtualization, or NFV, where we’ll take all of our network devices and the entire network infrastructure, and we’ll move it directly into the hypervisor. This means that all of our switching, all of our routing, our VLANs, our firewalls, and anything else on the network infrastructure are now contained within this virtual system.
This not only provides the same functionality we had with physical switches and physical routers, but in many cases provides additional capabilities. For example, when you need a new switch or a new router, you don’t have to go out, purchase a new device, put it into a rack, power it up, and physically cable it. Instead, you simply click a few buttons inside of the hypervisor, and you can drag and drop a brand new router or switch into your network infrastructure.
This gives you many different deployment options: virtual machines, containers, fault tolerance, and additional monitoring services, all from this network function virtualization. As with most virtual systems, everything starts and ends with the hypervisor, also known as the Virtual Machine Manager, or VMM. The VMM is responsible for managing all of the operating systems, all of the virtual systems, and all of the virtual network connections that we deploy. The hypervisor manages access to the CPU, to memory, and to the network for all of those virtual systems, and it can all be managed from one central management console. Sometimes you’ll hear this referred to as a single pane of glass, because instead of visiting all of those individual virtual systems, you simply go to one management screen and you can control everything.
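To make the hypervisor’s role concrete, here is a minimal Python sketch of the resource management described above. This is not any real hypervisor’s API; the class, VM names, and capacity figures (borrowed from the example host in this video) are illustrative only:

```python
# A minimal sketch (not a real hypervisor API) of a hypervisor's resource
# manager: it tracks the physical host's CPU and memory capacity and
# refuses to place a virtual machine that would exceed that capacity.

class Hypervisor:
    def __init__(self, cpu_ghz, memory_gb):
        self.cpu_ghz = cpu_ghz      # total CPU cycles available, in GHz
        self.memory_gb = memory_gb  # total physical memory, in GB
        self.vms = {}               # VM name -> (cpu_ghz, memory_gb) placed

    def deploy_vm(self, name, cpu_ghz, memory_gb):
        used_cpu = sum(c for c, _ in self.vms.values())
        used_mem = sum(m for _, m in self.vms.values())
        if used_cpu + cpu_ghz > self.cpu_ghz or used_mem + memory_gb > self.memory_gb:
            raise RuntimeError(f"not enough capacity to deploy {name}")
        self.vms[name] = (cpu_ghz, memory_gb)

# Capacity figures borrowed from the example host in the video;
# the VM names and sizes are hypothetical.
hv = Hypervisor(cpu_ghz=47, memory_gb=90)
hv.deploy_vm("web01", cpu_ghz=4, memory_gb=8)
hv.deploy_vm("db01", cpu_ghz=8, memory_gb=32)
```

The point of the sketch is the single point of control: every VM placement goes through the one hypervisor object, just as the single pane of glass puts every management task on one screen.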
Here’s a better view of this hypervisor. You can see this one hypervisor has two different virtual regions, with different virtual machines running in each of those regions. There are 24 processors inside this one single physical device, with 47 gigahertz of CPU cycles available, 90 gigabytes of memory, and you can see all of the virtual machines that are managed by this single hypervisor.
Now that we know where all of our virtual machines are located, and the device that is managing them, we need a network that connects all of those virtual machines together. We do that with a vSwitch, or virtual switch. We’re simply taking the physical switch that we used to have and moving it into the virtual world.
We can still do all of the things that we used to do on our physical switch. We know how to set all of the forwarding options. We can configure link aggregation between different virtual switches and different servers. We can do port mirroring and NetFlow to provide additional management capabilities, even though this is contained within a virtual environment. And deploying one of these virtual switches from the hypervisor is simple. You simply drag and drop, or click a button, and you can deploy one of these virtual switches. This can also be automated through the hypervisor’s API so that this can be deployed and removed automatically using orchestration.
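Since a vSwitch reproduces physical switching in software, here is a minimal Python sketch of two of the behaviors mentioned above: MAC address learning with flooding of unknown destinations, and a simple stand-in for port mirroring. This models the concept only; it is not how any particular hypervisor implements its vSwitch, and the port numbers and MAC addresses are made up:

```python
# A conceptual model of a vSwitch: learn source MACs, forward known
# destinations out one port, flood unknown destinations, and copy every
# frame to a designated mirror port (a stand-in for port mirroring).

class VSwitch:
    def __init__(self, ports, mirror_port=None):
        self.ports = ports            # port IDs on this virtual switch
        self.mirror_port = mirror_port
        self.mac_table = {}           # learned: source MAC -> ingress port

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port              # learn the source
        if dst_mac in self.mac_table:
            out = [self.mac_table[dst_mac]]            # known: forward directly
        else:
            out = [p for p in self.ports if p != in_port]  # unknown: flood
        if self.mirror_port is not None:
            out.append(self.mirror_port)               # mirrored copy
        return out

sw = VSwitch(ports=[1, 2, 3], mirror_port=99)
sw.forward(1, "aa:aa", "bb:bb")        # bb:bb unknown: flooded to ports 2 and 3
out = sw.forward(2, "bb:bb", "aa:aa")  # aa:aa was learned on port 1
```

After the second frame, the switch has learned both MAC addresses and forwards directly instead of flooding, while the mirror port still receives a copy of everything.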
Here’s a better view of this virtual switch. It connects to eight different networks and four different hosts. You can see all of the different ports (553 on this virtual switch) and all of the features supported by this virtual switch. It supports NetFlow, and it supports Link Aggregation Control Protocol, or LACP, for combining multiple links for load balancing. We also have port mirroring, health checks, and other features available on the switch.
Inside each of your virtual machines will be a virtual network interface card, or vNIC. All of these virtual servers need a vNIC so they can communicate with the rest of the network. The vNIC is usually also configured through the hypervisor, and you can add functionality or features depending on what that server needs. You may need multiple network interface cards to provide load balancing, or perhaps you want to add VLAN capabilities or additional monitoring. All of that can be done through the hypervisor.
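The multiple-vNIC idea can be sketched with a minimal active/standby teaming model. The vNIC names (`vmnic0`, `vmnic1`) and the failover logic here are illustrative assumptions, not any hypervisor’s actual configuration:

```python
# A conceptual sketch of NIC teaming with failover for a VM that has two
# vNICs: outbound traffic uses the active vNIC, and moves to the standby
# if the active link goes down. Names and behavior are hypothetical.

class VNic:
    def __init__(self, name):
        self.name = name
        self.up = True    # link state of this virtual interface

class NicTeam:
    def __init__(self, active, standby):
        self.active = active
        self.standby = standby

    def egress(self):
        """Pick the vNIC that should carry outbound traffic right now."""
        if self.active.up:
            return self.active
        if self.standby.up:
            return self.standby
        raise RuntimeError("no healthy vNIC in the team")

team = NicTeam(VNic("vmnic0"), VNic("vmnic1"))
first = team.egress().name      # the active vNIC while its link is healthy
team.active.up = False          # simulate a link failure
failover = team.egress().name   # traffic moves to the standby vNIC
```

Real hypervisors expose this same idea as a teaming or bonding policy on the vSwitch port group, configured through the same management console as everything else.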
Here’s a graphical view of these network connections in the hypervisor. On the right side is the uplink to the physical network. We have to have some way to take data from the virtual world and get it into the physical world, and these are the interfaces that provide that. In these different regions and different virtual machines, you can see individual ports configured. All of these individual interfaces, or vNICs, are connected to the same central network. We can make some of these networks private, associate them with different VLANs, and build out as complex or detailed a network as we might need for our applications.
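The VLAN segmentation just described can be sketched with a simple reachability check: two vNICs plugged into the same vSwitch can only talk if they share a VLAN. The VM names and VLAN IDs here are illustrative, not taken from any real configuration:

```python
# A conceptual sketch of VLAN segmentation on a shared virtual switch:
# vNICs in the same VLAN can reach each other, while vNICs in different
# VLANs cannot, even though they share the same vSwitch.
# VM names and VLAN assignments below are hypothetical.

vlan_of = {"web01": 10, "web02": 10, "db01": 20}

def can_reach(src, dst):
    """True if two vNICs share a VLAN on this vSwitch."""
    return vlan_of[src] == vlan_of[dst]

same = can_reach("web01", "web02")   # both on VLAN 10
cross = can_reach("web01", "db01")   # VLAN 10 vs. VLAN 20
```

This is the same isolation a physical switch enforces with VLAN tags; the hypervisor just applies it in software when it connects each vNIC to a port group.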