An optimal cloud computing infrastructure will make strategic use of edge computing and fog computing. In this video, you’ll learn how edge and fog computing are used to create an efficient and effective cloud computing design.
Cloud computing has changed the way that we approach the deployment of applications and the use of those applications. It gives us a massive amount of computing power located in the cloud. We have instantly available access to all of this power, and we can store amazing amounts of data all in this cloud-based infrastructure.
If you’ve ever managed a cloud-based infrastructure, then you know you can click a button and instantly deploy new applications. It’s very easy and very fast to have exactly the power you need in the locations you might need it. The cloud has also changed the costs involved in deploying these new applications. Instead of creating your own data center and purchasing your own equipment, you can simply buy some time on a cloud-based service and pay as you go. Although cloud computing provides significant advantages, there are some disadvantages as well.
For example, the cloud may not be at a location that's near you, so there can be delays in communicating between your location and the location where these cloud applications are hosted. There may also be limited bandwidth, since all of this traffic has to cross a wide area network connection rather than technology that's located in your own facility.
It may also be difficult to protect the data that you're storing in the cloud, because it's often stored in the format required by the cloud service provider, and the provider may not allow you to apply your own encryption. And of course, because these applications are located off-site in the cloud, you need some type of connectivity, either a private network connection or an internet-based connection, to be able to use any of those applications.
In recent years, we’ve seen an enormous increase in the number of IoT, or Internet of Things, devices that we’re adding onto our networks. It’s not unusual to have climate control, alarm systems, and lighting systems all running inside of our homes but also accessible as Internet of Things devices on the internet. This makes it very easy for us to pop open an app on our mobile device and control the lighting in our house, change the temperature of a room, and make sure that our alarm systems are up and running.
A lot of the processing and data for these devices occurs on the devices themselves. We call this edge computing, or sometimes just the edge. This means that the applications that are running, and the decisions being made from the data those applications create, all occur on the local system and don’t have to go out to the internet. And since there’s no requirement to send this data out to the internet, we don’t have to worry about latency or any type of wide area network or internet connectivity.
It also means that, because this device is running locally in our environment, its speed and performance should be at the local speed of our network. We should also think about edge computing from a data perspective. All of those Internet of Things devices, your garage door opener, your washer and dryer, your climate control system, are all collecting data on your network.
These devices use this data to make decisions about how they should operate. For example, a climate control system can look at the current temperature in a room and determine whether it should cool or heat that room. There’s no need to go out to the internet and process any data just to make those local decisions on the climate control device.
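To make that local decision loop a little more concrete, here’s a minimal sketch of what a hypothetical thermostat might do entirely on the edge device. The sensor function, setpoint, and deadband values are all assumptions for illustration, not any particular vendor’s API; the point is simply that the entire control loop runs without a cloud round trip.

```python
# Minimal sketch of edge-style decision making on a hypothetical thermostat.
# All names (read_room_temperature, set_hvac_mode, the setpoints) are illustrative;
# the key idea is that the control loop needs no internet or cloud round trip.

import time

SETPOINT_C = 21.0      # desired room temperature
DEADBAND_C = 0.5       # hysteresis so the system doesn't rapidly toggle modes

def read_room_temperature():
    """Stand-in for a local sensor read (e.g., over I2C or GPIO)."""
    return 22.3  # fixed value for the sketch

def set_hvac_mode(mode):
    """Stand-in for driving a local relay: 'heat', 'cool', or 'off'."""
    print(f"HVAC mode -> {mode}")

def control_loop():
    while True:
        temp = read_room_temperature()        # data collected on the device
        if temp > SETPOINT_C + DEADBAND_C:    # decision made on the device
            set_hvac_mode("cool")
        elif temp < SETPOINT_C - DEADBAND_C:
            set_hvac_mode("heat")
        else:
            set_hvac_mode("off")
        time.sleep(60)                        # no WAN latency anywhere in this loop
```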
But there may be times when these Internet of Things devices can provide additional functionality by taking some of that data and moving it into the cloud for processing. There’s also a midpoint, though, between keeping the data locally and storing the data on one central cloud-based server. Instead of consolidating the data from every device in the cloud, we can consolidate data from a subset of devices in the fog. This is fog computing. It’s a distributed cloud architecture that allows us to send information into the cloud for processing without requiring that all of this data be consolidated in one single place.
This means that any data our IoT device needs to make local decisions can stay local on that device; it doesn’t need to go into the cloud. But we might want to take some of the data created by these devices and move it into the cloud for additional processing. We can then compare the data we’re seeing with data seen by other people and use that to make our devices work more effectively. From a privacy perspective, this means that we can keep sensitive data on our local network and send into the fog only the information that we feel comfortable sharing with others.
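Here’s one way that split could look in practice, as a hedged sketch only. The field names, the allow-list approach, and the fog node URL are all assumptions made for this example; there is no standard fog API being shown here.

```python
# Sketch of splitting device data: sensitive fields stay on the local network,
# and only an allow-listed subset is forwarded to a nearby fog node for shared
# processing. The field names and the fog URL are hypothetical.

import json
import urllib.request

SHAREABLE_FIELDS = {"outdoor_temp_c", "hvac_runtime_min", "firmware_version"}

def split_reading(reading: dict):
    """Return (local_only, shareable) views of one device reading."""
    shareable = {k: v for k, v in reading.items() if k in SHAREABLE_FIELDS}
    local_only = {k: v for k, v in reading.items() if k not in SHAREABLE_FIELDS}
    return local_only, shareable

def send_to_fog(shareable: dict, fog_url="http://fog-node.local/ingest"):
    """Forward only the shareable subset to a nearby fog node (hypothetical endpoint)."""
    data = json.dumps(shareable).encode()
    req = urllib.request.Request(fog_url, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

reading = {
    "indoor_temp_c": 22.3,        # stays local
    "occupancy": True,            # stays local (privacy-sensitive)
    "outdoor_temp_c": 14.8,       # okay to share
    "hvac_runtime_min": 37,       # okay to share
    "firmware_version": "1.4.2",  # okay to share
}

local_only, shareable = split_reading(reading)
# Local decisions use the full reading; only `shareable` ever leaves the network.
```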
Here’s a visual perspective of our Internet of Things devices, the fog computing layer, and the cloud computing that sits above that. For example, let’s look at an Internet of Things device such as an automobile. Inside of our cars, we have over 50 different CPUs that are collecting data across many different systems. For example, information about our tires can be used by our suspension system and our braking system to make our cars safer. All of that data is being collected locally and is being acted upon right there at the edge, on the local device.
That data doesn’t need to leave the car, because we’re using all of it locally. But there are circumstances where we might want to send some of that data into the cloud so it can be consolidated with what other people may be seeing for the same types of tires or the same types of braking systems. Instead of sending all of that up into one single consolidated view, we could send it into the fog. These fog nodes provide real-time processing of the data. They might optimize, buffer, or consolidate this information so that it can be used later, and we might send data into the fog just so that we can then send it back down to another Internet of Things device.
Once that data is in the fog, the manufacturer of our car may want to consolidate it into a much larger database to begin performing some advanced analytics. So the data could be moved from the fog into a cloud-based data center and consolidated with data from everyone who owns the same car that you do. This multitiered architecture still gives us the data we need to perform the analytics that are important, but it limits the type and the amount of data that would finally be rolled up into the data center. Having local processing on the edge device and a midpoint in the fog gives us a much more efficient cloud computing experience.
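As a rough sketch of that multitiered flow, here’s one way a fog node could buffer telemetry from nearby vehicles, aggregate it in real time, and forward only a compact summary up to the cloud. The class name, field names, and batch size are illustrative assumptions, not any manufacturer’s actual design.

```python
# Sketch of the edge -> fog -> cloud flow: a fog node buffers telemetry reported
# by nearby vehicles, consolidates it, and forwards only a summary to the cloud
# data center. Class and field names are hypothetical.

from statistics import mean

class FogNode:
    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.buffer = []          # raw readings held temporarily at the fog tier

    def ingest(self, reading: dict):
        """Called as edge devices (cars) report tire and brake telemetry."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            summary = self.aggregate()
            self.forward_to_cloud(summary)
            self.buffer.clear()   # raw detail never leaves the fog tier

    def aggregate(self) -> dict:
        """Consolidate a batch into the compact summary the cloud actually needs."""
        return {
            "vehicle_count": len(self.buffer),
            "avg_tire_pressure_kpa": mean(r["tire_pressure_kpa"] for r in self.buffer),
            "avg_brake_temp_c": mean(r["brake_temp_c"] for r in self.buffer),
        }

    def forward_to_cloud(self, summary: dict):
        """Stand-in for an upload to the manufacturer's cloud analytics store."""
        print("uploading summary:", summary)
```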