Other Infrastructure Concepts – CompTIA Security+ SY0-701 – 3.1

New network services can introduce additional security concerns. In this video, you’ll learn about virtualization, containerization, the Internet of Things, embedded systems, and more.


If you ask an IT professional where the safest place to store data might be, some might tell you that your on-premises infrastructure is the safest. Others might tell you a more secure environment is in the cloud. Both of these have their advantages and disadvantages when it comes to security, and there are many other considerations to take into account.

If your security is in the cloud, then it’s centralized with all of your other cloud-based services. There’s no hardware that you have to support. There’s no separate data center that needs to be cooled and staffed. And the third party providing the cloud-based services also provides the security. If you have your own data center, then all of your security technologies are local and on-premises. But this also means that you have to support all of those systems, which, of course, has a cost associated with it.

From the attacker’s perspective, they don’t care where the security happens to be. They’re working to circumvent and get around the security regardless of whether it’s in the cloud or on premises. If everything is in house, then you have complete control over the decisions made with the security, the options that are available to install, and you manage the entire process.

In most on-premises infrastructures, there is an IT team that handles the management and maintenance of all of these systems. The staff managing this on-premises security is well-trained and professional, and it often costs more than a comparable cloud-based service. But of course, when you have your own data center, you have complete control of all of the data and all of the systems. If you want to make a change to an existing system or modify a security posture, you simply do that yourself. You don’t have to call a third party or access your cloud provider to make that happen.

However, if you’d like to bring additional equipment into the data center, there are obviously costs for purchasing the equipment, and it does take time to purchase, configure, and install this technology.

In most organizations, the technology is decentralized. You might have multiple locations for your organization, you might be using multiple cloud providers, and of course, there’s more than one operating system running on all of these systems in your environment. From a security perspective, this becomes challenging to manage. Having all of these different systems in so many different places creates a challenge for IT professionals trying to maintain the security of their data and applications.

To address these challenges, many security professionals will create a consolidated management view of all of their systems from one single console. This allows ongoing monitoring of every user, every device, every application, and anything else that’s important to monitor from a security perspective. You might get consolidated alerts brought to a single console. There might be a consolidated log file analysis so you can easily search for information regardless of where it might be in the organization. And there might be a global process for maintaining systems and keeping everything up to date.
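As a concrete example, here is a minimal Python sketch of that “single console” idea: one search that runs across several log sources at once. The source names and file paths are hypothetical, and a real deployment would use a SIEM or log aggregation platform rather than flat files.

```python
# Minimal sketch of consolidated log searching across multiple sources.
# The source names and file paths are hypothetical examples, not a real product API.
from pathlib import Path

LOG_SOURCES = {
    "on-prem-firewall": Path("/var/log/onprem/firewall.log"),
    "cloud-app":        Path("/var/log/cloud/app.log"),
    "branch-office":    Path("/var/log/branch/syslog.log"),
}

def search_all(keyword: str) -> list[str]:
    """Search every configured log source and return matching lines,
    tagged with the source they came from."""
    hits = []
    for source, path in LOG_SOURCES.items():
        if not path.exists():          # skip sources that are unreachable
            continue
        for line in path.read_text(errors="ignore").splitlines():
            if keyword.lower() in line.lower():
                hits.append(f"[{source}] {line}")
    return hits

if __name__ == "__main__":
    for hit in search_all("failed login"):
        print(hit)
```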

Having this centralized point where you can view the entire organization does provide additional visibility, but it also creates a single point of failure. If you lose the console, you’ve effectively lost your visibility into what’s going on with your security. And as your organization grows larger and larger, that system has more and more work to do. You may need additional storage space to handle the increasing number of logs, and you might need additional CPU power to manage the additional alerts and alarms that might be received.

Virtualization is also a significant technology that’s running in practically every data center. This allows us to run Windows, Linux, and other operating systems, building these systems instantly and tearing them down when needed. One challenge with a virtualized environment is that each individual virtual machine needs to have its own operating system.

The architecture starts with the infrastructure, or the physical device that everything will run on, and then there’s software that runs on top of that called the hypervisor. The hypervisor’s job is to manage all of the resources between the separate virtual machines running on that system. Each virtual machine requires its own guest operating system. And on top of that guest operating system will be the applications that you need to run in your environment.
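To make this layering more tangible, here is a minimal Python sketch that asks a hypervisor for its inventory of virtual machines. It assumes the libvirt Python bindings and a local QEMU/KVM hypervisor; other hypervisors expose similar management APIs, and the connection URI shown is just the common local default.

```python
# Minimal sketch: asking a hypervisor which virtual machines it is managing.
# Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor;
# other hypervisors expose similar inventory APIs.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():   # each domain is one virtual machine
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        # Every VM carries its own guest OS, memory allocation, and vCPUs.
        print(f"{dom.name()}: {'running' if dom.isActive() else 'stopped'}, "
              f"{vcpus} vCPU(s), {mem // 1024} MiB")
finally:
    conn.close()
```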

Looking at this diagram, you can probably see where different inefficiencies might be. For example, let’s say that every guest on this hypervisor is running exactly the same OS. Maybe this is the same Linux operating system or the same Windows operating system. Even though they’re all identical to each other, you still need to run three separate instances of that operating system.

To address some of these inefficiencies, some organizations are moving from a virtualized environment to a containerized environment. Containerization is another way to have multiple applications running simultaneously, all on one single piece of hardware. In this model, the container is the application that sits on top of the container software. That container has everything you need to be able to run that application except for the operating system.

The containerization software manages this relationship between the operating system on the device and the application that’s running on top of it. This effectively allows you to swap applications in and out, especially when those applications are all sharing the same host operating system.

And similar to virtualization, all of these containers are isolated from one another. Each application can only see what’s running in its own individual container, which makes it very efficient to remove or shut down a container and replace it or add others.

You’ll also notice that each of these containers is referencing a single host operating system that’s running on this infrastructure. So generally, these apps have already been designed to run on that single host operating system. And that means that everything that’s on that particular system might be running inside of Windows. You might have a separate containerization system that would support applications running in Linux.

So to compare a virtualized environment with a containerized environment, you can see that both start at the bottom with the hardware that everything is running on. That is the infrastructure itself. This changes a bit as we move up through the stack. You can see that a hypervisor is what sits on top of the infrastructure in a virtualized environment, but for a containerized environment, we’ve got a host operating system that is running on that infrastructure.

On top of the host operating system is the containerization software. One of the most popular is called Docker. And that is the software that then manages all of these different apps that are running on the system. In a virtualized environment, our guest operating systems are running as separate entities on each virtual machine. There are, of course, advantages and disadvantages to both of these types of implementations, and you should find the one that works best for the types of applications and deployment model that you have.
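Since Docker is called out here, a minimal sketch of the containerized side might look like this. It assumes the Docker SDK for Python (the docker package) and a local Docker daemon; the image and command are illustrative only.

```python
# Minimal sketch using the Docker SDK for Python (the "docker" package).
# Assumes a local Docker daemon; image and command are illustrative only.
import docker

client = docker.from_env()              # talk to the local Docker daemon

# Run a container that shares the host OS kernel but stays isolated from
# everything else -- no separate guest operating system is involved.
output = client.containers.run("alpine:latest", "echo hello-from-container",
                               remove=True)
print(output.decode().strip())

# List what is currently running on this one host operating system.
for c in client.containers.list():
    print(c.name, c.image.tags, c.status)
```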

A relatively new categorization of infrastructure devices is called IoT, or the Internet of Things. These are devices that are designed to be integrated into your network and support some of the features and services that you use on a day-to-day basis. For example, you might have sensors in your home or business that are monitoring the temperature and managing the heating and cooling, or your offices may have an automatic lighting system using the Internet of Things.

At home, the Internet of Things might look like home automation tools, garage door openers, and video doorbells. You might be wearing Internet of Things technology if you have a smartwatch or some type of health monitor. And at work, you may find that Internet of Things technologies are monitoring the air quality, setting the temperature, and turning the lights on and off automatically in the workplace.

IoT devices are very convenient. They provide automation and flexibility for things that we normally wouldn’t have access to. But they also come at a cost, especially from a security perspective. The organizations that are designing and developing thermostats, lighting systems, video doorbells, and the like are very good at building those products, but they may not be security professionals.

So we do have to think about how we would implement IoT devices in an environment that is highly secure. It only takes a single IoT device to be exploited for an attacker to have full access to the devices inside of your network. If you work in an environment where there are large pieces of machinery, then these are probably networked together using SCADA. SCADA stands for Supervisory Control and Data Acquisition. Sometimes, you’ll hear this referred to as an Industrial Control System, or ICS.

So if you work for a manufacturing company or in power generation, then you’re probably very familiar with this type of equipment and connecting all of these devices together using SCADA. This allows the technicians to sit in a centralized control room, monitor the status of all of these pieces of equipment, and be able to make changes and modifications in the control room rather than having to physically visit every piece of equipment.

One characteristic that tends to be very similar across many different organizations when it comes to SCADA is that all of these systems need to be completely segmented from the outside. You can imagine the security issues that could arise if someone was to gain access to power generation systems or oil refineries. This could have a dramatic impact not just on immediate power needs, but on availability over a long period of time. This is why you’ll find that many SCADA systems are some of the most secure in the world, and for good reason.
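One small way to turn that segmentation requirement into something checkable is a sketch like the following, which flags any SCADA device whose address falls outside the dedicated OT segment. The subnet and device list are hypothetical examples, and it uses only the Python standard library.

```python
# Minimal sketch of checking that SCADA/ICS devices sit inside an isolated
# network segment. The subnet and device list are hypothetical examples.
from ipaddress import ip_address, ip_network

SCADA_SEGMENT = ip_network("10.50.0.0/16")       # dedicated, non-routed OT network

devices = {
    "plc-line-1":    "10.50.1.10",
    "hmi-console":   "10.50.2.20",
    "misplaced-rtu": "192.168.1.44",             # wrongly placed on the office LAN
}

for name, addr in devices.items():
    if ip_address(addr) not in SCADA_SEGMENT:
        print(f"ALERT: {name} ({addr}) is outside the segmented SCADA network")
```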

If you use Windows, Linux, or other operating systems on your desktop, your tablet, or your mobile devices, then you’re using a non-deterministic operating system. That means that there’s no single process that can suddenly grab all of the resources of the system and take priority. But there are some systems that need that kind of deterministic use.

If you’re driving an automobile, using military equipment, or setting up manufacturing equipment, then you’re probably going to need a deterministic operating system. You may have already taken advantage of an RTOS, or a Real-Time Operating System, when you drive your car. If you’re in a situation where you need to brake the vehicle very quickly, your entire system will suddenly focus on that braking system and make sure that you don’t skid out of control. The anti-lock brakes kick in immediately, understand the environment you happen to be driving in, and can safely bring you to a stop.
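To illustrate what “deterministic” means in practice, here is a minimal Python sketch that requests a real-time, priority-preemptive scheduling policy on Linux. This only demonstrates the concept; actual automotive and industrial controllers run dedicated real-time operating systems, not a general-purpose kernel call like this.

```python
# Minimal sketch of requesting deterministic, priority-based scheduling on Linux.
# This only illustrates the idea behind an RTOS-style guarantee; real automotive
# and industrial systems use dedicated real-time operating systems.
import os

if __name__ == "__main__":
    # SCHED_FIFO is a real-time policy: the highest-priority runnable task keeps
    # the CPU until it blocks or finishes, and only a higher priority can preempt
    # it. Requires elevated privileges; Linux only.
    top_priority = os.sched_get_priority_max(os.SCHED_FIFO)
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(top_priority))
        print("This process now runs with real-time (deterministic) priority")
    except PermissionError:
        print("Needs elevated privileges to set a real-time scheduling policy")
```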

As you can imagine, we are very sensitive to any security issues that may be associated with our real-time operating systems. These operating systems don’t have the luxury of waiting around while something else is occurring in the system.

We’re obviously not installing antivirus or anti-malware software into our automobiles, but the same type of security concern certainly applies regardless of what system it might be. This is why those systems tend to be very self-contained, and it’s very difficult to find a way in to even access the operating system in that equipment.

It might also be difficult to gain access to an operating system that’s running on an embedded system. An embedded system is one where the hardware and software are all created as a self-contained and purpose-built device. Sometimes, we see these embedded systems working as single components that make up a much larger device.

We also generally find that embedded systems are created for one sole purpose, and that’s the only function that embedded system performs. This would not be a technology where different apps might be loaded or you might pull in different capabilities from a centralized app store. Instead, these are designed to do one thing and do that one thing very efficiently.

For example, the traffic lights that you see when you’re out driving in your car are all controlled through an embedded system. The digital watches that we use today are very good at providing us with information on weather and time, and you’ll notice that there’s no direct access into an operating system on that embedded system. And in a doctor’s office or a hospital, you’ve probably seen monitoring equipment for medical use, and that is certainly a very advanced form of an embedded system.

One concern that most security professionals have is maintaining the uptime and availability of their critical systems. One way to provide this uptime is through the use of high availability. This gives you a way to keep a system constantly running even if one part of the system was to fail. This would be one step up from something that might simply be redundant. Redundant means that we have multiple systems available so that one system can be used if another one fails. But it doesn’t imply that this system would always be available. You may have to pull a redundant system out of the closet, put it into the rack, power it on, configure it, and finally, that system would provide some type of availability.

We often refer to high availability as HA. So there may be an HA configuration for a pair of firewalls that you’ve installed. If one of those firewalls was to stop working, it would fail over to the other highly available firewall. Sometimes, this high availability can be configured with additional efficiencies. In our last example with the firewalls, we had a primary firewall and we failed over to a backup that was simply waiting for that primary to fail. But some firewalls can be configured so that both of those systems are active all the time. And if one of those firewalls was to fail, the other would continue to work normally, and all of the traffic would continue to flow without anyone knowing there was a failure.
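A simplified view of the active/passive model described here is sketched below: traffic follows the primary while it answers health checks and fails over when it stops responding. The hostnames, port, and timing are hypothetical, and real HA pairs use purpose-built mechanisms such as VRRP rather than a script like this.

```python
# Minimal sketch of active/passive failover logic: send traffic to the primary
# while it answers health checks, and fail over to the backup when it stops.
# Hostnames, port, and timing are hypothetical examples.
import socket
import time

PRIMARY, BACKUP = "fw-primary.example.local", "fw-backup.example.local"
HEALTH_PORT = 443

def is_healthy(host: str, timeout: float = 2.0) -> bool:
    """A node is 'healthy' if it accepts a TCP connection on the health port."""
    try:
        with socket.create_connection((host, HEALTH_PORT), timeout=timeout):
            return True
    except OSError:
        return False

active = PRIMARY
while True:
    if not is_healthy(active):
        # Fail over to whichever node is not currently active.
        active = BACKUP if active == PRIMARY else PRIMARY
        print(f"Failover: now sending traffic to {active}")
    time.sleep(5)      # re-check every few seconds
```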

When you start designing highly available systems, you also tend to spend more money. In our previous example with the firewalls, we needed two separate firewalls to provide that HA capability. But if we have two separate firewalls, maybe we also need two separate network infrastructures. And if we have that, perhaps we need two separate power systems. You can continue to add on these systems and build out and plan for additional HA. But as you do this, there will be more and more costs associated with that. And eventually, you’ll get to a point where you’ll need to make a business decision on whether you’ll spend that additional money to be highly available or if you’ll simply take the risk of that downtime.