Security in the cloud is a constant challenge. In this video, you’ll learn about infrastructure as code, serverless architectures, APIs, and more.
At this point in the evolution of cloud technologies, it’s very likely that you or your organization has one or many applications running in the cloud. These could be running as infrastructure as a service, platform as a service, software as a service, or one of the many other types of services available in cloud-based infrastructures. But in all of these cases, the question still remains: who is responsible for the security of all of these different cloud-based systems?
Fortunately, if you’re working with a public cloud provider, they are probably going to provide you with a matrix of responsibilities. This will clearly show who’s responsible for the different aspects of the technologies running in the cloud. Not all cloud providers provide the same matrix. There may be differences depending on the cloud provider that you’re using. And you might have a contract with a cloud provider that modifies the default matrix of responsibilities to better fit the needs of that contract.
This is a responsibility matrix that was taken from a large cloud provider. You can see that it’s broken up by software as a service, platform as a service, infrastructure as a service, and on-prem, or on-premises. Anything in blue is managed by the customer. And anything in yellow is managed by the cloud provider. In some cases, there’s an overlap where both the customer and the provider share responsibility at that level.
So for this cloud provider, if you’re wondering who’s responsible for the operating system: with software as a service, it’s the provider; with platform as a service, it’s the provider; and with infrastructure as a service, and obviously on-premises, it’s the responsibility of the customer. You can compare that with a section of the matrix for accounts and identities, which would obviously have significant security concerns. And in the case of this cloud provider, the customer is always responsible for anything associated with their accounts.
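To make that concrete, here’s a minimal sketch in Python of how a responsibility matrix like this could be recorded as data. The mapping shown is hypothetical and simplified, not any provider’s official matrix.

```python
# Hypothetical shared-responsibility mapping (illustrative only,
# not the official matrix from any specific cloud provider).
RESPONSIBILITY = {
    "operating system": {
        "SaaS": "provider",
        "PaaS": "provider",
        "IaaS": "customer",
        "on-prem": "customer",
    },
    "accounts and identities": {
        "SaaS": "customer",
        "PaaS": "customer",
        "IaaS": "customer",
        "on-prem": "customer",
    },
}

def who_is_responsible(layer: str, model: str) -> str:
    """Look up who manages a given layer under a given service model."""
    return RESPONSIBILITY[layer][model]

print(who_is_responsible("operating system", "SaaS"))          # provider
print(who_is_responsible("accounts and identities", "IaaS"))   # customer
```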
If you’re wondering what the responsibility matrix might be for your cloud provider, they probably already have it documented as part of their services. In some organizations, one cloud isn’t enough. You may have multiple clouds that you’re using across different cloud providers. We refer to this as a hybrid cloud. And although it adds additional flexibility when you’re using different cloud providers, it also adds an extra level of complexity when you need to manage across those providers.
For example, most cloud providers don’t talk to each other directly. And in fact, many of the systems between different cloud providers may work in very different ways. So you might have to manually configure all of your settings separately for each cloud provider. For example, you might have authentication that needs to occur. And if you’re configuring it differently on each provider, you may have a mismatch between one provider and another.
Or you may be managing server configurations or firewall settings. And all of those need to match across all of these different providers. And since the providers aren’t connected to each other, there’s an opportunity for a mismatch between one and the other. It can also be difficult to manage security and other logs between these different providers. Each provider writes a different type of log with different terminology. And it may be difficult to combine those together to see what’s going on.
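As a rough illustration of catching that kind of drift, here’s a short Python sketch that compares firewall settings exported from two providers and flags anything that doesn’t match. The provider names and settings are made up for the example; in practice you’d pull these values from each provider’s own API or console.

```python
# Hypothetical firewall settings exported from two different cloud providers.
provider_a = {"allow_https": True, "allow_ssh_from_anywhere": False, "default_deny": True}
provider_b = {"allow_https": True, "allow_ssh_from_anywhere": True, "default_deny": True}

def find_mismatches(a: dict, b: dict) -> list:
    """Return the settings that don't match between the two providers."""
    return [key for key in a if a.get(key) != b.get(key)]

for setting in find_mismatches(provider_a, provider_b):
    print(f"Mismatch: {setting} is {provider_a[setting]} on A but {provider_b[setting]} on B")
```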
And in order to take advantage of this hybrid cloud, it may be perfectly normal for data to constantly be transferred from one cloud provider to the other. Each time this data is sent from one provider to another, it’s traversing the public internet. So you have to make sure that all of your security settings are configured to protect that data in transit.
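Here’s a minimal sketch of protecting that data in transit using only Python’s standard library. The destination URL is a placeholder, and the point is simply that the default SSL context verifies the receiving provider’s certificate before anything is sent.

```python
import ssl
import urllib.request

# Placeholder endpoint at the destination cloud provider (not a real URL).
DESTINATION = "https://example.com/upload"

# The default SSL context verifies the server's certificate and hostname,
# so the data is encrypted in transit and sent only to the provider we expect.
context = ssl.create_default_context()

request = urllib.request.Request(DESTINATION, data=b"payload", method="POST")
try:
    with urllib.request.urlopen(request, context=context, timeout=10) as response:
        print("Transfer status:", response.status)
except OSError as err:
    # The placeholder endpoint won't accept this upload; a real provider URL would.
    print("Transfer failed:", err)
```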
When working with the cloud, you’re most likely working with a cloud provider. But there are also third parties providing applications and other cloud-based devices that you have to manage. For example, you might have an application you’ve written in house, and you’ve deployed that application to the cloud. But you would like to put a firewall in front of that application to provide additional security. You’re most likely using a firewall from a third party to provide that security.
A good best practice is to have a vendor risk management policy so that you can manage and maintain the security for these third party technologies. We also have to think about how we’ll handle incident response for all of these third party products and technologies. We certainly have our own internal processes, and we have the processes associated with the cloud provider. But we also have to bring in all of these other third parties to participate in this incident response process.
And of course, we need to constantly monitor these third party processes and devices. We want to be sure that the security and availability of these cloud-based systems are always working as expected. Cloud-based infrastructures almost always require some type of infrastructure as code. This is a way to describe an application instance or a portion of the infrastructure in the cloud, but you’re defining it as code rather than defining it as a particular piece of hardware.
For example, your infrastructure as code may define what hosts need to be built, the type of web servers that are running on these hosts, and the database servers that would also be used for this infrastructure. This allows you to easily build out an infrastructure. And it also allows you to easily modify the infrastructure so that you can change the configuration as needed. You can make those changes in the code itself. So the next time this code is used to build out the infrastructure, it takes all of your changes into account.
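The snippet below is a simplified, tool-agnostic sketch of that idea in plain Python. Real infrastructure as code would normally use a dedicated tool, but the shape is similar: describe the hosts, web servers, and database servers as data, then hand that definition to something that builds them.

```python
# A simplified, tool-agnostic infrastructure definition expressed as data.
# A real deployment would hand a definition like this to an IaC tool.
INFRASTRUCTURE = {
    "hosts": [
        {"name": "web-1", "size": "small", "role": "web server", "software": "nginx"},
        {"name": "web-2", "size": "small", "role": "web server", "software": "nginx"},
        {"name": "db-1", "size": "large", "role": "database server", "software": "postgresql"},
    ],
}

def plan(definition: dict) -> None:
    """Print what would be built, standing in for the provider API calls
    an IaC tool would make to create each host."""
    for host in definition["hosts"]:
        print(f"create {host['size']} host {host['name']} running {host['software']}")

plan(INFRASTRUCTURE)
```

Because the definition is just code, changing the web server software or adding a fourth host is a one-line edit, and the next build picks up the change automatically.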
Now that you’ve created a perfect version of the application instance, you can easily use that code to rebuild the instance on any cloud provider at any time. This is one of the significant benefits of cloud computing: you’re able to create an entire infrastructure based around a single definition of infrastructure as code. When we build out an application instance in the cloud, we often refer to different servers. But what if we could build an application instance that had no servers?
This is referred to as a serverless architecture. Instead of accessing a single application, we access individual functions that are handled by that application. Each function handles a small piece of the application. And any time we need to perform one of these functions, we address that part of the serverless architecture. What’s also interesting in this serverless architecture is that there’s much less emphasis on the operating system itself.
Each one of these smaller application functions can run in whatever operating system happens to be appropriate at that particular time. From the application’s perspective, we’re only interested in sending and receiving information for that small piece of those autonomous functions. We don’t have to worry so much about what operating system is running underneath. Your application developer is going to spend time breaking the application into the smaller functions and then deploying them on the server side.
This also means that you would only need to run these particular application functions when they’re needed. This saves time and money, especially on a public cloud-based infrastructure. If the application needs to access a particular function, it can be built in real time in the cloud. That function can then be referenced by the application. And if it’s no longer necessary to continue running that compute container, it can be removed from the cloud until you need it the next time.
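Here’s a small sketch of what one of those functions might look like, written in a generic cloud-function style. The handler signature and event format vary from provider to provider, so treat the names here as assumptions rather than any platform’s exact API.

```python
import json

def handler(event: dict, context=None) -> dict:
    """One small, self-contained function: it only formats a greeting.
    The platform spins up a container to run it on demand, then tears it down."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoking it locally the way a serverless platform might:
print(handler({"name": "cloud"}))
```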
Most of the work that’s happening with a serverless architecture occurs in the cloud. So the bulk of the security associated with this application instance and the serverless architecture is all in the cloud itself. Cloud infrastructures also allow us to have extremely efficient application instances. Traditionally, we’ve used monolithic applications running on our desktops. We would install these large applications onto our storage drives and run them in the memory of our system. And that one big application handles all of the functions you need for that app.
Everything about that application is running as one single executable. The user interface, the login screens, and all of the business logic you would have inside of that application occur on the client of that monolithic architecture. This also means that you have a large application that needs to be installed on one local machine. And if that application needs to be updated, we need to process a change control. We need to send the changes down to that particular device. It needs to be installed on that machine, and then we are able to use the newest version.
In the cloud, we have the opportunity to use a more streamlined process that focuses on a microservice architecture. We’re able to take advantage of microservices by using APIs, or Application Programming Interfaces. These allow us to programmatically control the way that an application is working. So instead of having one single executable that handles everything for the application, you can break out individual services for the application and run them as separate instances in the cloud.
All the client would need to do is talk to the API gateway, which would then send the request to the appropriate microservice. This greatly extends the scalability of the application. If there’s a certain portion of the application that is being used more than others, you can roll out additional microservices to handle that load. This also makes it more resilient. If you happen to lose a particular microservice, the rest of the application will continue to work.
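Here’s a minimal sketch of that routing idea in Python: a hypothetical gateway that maps each request path to whichever microservice handles it. If one route gets busier than the others, more copies of just that service could be run behind the gateway.

```python
# Hypothetical microservices; each one is a small, independent piece of the app.
def auth_service(request: dict) -> dict:
    return {"service": "auth", "ok": request.get("user") == "alice"}

def orders_service(request: dict) -> dict:
    return {"service": "orders", "orders": ["order-1001", "order-1002"]}

# The gateway's only job is to send each request to the right microservice.
ROUTES = {
    "/login": auth_service,
    "/orders": orders_service,
}

def api_gateway(path: str, request: dict) -> dict:
    service = ROUTES.get(path)
    if service is None:
        return {"error": "no such route"}
    return service(request)

print(api_gateway("/login", {"user": "alice"}))
print(api_gateway("/orders", {"user": "alice"}))
```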
And since security is applied per microservice, you can provide the proper amount of security for the service that happens to be running. If the microservice is handling authentication, there will be a set of security processes for that. And if the microservice is reading or writing information to a database, there will be a security process associated with that microservice.
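Building on the gateway sketch above, here’s one hypothetical way to attach a different security check to a particular microservice, in this case requiring a database-writer role before the service will run. The role name and wrapper are illustrative assumptions, not a specific platform’s mechanism.

```python
# Hypothetical per-service security check layered onto the gateway sketch above.
def require_role(role: str):
    """Wrap a microservice so it only runs if the caller holds the required role."""
    def wrapper(service):
        def secured(request: dict) -> dict:
            if role not in request.get("roles", []):
                return {"error": f"forbidden: {role} role required"}
            return service(request)
        return secured
    return wrapper

@require_role("db-writer")
def inventory_service(request: dict) -> dict:
    # Only callers with the db-writer role reach the database write path.
    return {"service": "inventory", "written": request.get("item")}

print(inventory_service({"roles": ["db-writer"], "item": "widget"}))
print(inventory_service({"roles": [], "item": "widget"}))
```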