There are many ways for attackers to find their way into your network. In this video, you’ll learn about zero-day attacks, open permissions, unsecured root accounts, and much more.
There are many ways for attackers to find their way inside of your network. And in this video, we’ll look at some of these common vulnerability types.
The applications we use on our computers and workstations every day have vulnerabilities inside of them. We just haven't found those vulnerabilities yet, but hidden somewhere within the code of these applications is, potentially, a way for an attacker to get into your network.
Security researchers are interested in closing these holes before the attackers find them. And they’re doing research every day to try to identify these vulnerabilities that may be hidden in our software. But of course, the attackers would rather find these vulnerabilities first. They can use these vulnerabilities to find their way into your network or they can sell these vulnerabilities to the highest bidder.
If the attackers do use this vulnerability against you and this vulnerability has never been seen up to this point, then we have a zero-day attack. A zero-day attack means that we’ve never seen this vulnerability before. It’s brand new. And because of that, there’s probably not a patch or a way to prevent this vulnerability from being exploited. Obviously a zero-day attack is something that we should all take very seriously because often it’s very difficult to mitigate these. It’s very difficult to stop something that you had no idea existed in the first place.
You always want to keep an eye on what the latest vulnerabilities might be, and one place to go would be the Common Vulnerabilities and Exposures (CVE) database, which can be found at cve.mitre.org.
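As a quick sanity check before querying the database, CVE identifiers follow a fixed format: `CVE`, a four-digit year, and a sequence number of at least four digits. A minimal sketch (the helper name is just for illustration):

```python
import re

# CVE IDs look like CVE-YYYY-NNNN, where the sequence number has at
# least four digits -- e.g., CVE-2017-5638, the Apache Struts flaw
# behind the Equifax breach discussed later in this video.
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(cve_id: str) -> bool:
    """Return True if the string is a well-formed CVE identifier."""
    return bool(CVE_PATTERN.match(cve_id))
```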
Sometimes, attackers don't need to find a hidden vulnerability inside of software; instead, they wait for you to leave the door open and simply walk in through that open door. This is an open permissions problem, where information has been put onto the internet but no security has been applied to that data, making it very easy for anyone on the internet to access that information. This used to be much more difficult when all of our data was in our private data center and there was no way to gain access to that data from the outside. But of course, we're putting an increasing amount of data in the cloud. And because this data is now located somewhere that can be accessed from anywhere in the world, it's becoming increasingly common to see misconfigurations that allow access to this data.
One of many examples of this occurred in June 2017, when Verizon accidentally exposed 14 million customer records. This data was in an Amazon S3 repository, and instead of applying the proper permissions and security to the repository, it was left open. Fortunately, a researcher found this data before someone malicious could get their hands on it, and the hole was closed without the data becoming public.
Attackers spend a lot of time on these cloud repositories, trying to find sections of data that may have been left open. So make sure that if you’re storing data in the cloud that you’re putting the proper security in place.
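One piece of that proper security is simply auditing who a storage bucket grants access to. A sketch, assuming access grants shaped like the ones AWS returns for an S3 bucket ACL (the group URIs below are the real AWS "all users" URIs, but the helper itself is a hypothetical illustration, not part of any AWS SDK):

```python
# The real AWS group URIs that make a bucket readable by the open
# internet (everyone, or anyone with any AWS account).
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_grants(grants: list[dict]) -> list[dict]:
    """Return any ACL grants that expose the bucket publicly."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]
```

Running a check like this across every bucket in an account is exactly the kind of scan that both researchers and attackers perform.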
Not only do we sometimes leave our data open, we also sometimes leave our accounts open. And if this account is an administrator account or root account, then an attacker may have full control over an operating system. Sometimes, this is a misconfiguration that allows someone access to an administrator account or perhaps the password associated with the administrator account is not strong enough to prevent a brute force attack.
On many systems, the administrator has chosen not to allow interactive access to log into the administrator account. This means no matter how hard the attacker tries to find the correct username and password combinations, they’ll never gain access to the operating system by logging in with the administrator or the root account. Access to the root or the administrative account should be closely monitored. And we should always have policies and procedures in place to prevent casual use of these accounts.
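For OpenSSH, disabling interactive root login comes down to one real directive, `PermitRootLogin no`, in sshd_config. A deliberately simplified sketch of auditing that setting (a real audit would also need to handle Match blocks and Include directives):

```python
def root_login_disabled(config_text: str) -> bool:
    """Check sshd_config-style text for interactive root login."""
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line.lower().startswith("permitrootlogin"):
            parts = line.split()
            # "no" blocks root entirely; "prohibit-password" blocks
            # interactive (password) login but allows keys.
            return len(parts) > 1 and parts[1].lower() in ("no", "prohibit-password")
    # Directive missing: flag it for review rather than assume safety.
    return False
```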
If you’re using an application and an error occurs, it’s very common to see an error message pop up on the screen. But occasionally, we can give too much information in that error message and that information could be used against us. For example, the error message may show the service that’s being used or the application that we’re using. It might show version information associated with that application. And the message may display debug information that could be memory values or information that’s not commonly seen from the outside.
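A common pattern for avoiding this is to log the full detail internally and hand the user only an opaque reference ID. A minimal sketch (the message format and logger name are just examples):

```python
import logging
import uuid

logger = logging.getLogger("app")

def handle_error(exc: Exception) -> str:
    """Log detail internally; show the user only a reference ID."""
    error_id = uuid.uuid4().hex[:8]
    # Exception type, internal values, and versions stay in the log...
    logger.error("error %s: %r", error_id, exc)
    # ...while the user-facing message reveals nothing about internals.
    return f"Something went wrong. Reference ID: {error_id}"
```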
An example of an error message turning into a vulnerability and, ultimately, an exploit occurred in December of 2015 on the website Patreon. They had installed a debugger to help monitor a problem they were having with their website. Normally this is something that would not be visible to the public; it's only meant for internal use. But unfortunately, it was left turned on and exposed to the internet. Attackers found a way to access this well-known debugger and were able to use it to execute code on the web server itself. Using that vulnerability, they were able to transfer gigabytes of customer data and then release all of that information online.
We often emphasize the need to encrypt our data. Encrypt data that we’re storing on our drives. And encrypt data as we’re transferring it across the network. But just because we’re encrypting data doesn’t necessarily mean that it’s well protected.
There are many different kinds of encryption, which is why you need to be sure that you're using strong encryption protocols. Protocols such as AES and Triple DES are very common. And of course, you want to be sure that the length of the encryption key is long enough to provide the proper amount of security.
You also want to be sure that the hashes you’re using don’t have any known vulnerabilities.
And if you’re communicating over a wireless network, make sure you’re using the latest wireless encryption protocols.
There are many different cipher suites. And you’re using different types of ciphers in different places on your network. So it’s always good to stay up to date with what the industry believes are the best ciphers to use on your network.
A good example of a technology where it’s important to stay up to date is the TLS protocol. This is the transport layer security protocol that we commonly use in our browsers to encrypt data. But there are over 300 cipher suites in TLS. Some of them are very secure. And some of them are not secure at all. So it’s important that you configure your web servers and your clients to make sure that they are using the strongest protocols.
There are many best-practice documents on the internet that will help you know exactly which of these cipher suites are the best to use. But in general, you want to avoid any ciphers that are weak or use null (no) encryption. Weak ciphers would include those with encryption keys smaller than 128 bits, which tend to be much easier to brute force. You also want to be sure that you're not using any outdated hashes, like MD5, where known vulnerabilities exist.
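Python's own `ssl` module can show you which suites a TLS context will offer, so you can run this kind of audit locally. The cipher string below uses real OpenSSL syntax: keep HIGH-grade suites and exclude anonymous (`aNULL`), null-encryption (`eNULL`), and MD5-based suites:

```python
import ssl

# Start from Python's hardened default context, then apply an
# explicit OpenSSL cipher string as an extra guardrail.
ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!MD5")

# Audit what the context will actually offer.
weak = [
    c["name"] for c in ctx.get_ciphers()
    if "MD5" in c["name"] or "NULL" in c["name"]
]
# After the filter above, this list should be empty.
```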
And with some of our applications, we’re simply sending the application data in the clear. Anyone who’s watching this data go back and forth on the network would be able to read everything that we’re sending back and forth.
Protocols such as Telnet, FTP, SMTP, and IMAP are good examples of these in-the-clear protocols. If you're not sure whether your application is sending information in the clear, you may want to capture those packets and see if you can read through the packet capture. If it looks like a plain-English description of the information that's being sent across the network, then it is not being encrypted during transport.
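That eyeball test can be turned into a rough heuristic: if most of the bytes in a captured payload are printable characters, the application is probably talking in the clear. A sketch, with an arbitrary threshold chosen for illustration:

```python
def looks_like_cleartext(payload: bytes, threshold: float = 0.9) -> bool:
    """Guess whether a captured payload is unencrypted text.

    Encrypted or compressed data looks like uniform random bytes,
    so a very high ratio of printable ASCII suggests cleartext.
    """
    if not payload:
        return False
    printable = sum(
        1 for b in payload
        if 32 <= b <= 126 or b in (9, 10, 13)  # text, tab, CR, LF
    )
    return printable / len(payload) >= threshold
```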
In many cases, you can reconfigure the application to use an encrypted protocol instead of the in-the-clear protocol. So you might want to change the application to use SSH, SFTP, or IMAPS.
If you don't configure your application to use these secure protocols and you go to an industry event such as DEF CON, you may find yourself on the Wall of Sheep. This is a list of everybody communicating on the network in the clear, along with their passwords and the applications they happen to be using. This information is easily captured from the traffic going over the wireless network, and it's then put on the front board for everyone to see.
We’re connecting more and more devices to our networks these days. And most of these devices have a default username and default password. The attackers know that many people will plug these in and never change that username and password. And they found ways to use this to their advantage.
One such example is the Mirai botnet, which takes advantage of these default usernames and passwords to gain access to these systems and take them over for its own use. These devices then become part of a much larger botnet and are now under the control of the botnet owner. This is a botnet that takes advantage of many different kinds of devices, including cameras, routers, doorbells, garage door openers, and many other IoT, or Internet of Things, devices.
To make this even worse, the Mirai botnet is now open source. So attackers can download the software, modify it for their own purposes, and control even more IoT devices.
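Mirai's dictionary was simply a short hard-coded list of factory credentials, and a defender can run the same idea in reverse: flag any device that still accepts a known default pair. A sketch with a small sample list (these entries do appear in the leaked Mirai source, but the helper itself is illustrative):

```python
# A small sample of factory default credentials; Mirai's actual
# dictionary contained roughly 60 username/password pairs.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "xc3511"),      # an entry from the leaked Mirai source
    ("admin", "password"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Flag credentials that appear in the known-defaults list."""
    return (username, password) in DEFAULT_CREDENTIALS
```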
To be able to use services over a network, we have to open ports on the server so that our applications can talk to the server itself. Unfortunately, opening these ports also creates an opening into the server, and we have to make sure that we're adding the proper security to let the good people in and keep the bad guys out.
We often manage this flow of traffic with firewalls. We'll have software-based firewalls running on the server and network-based firewalls at the ingress and egress points of the network. The firewall will commonly have a rule set that allows or disallows access to certain ports on an IP address, and thereby keeps out anyone who may be trying to attack that device.
Unfortunately, these rule sets tend to become very large, very complex. And as time goes on, they become even more unwieldy. It may become very easy to accidentally allow access to a service that was not intended. In fact, it’s very common for someone who is managing one of these firewalls to occasionally audit the rule base, make sure that all of the rules are up to date, and that no mistakes have been made with IP addresses, port numbers, or any of the other services that are configured in that rule base.
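One concrete mistake such an audit looks for is a shadowed rule: a later rule that can never fire because an earlier rule already matches the same traffic. A deliberately simplified sketch, modeling each rule as just an action and a destination port with first-match-wins semantics (real firewalls also match on addresses, protocols, and ranges):

```python
def find_shadowed_rules(rules: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Return rules that an earlier rule on the same port makes dead."""
    seen_ports: set[int] = set()
    shadowed = []
    for action, port in rules:
        if port in seen_ports:
            shadowed.append((action, port))  # can never match
        seen_ports.add(port)
    return shadowed
```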
We know that these vulnerabilities exist in our software, and, of course, many organizations will occasionally release updates to the software that then need to be deployed on all of our systems. Many organizations will have processes in place to keep all of their systems up to date with the latest patches. This is a priority for many organizations because most of these patches are associated with security vulnerabilities.
There's usually a group of people that will test these patches, make sure that they operate properly in your environment, and then load them on a central server, which will deploy them to all of the other systems in your organization. These patches may be associated with the firmware, or BIOS, of a device. They may be operating system patches, which apply to the core Windows, Linux, or other operating system. Or they may be patches associated with a particular application. We need to patch all of these types of systems to keep everything up to date, because if the patches on your systems are not kept current, the results could be very damaging.
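At its core, the per-system check a central patch server runs is just "is the installed version older than the latest one?". A minimal sketch for simple dotted version strings (real patch managers handle far messier version schemes than this):

```python
def needs_patch(installed: str, latest: str) -> bool:
    """Compare dotted versions numerically, not as strings."""
    def to_tuple(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))
    return to_tuple(installed) < to_tuple(latest)
```

Comparing numerically matters: as plain strings, "2.10.0" would sort before "2.3.5" and a fully patched system would be flagged as out of date.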
For example, in 2017 between May and July, Equifax had a data breach of 147.9 million Americans, 15 million British citizens, and others. The information that was released included names, social security numbers, birth dates, addresses, and more. The attackers were able to get into these systems because a server on the Equifax network had not been properly patched.
This was from a vulnerability in Apache Struts that was identified on March the 7th. The attackers began taking advantage of the unpatched system on March the 12th, but the vulnerability wasn't patched by Equifax until July the 30th. By that point, of course, the attackers were already in the network, already gathering this information, and it was much too late for that patch to have any effect. A disclosure of the breach was made public on September the 7th. And just a week later, the CIO and CSO were told that they were no longer part of the organization. Two years later, Equifax paid over half a billion dollars in fines, all because they weren't able to quickly and properly patch their systems.
If you go into any data center of really any size, you're going to find systems and components that may have been sitting there for a very long time. We refer to these older systems as legacy systems. These legacy devices may be running older operating systems or old applications, and there may be middleware installed. But these are systems that we can't easily turn off or convert, because perhaps they're performing a particular function that can't be duplicated, or no one has really put in the effort to upgrade them to the latest version.
In many cases, these systems are running software that is far beyond its end of life. There are no longer patches being released for it, and it now becomes a security concern. It's, of course, up to the security administrator to weigh the advantages and disadvantages of having this system on the network and to find ways to protect it, even though there may be no way to patch the operating system. This means you may keep some of this legacy equipment around, but you may be adding additional firewalls or other security tools around that system to keep it as secure as possible. This may not be the best possible option when you have software and devices this old on the network. But occasionally, you need to have some type of transition in place so that you can remove the legacy software and put something much more secure in its place.