It can require many different network appliances to properly secure a network. In this video, you’ll learn about jump servers, application proxies, load balancing, sensors, collectors, and more.
If you’re on the inside of a private or internal network, it’s relatively easy to connect to and manage devices that may also be on the inside of that network. But what if you need to manage these devices and you’re on the outside of the network? In that case, you may be able to take advantage of a jump server. A jump server is a device on the inside of your network that is accessible from the outside.
It’s usually hardened, with security controls that limit access to only those individuals who are authorized. This means access is usually a two-step process: the external client first connects to the jump server, and from the jump server they might SSH to a web server to make changes to its configuration. From a security perspective, it’s important that this jump server is properly hardened and properly secured.
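The access check at the heart of that two-step process can be sketched in a few lines of Python (the usernames, hostnames, and allowlist here are hypothetical, purely for illustration):

```python
# Minimal sketch of a jump server's access check: only authorized
# (user, internal target) pairs are allowed to hop inward.
# The allowlist below is hypothetical, for illustration only.
AUTHORIZED_HOPS = {
    ("alice", "web01.internal"),
    ("bob", "db01.internal"),
}

def authorize_hop(user: str, target: str) -> bool:
    """Return True only if this user may jump to this internal host."""
    return (user, target) in AUTHORIZED_HOPS

# A hardened jump server would perform this check (plus strong
# authentication and logging) before opening the inner SSH session.
```

In practice, OpenSSH can perform the two-step hop in a single command with its ProxyJump option, for example `ssh -J jump.example.com web01.internal`.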
You would not want unauthorized access to the jump server from someone who is outside of your network because then they could potentially gain access to the devices on the inside. Another useful network appliance is a proxy server. A proxy is designed to sit in the middle of a conversation between two devices and make requests on behalf of one of those users. This is commonly seen with users that might be on the inside of the network and they want to communicate with the device that’s on the internet.
Instead of communicating directly with those devices on the internet, your internal devices communicate with a proxy server, and the proxy server makes that request to the internet. The response is then sent back to the proxy server, which can evaluate that response, confirm that the information in it is not malicious, and then send a response down to the original requester. There are many different uses for a proxy, and one is simply caching. The first person who makes a request through the proxy server has that information cached on the proxy, which means a second person making the same request can simply receive the cached response.
This saves a great deal of bandwidth and time because you’re not sending those subsequent requests out to those devices on the internet. These proxies can also perform URL filtering so they will limit what websites you’re able to visit. Many proxies will also provide content scanning so anything that’s received by the proxy can be analyzed to see if there might be malicious traffic or some type of exploit and if it is, it can be blocked at the proxy.
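To make the caching and URL-filtering behavior concrete, here’s a minimal Python sketch of a forward proxy’s decision logic (the blocklist and fetch function are hypothetical stand-ins, not a real proxy implementation):

```python
# Sketch of a forward proxy's caching and URL-filtering logic.
# The blocked-host list and the fetch function are hypothetical.
class FilteringCachingProxy:
    def __init__(self, fetch, blocked_hosts):
        self._fetch = fetch            # function that performs the real outbound request
        self._blocked = set(blocked_hosts)
        self._cache = {}               # url -> cached response body

    def request(self, url):
        host = url.split("/")[2]       # crude host extraction, enough for the sketch
        if host in self._blocked:
            return "403 Forbidden"     # URL filtering: disallowed sites are blocked
        if url not in self._cache:     # first request goes out to the internet
            self._cache[url] = self._fetch(url)
        return self._cache[url]        # identical requests are answered from cache
```

The first request for a URL goes out to the internet; identical requests afterward are answered from the cache, and blocked hosts are never contacted at all.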
You’ll generally see two different types of proxies in use. One is a proxy that needs to be configured in the application or operating system that you’re using. This type of proxy is referred to as an explicit proxy because you are explicitly naming the IP address or name of the proxy that you’re communicating with. Another type of proxy is a transparent proxy.
From the end user’s perspective, they have no idea the proxy is even in place. The proxy is able to sit in the middle of the conversation and automatically make requests on a user’s behalf without configuring anything else in the operating system. Since the users have no idea the proxy is in place, we refer to that as a transparent proxy. There are many different types of proxies you might use.
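As an example of an explicit proxy, Python’s standard library lets a client name the proxy directly (the proxy address below is hypothetical):

```python
import urllib.request

# Configuring an explicit proxy: the client explicitly names the
# proxy's address. The proxy host and port here are hypothetical.
proxy_handler = urllib.request.ProxyHandler({
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
})
opener = urllib.request.build_opener(proxy_handler)
# opener.open("http://example.com/") would now route through the proxy.
# A transparent proxy needs none of this configuration: it sits
# in the network path and intercepts the traffic automatically.
```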
A proxy you might use every day is a very simple one: network address translation (NAT), which converts between internal and external IP addresses on internet-facing routers. But the proxy type that many people think about is an application-level proxy. An application proxy understands the protocols used for a specific application. For example, you might be using an HTTP proxy.
Some proxies can work with multiple protocols so you might be able to proxy HTTP, HTTPS, FTP, and other protocols as well. If you’re using a proxy inside of your network to control outbound traffic to the internet, then you’re using a forward proxy. Sometimes you’ll hear this referred to as an internal proxy. With a forward proxy, your users make requests to an internal proxy server that is inside of your network.
That proxy makes the requests out to the internet to the website. The website replies back to the proxy, which then examines the traffic. And if everything looks legitimate, it will send the response down to the user.
Reverse proxies provide a similar type of function, but for inbound traffic to a specific service. For example, there might be users on the internet that would like to communicate to a web server that’s inside of your network. Instead of having the users communicate directly to the web server, they connect to a proxy server.
The proxy server then makes requests on the user’s behalf. Those responses are sent from the web server to the proxy server, which then sends that response to the user on the internet. This provides additional security in front of your web server.
If there is any type of malicious traffic inbound, it can be dropped at the proxy rather than being sent to the web server. This proxy can also act as a caching server for identical requests coming from the internet. The first request from the internet would be handled by the proxy. It will be forwarded to the web server for its response.
The response back to the proxy, though, is saved in a local cache. All subsequent identical requests from the internet are then sent, of course, to the proxy server. But instead of sending that request again to the web server, we simply pull from the cache and send the response directly from the proxy. This provides a much faster response time for the users and it limits the amount of load that’s being sent to the web server.
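Here’s a rough Python sketch of that reverse-proxy caching, with a counter showing how little load actually reaches the web server (the backend function is a hypothetical stand-in):

```python
# Sketch of reverse-proxy caching in front of a single web server:
# identical inbound requests are answered from the proxy's cache,
# limiting the load that reaches the backend. Names are illustrative.
class ReverseProxyCache:
    def __init__(self, backend):
        self._backend = backend        # function standing in for the web server
        self._cache = {}               # path -> cached response
        self.backend_hits = 0          # how many requests reached the server

    def handle(self, path):
        if path not in self._cache:
            self.backend_hits += 1     # only cache misses hit the web server
            self._cache[path] = self._backend(path)
        return self._cache[path]       # everyone else is served from cache
```

One hundred identical requests from the internet would result in a single request to the web server; the other ninety-nine are served directly from the proxy.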
Another type of proxy you might find is an open proxy. From a security perspective, this is a significant concern because this is a proxy that is simply available for anyone on the internet to be able to use. These are commonly used to circumvent existing security controls. So instead of your device talking to a proxy or a firewall inside of your network, this information is sent directly to an external proxy, which then makes the request for you to another internet site.
But there are significant security concerns when working with an open proxy. This proxy is managed by some third party, but we have no idea who that might be. And that third party could add additional information into these traffic flows that are going back and forth. The proxy owner could be adding advertisements into the messages being sent back and forth, or they might include malicious code that would infect the devices on your network.
In many organizations, these open proxies are blocked to limit the security risk of communicating through unknown devices. Another useful network appliance is the load balancer. A load balancer does exactly what the name implies: it takes load coming from one direction and distributes that load across multiple servers. You often see load balancers used in large-scale implementations.
You might have a group of web servers or a farm of database servers, and having a load balancer in place maintains efficiency and keeps the load even across all of those devices. Another nice feature of a load balancer is fault tolerance. If a server connected to a load balancer were to fail, the load balancer would recognize that the server was no longer communicating and would split the load among the remaining servers.
This convergence happens very quickly, and most users don’t even realize that a change has been made. Many load balancers run in an active/active configuration, which means all of the servers connected to the load balancer are active and in use. This allows a single load balancer to manage the load across all of those individual servers.
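A minimal Python sketch of active/active distribution, including the convergence that happens when a server fails (server names are illustrative):

```python
# Sketch of an active/active load balancer: requests are distributed
# round-robin across all servers, and a failed server is removed so
# the load converges onto the remaining ones. Names are illustrative.
class ActiveActiveBalancer:
    def __init__(self, servers):
        self._servers = list(servers)  # every server is active
        self._next = 0                 # round-robin position

    def mark_failed(self, server):
        self._servers.remove(server)   # stop sending load to a dead server

    def pick(self):
        server = self._servers[self._next % len(self._servers)]
        self._next += 1
        return server                  # next request goes to this server
```

When `mark_failed` removes a server, subsequent picks are split among the survivors, which is the fast convergence users never notice.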
This load balancer can also provide TCP offloading, which means that it doesn’t have to set up an individual TCP communication session for every user connecting to the servers. Instead, it will keep one single TCP connection open all the time and simply distribute the load without having to recreate separate TCP sessions each time. Some load balancers can also offload the SSL decryption process.
Instead of having the servers handle the decryption individually, all of the decryption can be done on the load balancer, which then sends the decrypted, in-the-clear traffic down to the servers. On the way back, the load balancer encrypts the response and sends it back to the user. There’s usually purpose-built hardware inside the load balancer to provide efficient encryption and decryption.
Just like a proxy, a load balancer can also provide caching, especially if there are multiple identical requests being made to the web servers. And there may be prioritization built into this as well where certain applications or certain protocols may have a higher priority than others. And many load balancers can also provide content switching, which means they recognize the type of requests being made and can send certain requests to specific web servers that are connected to the load balancer.
Some load balancers can work in an active/passive configuration. This means some of the servers connected to the load balancer are active and handling traffic, while other servers are ready to work but are not currently being used. If any of the active servers fail, the load balancer recognizes that the server is no longer communicating and begins sending those requests to the standby, or passive, servers that are ready to go.
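That active/passive failover could be sketched like this (server names are illustrative):

```python
# Sketch of active/passive failover: passive servers stand by and are
# promoted only when an active server fails. Names are illustrative.
class ActivePassiveBalancer:
    def __init__(self, active, passive):
        self.active = list(active)     # currently serving traffic
        self.passive = list(passive)   # healthy but idle standbys

    def fail(self, server):
        self.active.remove(server)     # take the failed server out of rotation
        if self.passive:
            # promote the first standby to active duty
            self.active.append(self.passive.pop(0))
```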
In our example here, we have server A, server B, server C, and server D. And you can see that server A and B have a green light so they are actively being used by the load balancer. And we have standby or passive web servers that are configured with these red notations. A user will make requests to the load balancer, and of course the load balancer will distribute that load to any of the servers that may currently be working.
However, if a server were to fail for any reason, the load balancer would recognize the failure, mark that device as no longer available, and promote one of the passive or standby devices to an active device. Any subsequent requests to the load balancer would then be sent to the new active device. Another useful series of network appliances, commonly used for network management and monitoring, are sensors and collectors.
Some devices that you may already be using have sensors built into them. Switches, routers, firewalls, and other devices may have the ability to compile statistics and make them available to these collectors. There’s also a great deal of information that can be gathered from intrusion prevention systems, authentication logs, web server access logs, and anything else that might be connected to the network.
You can also use sensors that are designed as separate devices whose sole purpose is to collect statistics from the traffic going across the network. All of the details being collected by these sensors are sent to one central database called a collector. Sometimes this is a proprietary console written specifically to work with certain types of devices, for example, an IPS or a firewall. Or you may be using something that can collect data from many different types of devices, such as a SIEM.
This is a security information and event manager, which not only consolidates all of this data into a single database, but also provides a powerful reporting tool to consolidate, correlate, and compare different types of data across all of these very diverse devices. Here’s an example of the console that you might find on a SIEM.
This one is showing a report that describes different types of events that are occurring on the network, including failed authentications and port scans. You also see a log of security events at the top. And there are many options for other types of reports that you can find on the left sidebar.
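The kind of correlation a SIEM performs, such as flagging repeated failed authentications from a single source, could be sketched like this (the event format and threshold are hypothetical):

```python
from collections import Counter

# Sketch of a collector consolidating events from multiple sensors and
# correlating them: here, counting failed authentications per source IP
# and flagging repeat offenders. Event shape and threshold are hypothetical.
def correlate_failed_logins(events, threshold=3):
    """Return the source IPs with at least `threshold` failed logins."""
    failures = Counter(
        e["source_ip"] for e in events if e["type"] == "auth_failure"
    )
    return {ip for ip, count in failures.items() if count >= threshold}
```

A single failed login from one address is noise; the same correlation across every sensor on the network is what turns consolidated logs into an actionable report.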