Understanding Jump Servers
What is a jump server?
A jump server (sometimes known as a jump host or jump box) is a server used as a controlled entry point for securely accessing the other systems in an environment.
As you may recall, your three-tiered application is structured something like this:
Of course, administrators will need a way to reach the systems on these three networks without sacrificing the security posture of the systems running the applications. To accomplish this, you create a virtual machine in the Mirantis OpenStack Express environment and open SSH access to that virtual machine (and only that VM), providing an entry point from which to reach the other systems in the environment. These servers are often referred to as “jump servers” or “pivot servers.” There are three common ways to set up jump servers, and the choice you make is dependent on how you want to configure your security posture:
One jump server: Create a single jump server in one tier, assign it a public IP address so it is reachable from the internet, and open port 22 to this server. You can then allow this server to access any of the tiers on port 22.
One jump server per tier: Create a separate jump server in each tier, assign each a public IP address so they are reachable from the internet, and open port 22 to each server. You can then allow each server to access only other systems within its tier on port 22.
Concentric rings of security: Create a separate jump server in each tier, assigning only the Web Tier jump server a public IP address so it is reachable from the internet. Assign the Application Tier and Database Tier jump servers IP addresses from their respective networks.
Open port 22 from the internet to the Web Tier jump server, and allow the Web Tier jump server to connect to any other system in the Web Tier, as well as the Application Tier jump server on port 22.
For the Application Tier jump server, only allow access to it from the Web Tier jump server on port 22. Allow the Application Tier jump server to connect to any other system in the Application Tier network, as well as to the Database Tier jump server on port 22.
For the Database Tier jump server, only allow access to it from the Application Tier jump server on port 22. Allow the Database Tier jump server to connect to any other system in the Database Tier on port 22.
Once this configuration is in place, if you need to access a machine in the Database Tier, you first SSH to the Web Tier jump server, then from there SSH into the Application Tier jump server, and then into the Database Tier jump server. Finally, you can access the server you need in the Database Tier. An attacker would need to compromise three separate jump servers before even attempting to attack a server in the Database Tier.
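As an aside, this chain of hops can be collapsed into a single command with OpenSSH’s `-J` (ProxyJump) flag, which accepts a comma-separated list of intermediate hosts. The hostnames and user below are illustrative placeholders, not names from this environment:

```shell
# Tunnel through all three jump servers to reach a Database Tier host.
# Requires OpenSSH 7.3 or later; hostnames here are placeholders.
ssh -J centos@web-jump,centos@app-jump,centos@db-jump centos@db01
```

Each hop is still authenticated separately, so the security model of the concentric rings is unchanged; only the typing is reduced.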
Option #1 allows for the most “ease of access” in working with the environment. You can easily get directly to any server in your environment in two hops (SSH first to the jump server in the Web Tier and then SSH a second time to the server you wish to access). The downside is that a single server reachable from the internet has SSH access to every tier, so an attacker would only need to compromise that one server to be able to attack any server in your environment.
Option #2 is very similar to Option #1, in that you can easily get directly to any server in your environment in two hops (SSH first to the jump server in the appropriate tier and then SSH a second time to the server you wish to access in that tier). It slightly improves security in that an attacker would need to compromise three separate servers to gain access to your whole environment; however, they still only need to compromise one server to access an entire tier.
Option #3 addresses that exposure by limiting the visibility of SSH to each of the tiers, minimizing the attack vector to a single machine for the Application and Database Tiers. The downside is that it is a lot more tedious to work with, since you have to make several hops depending on where in the environment you want to go.
In this case, however, we’re going to go for the most secure example, and set up Option #3, concentric rings of security.
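The tedium of Option #3 can be eased on the client side by encoding the chain in `~/.ssh/config` with the `ProxyJump` directive. The addresses, hostnames, and user below are hypothetical placeholders:

```
# ~/.ssh/config  (illustrative addresses and user)
Host web-jump
    HostName 203.0.113.10    # public IP of the Web Tier jump server
    User centos

Host app-jump
    HostName 192.168.1.10    # Application Tier network address
    User centos
    ProxyJump web-jump

Host db-jump
    HostName 192.168.2.10    # Database Tier network address
    User centos
    ProxyJump app-jump
```

With this in place, `ssh db-jump` transparently tunnels through both intermediate jump servers while each hop is still authenticated individually.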
In the Security Groups section, we set up one security group for each tier, and then we set up additional security groups for each jump server, configuring their Ingress rules appropriately to match this model. Now we need to create the actual jump servers and add them to these groups.
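As a sketch of what those Ingress rules look like from the command line, the following uses the modern `openstack` client (your environment’s tooling may differ; the group names match those used in this guide):

```shell
# Internet -> Web Tier jump server, port 22
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip 0.0.0.0/0 WebJump

# Web Tier jump server -> Application Tier jump server, port 22
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-group WebJump ApplicationJump

# Application Tier jump server -> Database Tier jump server, port 22
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-group ApplicationJump DatabaseJump
```

Using `--remote-group` rather than an IP range ties each rule to membership in the upstream security group, so the rings hold even if addresses change.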
Launch three instances of the vanilla CentOS image we uploaded into OpenStack. (Before launching an instance, you will want to either upload the key pair created earlier or create a new key pair to ease access.) When launching the instance, be sure to select the appropriate Security Group (WebJump, ApplicationJump or DatabaseJump) as well as the appropriate network for each instance. After launching one instance per tier, your Network Topology should look like this:
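From the command line, launching the Web Tier jump server might look like the following sketch; the image, flavor, key pair, and network names are placeholders you would replace with your own, and the same command is repeated with the matching security group and network for the other two tiers:

```shell
# Placeholder names throughout: substitute your own image, flavor,
# key pair, network, and instance names.
openstack server create \
    --image centos-7 \
    --flavor m1.small \
    --key-name my-keypair \
    --security-group WebJump \
    --network web-tier-net \
    web-jump
```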
What do jump servers use for authentication?
Jump servers are regular servers; you access them using a username and password or, preferably, an SSH key, just as you would with any other server. What makes a jump server special is that it is the only server accessible from outside a cluster.
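For example, generating a dedicated key pair and using it to reach the jump server’s public address might look like this (the key path, user, and IP are placeholders):

```shell
# Generate a key pair for jump server access; in practice you would
# normally protect the private key with a passphrase.
ssh-keygen -t ed25519 -f ~/.ssh/jump_key

# Connect to the jump server's public IP using the private key.
ssh -i ~/.ssh/jump_key centos@203.0.113.10
```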
How do you copy files from a jump server to a local machine?
The easiest way to copy files from a jump server to a local machine depends on the operating systems involved. In most cases, the server is a Linux box, so an SCP client or SFTP client will be most convenient.
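With OpenSSH on the local machine, you can even pull a file from a host behind the full jump chain in one command by passing the `ProxyJump` option through to `scp`. The hostnames, user, and paths here are illustrative placeholders:

```shell
# Copy a file from a Database Tier host to the local machine,
# tunnelling through all three jump servers (placeholder names).
scp -o 'ProxyJump=centos@web-jump,centos@app-jump,centos@db-jump' \
    centos@db01:/var/log/app.log .
```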
What is the relationship between jump servers and proxies?
A jump server is a server that is the only server accessible from outside a cluster; once inside the jump server, you can then access other servers inside the cluster.
A proxy is used to aggregate requests and forward them automatically. For example, if your company uses a proxy, all requests out to the internet go through that proxy, which then retrieves the information and automatically passes it back to the requestor. Similarly, if there's a proxy in front of your company's systems, all requests for information from those systems go to that proxy, which then retrieves the information and automatically passes it back to the requestor.
Want to read more? This article is an excerpt from our new guide, Mirantis OpenStack Express: Application On-boarding Guide (currently in beta). Please let us know what you think.