Launch Virtual Machines on AWS – Documentation and Tools
Learn to start fleets of Linux servers on AWS - great for deploying and testing "big software"
Evaluating and learning on Kubernetes and other complex software — and of course, deploying such software for production — sometimes demands building clusters of some scale and scope. Public and private clouds like AWS, VMware, and OpenStack can be great for this, providing resources on demand. Where Kubernetes deployers include so-called “provider integrations,” as with Mirantis Container Cloud, you don’t even need to provision cloud resources manually: just use the simple web UI to provision your cluster, and Container Cloud does the rest, communicating with the cloud’s management API and marshaling the infrastructure resources it needs.
In other situations — for example, when deploying Mirantis Kubernetes Engine with Launchpad, k0s with k0sctl, or other applications and platforms — you may need to create a resource pool in advance to host your deployment. In simple cases, that usually means:
A collection of servers …
Running a supported operating system …
In the same VPC and availability zone …
Sharing a subnet …
Equipped with network interfaces and accessible to one another on private IP addresses …
With public IP addresses so you can reach them …
Secured from incursions with properly-defined security groups, and …
Configured appropriately to host your project
Starting up and configuring resource pools like this is quite simple on AWS. This tutorial shows you how. We’ll be using Ubuntu 18.04 LTS 64-bit servers as an example: creating a two-server resource pool sized to host a usable manager + worker configuration of Mirantis Kubernetes Engine or k0s Kubernetes.
Step 1: Check host requirements
The tutorials Download Mirantis Kubernetes Engine and Download k0s – Zero-Friction Kubernetes each contain recommended minimum and production requirements for manager and worker node hosts. Here are the minimum requirements for Mirantis Kubernetes Engine:
Minimum Hardware Requirements for MKE nodes
8GB of RAM for manager nodes
4GB of RAM for worker nodes
2 vCPUs for manager nodes
25GB of free disk space
What we can glean from this is that Mirantis Kubernetes Engine should run fine, for evaluation, on two nodes, each with two vCPUs (virtual CPUs) and 8GB of RAM, plus a 30GB virtual SSD (which accounts for the 25GB minimum, plus 5GB — more than enough — for the operating system).
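If you have the AWS CLI installed and configured, you can confirm which EC2 instance types match this 2 vCPU / 8GB profile before picking one in the console. This is a sketch assuming AWS CLI v2 with working credentials:

```shell
# List current-generation instance types with exactly 2 vCPUs and 8 GiB RAM.
# (t2.large, the type we pick below, is among them.)
aws ec2 describe-instance-types \
  --filters "Name=vcpu-info.default-vcpus,Values=2" \
            "Name=memory-info.size-in-mib,Values=8192" \
  --query "InstanceTypes[].InstanceType" \
  --output table
```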
Amazon AWS EC2 Resources dashboard. Note the orange Launch Instance button, and the link to Key pairs (center, top).
Step 2: Create a keypair
Log into your AWS account, and navigate to the EC2 dashboard page (see above).
You’ll need to create a keypair (and download the private key) or import a public key (from a keypair you’ve already created, offline). This will let you authenticate to your servers and log in with SSH.
The EC2 Dashboard (see illustration) has a link to the Key Pairs page. If you’ve created a key pair offline, you’ll click Actions > Import Key Pair and upload the public key, then give the keypair a name.
If you elect to create a keypair, click the orange Create Key Pair button, and you’ll see a dialog like the image below:
Select the file format you find most convenient — typically, for Linux servers, this is .pem — name the keypair, and click Create key pair. AWS will generate the keypair, and let you download the private key as a .pem file. Save this and make sure to move it (copy/paste should work – it’s just a text file) to the ~/.ssh folder of the machine you’re using to manage deployments. Use chmod to give it permissions as follows:
chmod 600 ~/.ssh/mykeypair.pem
Later, when you create servers, you’ll designate the stored keypair while configuring them, then use the associated private key to access them.
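The same keypair steps can be done from the command line. A sketch, assuming AWS CLI v2 and the keypair name “mykeypair” (substitute your own):

```shell
# Create a keypair in AWS and save the private key locally with tight permissions.
aws ec2 create-key-pair \
  --key-name mykeypair \
  --query "KeyMaterial" \
  --output text > ~/.ssh/mykeypair.pem
chmod 600 ~/.ssh/mykeypair.pem

# Or import a public key you generated offline:
# aws ec2 import-key-pair --key-name mykeypair \
#   --public-key-material fileb://~/.ssh/id_rsa.pub
```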
Step 3: Launch instances
Navigate to the EC2 dashboard page (see above).
Click the Launch Instance button, making a note of the AWS region in which it wants to launch your VMs. If this is inconvenient or conflicts with something else you’re doing, you can change the region using the popdown at the top right of the page.
Step 4: Pick an Ubuntu 18.04 SSD AMI
The first thing AWS does is let you search for an operating system image for your servers. Enter ‘ubuntu’ and hit Enter, and you’ll see a listing for an Ubuntu 18.04 LTS AMI with Solid-State Disk. Click the radio button and select the 64-bit version.
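You can also look up the AMI ID from the command line, which is handy if you plan to script the launch. A sketch, assuming AWS CLI v2; 099720109477 is Canonical’s AWS account ID:

```shell
# Find the most recent official Ubuntu 18.04 LTS amd64 (SSD-backed) AMI
# in your configured region.
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" \
            "Name=state,Values=available" \
  --query "sort_by(Images, &CreationDate)[-1].{AMI:ImageId,Name:Name}" \
  --output table
```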
Step 5: Pick a virtual machine type
Next, you’ll be shown a long list of VM configurations. Look down the list until you see the t2.large type.
Step 6: Start two instances, with public IPs, on an unoccupied subnet
Next, you’ll set instance details. You’ll be starting two of these VMs, and ideally, you’d put them on a subnet separate from other workloads. You can also create a new VPC and subnet just for this project — a good idea if you plan to work with your cluster for a long time.
It’s hidden beneath the popdown, but by default this subnet assigns a public IP address to new machines created there. Check this: you need these servers to have public IPs unless you plan to reach them via a VPN connection to the subnet.
Step 7: Configure storage
Earlier, we decided that 30GB of SSD on each server would be enough. So enter 30GB in the storage configuration dialog that comes up next.
You can skip the Add Tags dialog that pops up next.
Step 8: Create a new security group
In the next dialog, you’ll be asked if you want to use an existing security group or create a new one. Elect to create a new one and give it a name. You don’t need to add any rules yet: the security group must exist before we can make it self-referential. We’ll show you how, in a minute.
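The CLI equivalent is a single call. A sketch — the group name “mke-eval” and the VPC ID are placeholders, so substitute your own:

```shell
# Create an empty security group in your VPC; we'll add rules in Step 11.
aws ec2 create-security-group \
  --group-name mke-eval \
  --description "MKE / k0s evaluation cluster" \
  --vpc-id vpc-0abc1234567890def
```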
Step 9: Launch your instances
You’re done with configuration. When you click Launch, AWS will ask you which keypair to use with these instances. Give it the name of the keypair you created earlier. Then follow the links back to the Instances list, to watch your VMs start up.
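If you’d rather script Steps 4 through 9, the whole launch collapses into one AWS CLI call. This is a sketch: every ID below is a placeholder, and the Name tag “mke-node” is our own invention:

```shell
# Launch two t2.large Ubuntu 18.04 instances with public IPs
# and 30GB gp2 root volumes, using the keypair from Step 2.
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.large \
  --count 2 \
  --key-name mykeypair \
  --security-group-ids sg-0abc1234567890def \
  --subnet-id subnet-0abc1234567890def \
  --associate-public-ip-address \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=30,VolumeType=gp2}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=mke-node}]'
```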
Step 10: Get IP addresses
In the instance display, click the checkbox to the left of each of your new servers, input a name for the server if you like (helps keep track), and note down its private and public IP addresses.
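Rather than copying addresses from the console, you can pull them with a query. A sketch, assuming AWS CLI v2:

```shell
# Tabulate name, private IP, and public IP for all running instances.
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].{Name:Tags[?Key=='Name']|[0].Value,Private:PrivateIpAddress,Public:PublicIpAddress}" \
  --output table
```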
Step 11: Modify security group rules
In the Security tab under one of your instances, click on the ID of the security group you just created (both instances are in this security group). Edit the inbound rules by following this simple tutorial: How to Set up AWS Security Groups for Software Evaluation. The result will be to let both servers freely access addresses on the outbound side, and on the inbound side, accept traffic only from instances in the same security group, and from your current public IP address. This should keep your project relatively safe from incursions.
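The same two inbound rules can be added from the command line. A sketch — the security group ID is a placeholder, and checkip.amazonaws.com is AWS’s own IP-echo service:

```shell
SG=sg-0abc1234567890def
MYIP=$(curl -s https://checkip.amazonaws.com)

# Allow all traffic between members of the group (self-referential rule).
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" \
  --protocol -1 \
  --source-group "$SG"

# Allow SSH only from your current public IP.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" \
  --protocol tcp --port 22 \
  --cidr "${MYIP}/32"
```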
Step 12: SSH into your servers
At this point, from the machine you’re using to run your project, you should be able to log into each of your servers on its public IP, using your newly-created private key to authenticate.
ssh -i ~/.ssh/mykeypair.pem ubuntu@<ip_address>
Note that the default administrative username of an Ubuntu instance on AWS is ‘ubuntu,’ and instances are preconfigured automatically for passwordless use of sudo, a requirement of some deployment packages that use SSH to connect to, and configure, servers.
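To avoid typing the -i flag and IP address each time, you can add an entry to ~/.ssh/config. The host alias “mke-manager” below is hypothetical; substitute your server’s public IP for <ip_address>:

```
Host mke-manager
    HostName <ip_address>
    User ubuntu
    IdentityFile ~/.ssh/mykeypair.pem
```

After this, `ssh mke-manager` connects directly.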