Using Ansible to prepare a Digital Ocean droplet to host a static website

Preface

This guide follows on logically from the previous one I wrote about setting up a Digital Ocean server with Terraform.

You can clone my website’s ansible repository for reference.

The main logic for this Ansible configuration lives in the setup.yml file. This file can be called whatever you like, since we'll reference it by name when running the playbook later on.

Installing Ansible

You can install Ansible with your package manager of choice.

I install it using pacman on Arch Linux:

Bash
sudo pacman -S ansible

The inventory.yml file

The inventory file is where I set the configuration the playbook needs in order to reach the server.

The all key contains all of the host configurations (although I’m only using a single one).

Within that all key is vars.ansible_ssh_private_key_file which is just the local path to the ssh private key used to access the server.

This is the key I set up with Terraform in the previous guide.

Then the hosts key contains the hosts I want to be able to target (I'm using the domain name that I set up in the previous Terraform guide).
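Putting that description together, a minimal inventory.yml sketch could look like this. The key path and hostname are the ones mentioned in these guides; yours will differ, and the real file is in the repository linked above.

```yaml
# inventory.yml — a minimal sketch, not the exact file from my repo.
all:
  vars:
    # Local path to the ssh private key created in the Terraform guide.
    ansible_ssh_private_key_file: ~/.ssh/id_rsa.davidpeachme
  hosts:
    # The single host I target, by its domain name.
    zet.davidpeach.me:
```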

The setup.yml file explained

The setup.yml file is what is known as an “Ansible Playbook”.

From my limited working knowledge of Ansible, a playbook is basically a set of tasks that are run against a server or a collection of servers.

I currently only run mine against a single server, which I target via its domain name, zet.davidpeach.me.

YAML
- hosts: all
  become: true
  user: root
  vars_files:
    - vars/default.yml

This first section is the setup of the playbook.

hosts: all tells it to run against all hosts that are defined in the ./inventory.yml file.

become: true is saying that Ansible will switch to the root user on the server (defined on the next line with user: root) before running the playbook tasks.

The vars_files: part lets you set relative paths to files containing variables that are used in the playbook and inside the file ./files/nginx.conf.j2.

I won't go through each of the variables, but hopefully you can see what they are doing.
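To make that concrete, a vars/default.yml might look something like the sketch below. http_host is the one variable referenced in this guide (it's used in the nginx template); the other names are illustrative guesses at typical companions, not necessarily what's in my repo.

```yaml
# vars/default.yml — illustrative sketch; only http_host is
# referenced directly in this guide.
http_host: "zet.davidpeach.me"   # used for the nginx root directory
http_conf: "zet.davidpeach.me.conf"   # assumed name for the nginx config
http_port: "80"                  # assumed port variable
```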

The Playbook Tasks

Each of the tasks in the Playbook has a descriptive title that hopefully does well in explaining what the steps are doing.

The key-value pairs of configuration after each of the task titles are pre-defined settings available to use in Ansible.

The tasks read from top to bottom and essentially automate the steps that normally need to be manually done when preparing a server.
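To give a flavour of the shape those tasks take, here is a hedged sketch of two typical ones. apt and file are real Ansible modules, and the http_host variable matches the one used in the nginx template; the actual tasks in my playbook live in the repo and may differ.

```yaml
# A sketch of typical playbook tasks — not the exact ones from my repo.
- name: Install nginx
  apt:
    name: nginx
    state: latest
    update_cache: yes

- name: Create the website root directory
  file:
    path: "/var/www/{{ http_host }}"
    state: directory
    mode: "0755"
```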

Running the playbook

Bash
cd ansible-project

ansible-playbook setup.yml -i inventory.yml

This command should start Ansible off. You should get the usual message about trusting the target host when first connecting to the server. Just answer “yes” and press enter.

You should now see the output for each step defined in the playbook.

The server should now be ready to deploy to.

Testing your webserver

In ./files/nginx.conf.j2 there is a root directive on line 3. For me this is set to /var/www/{{ http_host }}. (http_host is a variable set in the vars/default.yml file).

SSH on to the server, using the private key from the keypair set up earlier (see the Terraform guide for reference):

Bash
ssh -i ~/.ssh/id_rsa.davidpeachme zet.davidpeach.me

Then on the server, create a basic index.html file in the website root defined in the default nginx file:

Bash
cd /var/www/zet.davidpeach.me
touch index.html
echo "hello world" > index.html

Now, visiting your website URL in a browser, you should see the text “hello world” in the top left.

The server is ready to host a static html website.

Next Step

You can use whatever method you prefer to get your html files on to your server.

You could use rsync, scp, an overly-complicated CI pipeline, or – if you’re using lupo – you could have lupo deploy it straight to your server for you.

Setting up a Digital Ocean droplet for a Lupo website with Terraform

Overview of this guide

My Terraform Repository used in this guide

Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider’s dashboard and manually clicking buttons and setting up things yourself.

This is known as “Infrastructure as Code”.

It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.

This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers set up to point to Digital Ocean.

You can then build upon those foundations and work on building out your own desired infrastructures.

The Terraform Flow

As a brief outline, here is what happens when working with Terraform; hopefully it gives you a broad picture from which I can fill in the blanks below.

  • Firstly we write a configuration file that defines the infrastructure that we want.
  • Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
  • Finally we run the terraform plan command to preview what will be created, and then terraform apply to make it all live.

Installing the Terraform program

Terraform has installation instructions, but you may be able to find it with your package manager.

Here I am installing it on Arch Linux with pacman:

Bash
sudo pacman -S terraform

Setting the required variables

The configuration file for the infrastructure I am using requires only a single variable from outside: the do_token.

This is created manually in the API section of the Digital Ocean dashboard. Create yours and keep its value to hand for usage later.

Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then paste them in when prompted by the terraform command. This is slightly more long-winded than setting a Terraform-specific environment variable in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.

Creating an ssh key

In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought I’d create a key pair specific for my website deployment.

Bash
ssh-keygen -t rsa

I give it a different name so as not to override my standard id_rsa one. I call it id_rsa.davidpeachme so that I know at a glance which one is my website server key.
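As an optional convenience (not a required step in this guide), an entry in ~/.ssh/config saves passing the key path on every connection. The hostname and key path below are the ones used in these guides.

```text
# ~/.ssh/config — optional convenience entry
Host zet.davidpeach.me
    User root
    IdentityFile ~/.ssh/id_rsa.davidpeachme
```

With this in place, `ssh zet.davidpeach.me` picks up the right key automatically.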

Describing your desired infrastructure with code

Terraform uses a declarative language, as opposed to an imperative one.

What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.

You don’t need to be concerned with the nitty-gritty of how it is achieved.

I have a real-life example that will show you exactly what a minimal configuration can look like.

Clone / fork the repository for my website server.

Explanation of my Terraform repository

HCL
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

# Variables whose values are defined in ./terraform.tfvars
variable "domain_name" {}
variable "droplet_image" {}
variable "droplet_name" {}
variable "droplet_region" {}
variable "droplet_size" {}
variable "ssh_key_name" {}
variable "ssh_local_path" {}

provider "digitalocean" {
  token = var.do_token
}

The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.

Since I’m only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.

The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.

Following that are the variables that I have defined myself in the ./terraform.tfvars file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.
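For reference, the terraform.tfvars file pairs each of those declared variables with a value. The sketch below uses illustrative placeholder values (all of them are real Digital Ocean slugs, but they are my guesses, not the actual contents of the repo — check the repo for the real ones).

```hcl
# terraform.tfvars — illustrative placeholder values only.
domain_name    = "davidpeach.me"
droplet_image  = "ubuntu-22-04-x64"
droplet_name   = "zet"
droplet_region = "lon1"
droplet_size   = "s-1vcpu-1gb"
ssh_key_name   = "davidpeachme"
ssh_local_path = "/home/david/.ssh/id_rsa.davidpeachme.pub"
```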

The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.

HCL
resource "digitalocean_ssh_key" "ssh_key" {
  name       = var.ssh_key_name
  public_key = file(var.ssh_local_path)
}

Here is the first resource that I am telling Terraform to create. It’s taking a public key on my local filesystem and sending it to Digital Ocean.

This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.

I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.

I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.

HCL
resource "digitalocean_droplet" "droplet" {
  image    = var.droplet_image
  name     = var.droplet_name
  region   = var.droplet_region
  size     = var.droplet_size
  ssh_keys = [digitalocean_ssh_key.ssh_key.fingerprint]
}

Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.

HCL
data "digitalocean_domain" "domain" {
  name = var.domain_name
}

This block is a little different. Here I am using the data property to grab information about something that already exists in my Digital Ocean account.

I have already set up my domain in Digital Ocean’s networking area.

This is the overarching domain itself – not the specific A record that will point to the server.

The reason I’m doing it this way is that I already have mailbox settings and TXT records that are working, so I don’t want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run terraform destroy.

HCL
resource "digitalocean_record" "record" {
  domain = data.digitalocean_domain.domain.id
  type   = "A"
  name   = "@"
  ttl    = 60
  value  = digitalocean_droplet.droplet.ipv4_address
}

The final block creates the actual A record with my existing domain settings.

It uses the domain id given back by the data block I defined above, and the IP address of the created droplet for the A record value.
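Not part of the repo as described, but a small optional addition worth knowing about: a Terraform output block would print the droplet's IP address after apply, which is handy for checking that the A record points at the right place.

```hcl
# Optional addition (not in the repo): show the droplet's IP after apply.
output "droplet_ip" {
  value = digitalocean_droplet.droplet.ipv4_address
}
```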

Testing and Running the config to create the infrastructure

If you now go into the root of your terraform project, run terraform init once (this downloads the required providers), and then run the following command, you should see a write-up of what Terraform intends to create:

Bash
terraform plan

If the output looks okay to you, then type the following command and enter “yes” when it asks you:

Bash
terraform apply

This should create the three items of infrastructure we have defined.

Next Step

Next we need to set that server up with the required software needed to run a static html website.

I will be doing this with a program called Ansible.

I’ll be writing up those steps in a zet very soon.