• 📂

    Setting up a Digital Ocean droplet for a Lupo website with Terraform

    Overview of this guide

    My Terraform Repository used in this guide

    Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider's dashboard and manually clicking buttons and setting things up yourself.

    This is known as “Infrastructure as Code”.

    It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.

    This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers set up to point to Digital Ocean.

    You can then build upon those foundations and work on building out your own desired infrastructures.

    The Terraform Flow

    As a brief outline, here is what happens when working with Terraform. This should hopefully give you a broad picture, from which I can fill in the blanks below.

    • Firstly we write a configuration file that defines the infrastructure that we want.
    • Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
    • Finally we run the terraform plan command to test our infrastructure configuration, and then terraform apply to make it all live.

    Installing the Terraform program

    Terraform has installation instructions, but you may be able to find it with your package manager.

    Here I am installing it on Arch Linux, by the way, with pacman:

    Bash
    sudo pacman -S terraform

    Setting the required variables

    The configuration file for the infrastructure I am using requires only a single variable from outside. That is the do_token.

    This is created manually in the API section of the Digital Ocean dashboard. Create yours and keep its value to hand for usage later.

    Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then use them when prompted by the terraform command. This is slightly more long-winded than just setting a terraform-specific env in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.

    Creating an ssh key

    In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought Iโ€™d create a key pair specific for my website deployment.

    Bash
    ssh-keygen -t rsa

    I give it a different name so as to not overwrite my standard id_rsa one. I call it id_rsa.davidpeachme just so I know which one is my website server key at a glance.
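    The custom filename can also be supplied non-interactively with the -f flag. Here is a sketch of the full command; the key size, comment and passphrase handling are my assumptions, so adjust them to taste:

```shell
# Create a 4096-bit RSA key pair with a custom filename.
# -N "" sets an empty passphrase for the example; use a real
# passphrase (or omit -N to be prompted) for an actual deployment.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N "" -C "website-deploy" \
  -f "$HOME/.ssh/id_rsa.davidpeachme"
```

This writes the private key to ~/.ssh/id_rsa.davidpeachme and the public key to ~/.ssh/id_rsa.davidpeachme.pub.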

    Describing your desired infrastructure with code

    Terraform uses a declarative language, as opposed to an imperative one.

    What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.

    You don't need to be concerned with the nitty-gritty of how it is achieved.

    I have a real-life example that will show you exactly what a minimal configuration can look like.

    Clone / fork the repository for my website server.

    Explanation of my terraform repository

    HCL
    terraform {
      required_providers {
        digitalocean = {
          source = "digitalocean/digitalocean"
          version = "~> 2.0"
        }
      }
    }
    
    variable "do_token" {}
    
    # Variables whose values are defined in ./terraform.tfvars
    variable "domain_name" {}
    variable "droplet_image" {}
    variable "droplet_name" {}
    variable "droplet_region" {}
    variable "droplet_size" {}
    variable "ssh_key_name" {}
    variable "ssh_local_path" {}
    
    provider "digitalocean" {
      token = var.do_token
    }

    The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.

    Since Iโ€™m only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.

    The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.

    Following that are the variables that I have defined myself in the ./terraform.tfvars file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.
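    For reference, a terraform.tfvars for this configuration would look something like the following. All of the values here are placeholders of my own choosing (the image slug, region and size are examples), so swap in your own:

```hcl
domain_name    = "example.com"
droplet_image  = "ubuntu-22-04-x64"
droplet_name   = "website"
droplet_region = "lon1"
droplet_size   = "s-1vcpu-1gb"
ssh_key_name   = "website-deploy"
ssh_local_path = "/home/you/.ssh/id_rsa.davidpeachme.pub"
```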

    The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.

    HCL
    resource "digitalocean_ssh_key" "ssh_key" {
      name       = var.ssh_key_name
      public_key = file(var.ssh_local_path)
    }

    Here is the first resource that I am telling terraform to create. It's taking a public key on my local filesystem and sending it to Digital Ocean.

    This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.

    I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.

    I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.

    HCL
    resource "digitalocean_droplet" "droplet" {
      image    = var.droplet_image
      name     = var.droplet_name
      region   = var.droplet_region
      size     = var.droplet_size
      ssh_keys = [digitalocean_ssh_key.ssh_key.fingerprint]
    }

    Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.

    HCL
    data "digitalocean_domain" "domain" {
      name = var.domain_name
    }

    This block is a little different. Here I am using a data block to grab information about something that already exists in my Digital Ocean account.

    I have already set up my domain in Digital Ocean's networking area.

    This is the overarching domain itself – not the specific A record that will point to the server.

    The reason I'm doing it this way is that I already have mailbox settings and TXT records that are working, so I don't want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run terraform destroy.

    HCL
    resource "digitalocean_record" "record" {
      domain = data.digitalocean_domain.domain.id
      type   = "A"
      name   = "@"
      ttl    = 60
      value  = digitalocean_droplet.droplet.ipv4_address
    }

    The final block creates the actual A record with my existing domain settings.

    It uses the domain id given back by the data block I defined above, and the IP address of the created droplet as the A record value.

    Testing and Running the config to create the infrastructure

    If you now go into the root of your terraform project, first run terraform init to download the required provider plugins. Then run the following command, and you should see a write-up of what Terraform intends to create:

    Bash
    terraform plan

    If the output looks okay to you, then type the following command and enter โ€œyesโ€ when it asks you:

    Bash
    terraform apply

    This should create the three items of infrastructure we have defined.

    Next Step

    Next we need to set that server up with the required software needed to run a static HTML website.

    I will be doing this with a program called Ansible.

    Iโ€™ll be writing up those steps in a zet very soon.


  • 📂

    Beyond Aliases — define your development workflow with custom bash scripts

    Being a Linux user for just over 10 years now, I can’t imagine my life without my aliases.

    Aliases help with removing the repetition of commonly-used commands on a system.

    For example, here’s some of my own that I use with the Laravel framework:

    Bash
    alias a="php artisan"
    alias sail='[ -f sail ] && bash sail || bash vendor/bin/sail'
    alias stan="./vendor/bin/phpstan analyse"

    You can set these in your ~/.bashrc file. See mine in my dotfiles as a fuller example.

    However, I recently came to want greater control over my development workflow. And so, with the help of videos by rwxrob, I came to embrace the idea of learning bash, and writing my own little scripts to help in various places in my workflow.

    A custom bash script

    For the example here, I’ll use the action of wanting to “exec” on to a local docker container.

    Sometimes you’ll want to get into a shell within a local docker container to test / debug things.

    I found I was repeating the same steps to do this and so I made a little script.

    Here is the script in full:

    Bash
    #!/bin/bash
    
    docker container ls | fzf | awk '{print $1}' | \
    xargs -o -I % docker exec -it % bash

    Breaking it down

    In order to better understand this script I’ll assume no prior knowledge and explain some bash concepts along the way.

    Sh-bang line.

    The first line is the “sh-bang”. It basically tells your shell which binary should execute this script when run.

    For example you could write a valid php script and add #!/usr/bin/php at the top, which would tell the shell to use your php binary to interpret the script.

    So #!/bin/bash means we are writing a bash script.

    Pipes

    The pipe symbol: |.

    In brief, a “pipe” in bash is a way to pass the output of the left hand command to the input of the right hand command.
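    As a tiny standalone illustration (made up for this post, nothing docker-related yet), each command here feeds its output into the next:

```shell
# print two lines, sort them, then keep only the first one
printf 'banana\napple\n' | sort | head -n 1
# prints "apple"
```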

    So the commands in the script run in this order:

    1. docker container ls
    2. fzf
    3. awk '{print $1}'
    4. xargs -o -I % docker exec -it % bash

    docker container ls

    This gives us the list of currently-running containers on our system. The output is a list like the one below (I’ve used an image as the formatting gets messed up when pasted into a post as text):

    fzf

    So the output of the docker container ls command above is the table in the image above, which is several rows of text.

    fzf is a “fuzzy finder” tool that can be passed a list of pretty much anything, and lets you narrow that list down by “fuzzy searching” over it.

    In this case the list is each row of that output (header row included).

    When you select (press enter) on your chosen row, that row of text is returned as the output of the command.

    In this image example you can see I’ve typed in “app” to search for, and it has highlighted the closest matching row.

    awk '{print $1}'

    awk is an extremely powerful tool, built into Linux distributions, that allows you to parse structured text and return specific parts of that text.

    '{print $1}' is saying “take whatever input I’m given, split it up based on a delimiter, and return the item that is 1st ($1)”.

    The default delimiter is a space. So looking at that previous image example, the first piece of text in the container rows is the container ID: “df96280be3ad” for the app container chosen just above.

    So pressing enter on your chosen row in fzf will pass it to awk, which will then split that row up by spaces and return you the first element from that internal array of text items.
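    You can see the same idea in isolation. The row below is a made-up stand-in for a docker container ls line, reusing the container ID from the image above:

```shell
# awk splits each line on whitespace by default; $1 is the first field
echo "df96280be3ad myapp:latest bash" | awk '{print $1}'
# prints "df96280be3ad"
```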

    xargs -o -I % docker exec -it % bash

    xargs is another powerful tool, which enables you to pass whatever is given as input into another command. I’ll break it down further to explain the flow:

    The beginning of the xargs command is as so:

    Bash
    xargs -o -I %

    -o is needed when running an “interactive application”. Since our goal is to “exec” on to the docker container we choose, interactive is what we need. -o means to “open stdin (standard in) as /dev/tty in the child process before executing the command we specify”.

    Next, -I % is us telling xargs: “when you next see the % character, replace it with what we give you as input” – which in this case will be that docker container ID returned from the awk command previously.
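    Here is a standalone demo of that -I substitution, with nothing docker-specific in it:

```shell
# each input line replaces the % placeholder in the echo command
printf 'one\ntwo\n' | xargs -I % echo 'item: %'
# prints "item: one" then "item: two"
```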

    So when you replace the % character in the command that we are giving xargs, it will read as such:

    Bash
    docker exec -it df96280be3ad bash

    This will “exec” on to that docker container and immediately run bash in that container.

    Goal complete.

    Put it in a script file

    So all that’s needed now, is to have that full set of piped commands in an executable script:

    Bash
    #!/bin/bash
    
    docker container ls | fzf | awk '{print $1}' | xargs -o -I % docker exec -it % bash

    My own version of this script is in a file called d8exec, which after saving it I ran:

    Bash
    chmod +x ./d8exec

    Call the script

    In order to be able to call your script from anywhere in your terminal, you just need to add the script to a directory that is in your $PATH. I keep mine at ~/.local/bin/, which is pretty standard for a user’s own scripts in Linux.

    You can see how I set my own in my .bashrc file here. The section that reads $HOME/.local/bin is the relevant piece. Each folder that is added to the $PATH is separated by the : character.
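    If your bashrc doesn't already do this, the line needed is just the following (assuming the same ~/.local/bin location):

```shell
# prepend the personal scripts directory to the command search path
export PATH="$HOME/.local/bin:$PATH"
```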

    Feel free to explore further

    You can look over all of my own little scripts in my bin folder for more inspiration for your own bash adventures.

    Have fun. And don’t put anything into your scripts that you wouldn’t want others seeing (API keys, secrets, etc.).


  • 📂

    Strange Things are Afoot

    Complete a task for a stranger

    Red Dead Redemption

  • 📂

    Clemency Pays

    Capture a bounty alive.

    — Red Dead Redemption

  • 📂

    That Government Boy

    Complete “Exodus in America”

    — Red Dead Redemption

  • 📂

    Defeated Londra and His Horus – Horizon Forbidden West

    Defeated the awakened Horus and put a stop to Londra’s plans.


  • 📂

    Confronted Londra – Horizon Forbidden West

    Uncovered the truth of Londra’s plans for the Quen and rescued Seyka’s sister.


  • 📂

    Discovered the Ascension – Horizon Forbidden West

    Located the missing Quen and discovered Londra’s plan to leave Earth.


  • 📂

    Completed Ultra Hard – Horizon Forbidden West

    Completed a new or New Game+ playthrough on Ultra Hard difficulty.


  • 📂

    Completed New Game+ – Horizon Forbidden West

    Completed a New Game+ playthrough on any difficulty.


  • 📂

    Foxglove at Pipe Hall Farm

    Foxglove

    nice flower.


  • 📂

    Setting up a GPG Key with git to sign your commits

    Signing your git commits with GPG is really easy to set up and I’m always surprised by how many developers I meet that don’t do this.

    Of course it’s not required to push commits and has no bearing on quality of code. But that green verified message next to your commits does feel good.

    Essentially there are three parts to this:

    1. Create your GPG key
    2. Tell git to use your GPG key to sign your commits
    3. Upload the public part of your GPG key to Gitlab / Github / etc

    Creating the GPG key if needed

    gpg --full-generate-key
    

    In the interactive guide, I choose:

    1. (1) RSA and RSA (default)
    2. 4096 bits long
    3. Does not expire
    4. Fill in Name, Email, Comment and Confirm.
    5. Enter passphrase when prompted.

    Getting the Key ID

    This will list all of your keys:

    gpg --list-secret-keys --keyid-format=long
    

    Example of the output:

    sec   rsa4096/THIS0IS0YOUR0KEY0ID 2020-12-25 [SC]
          KGHJ64GHG6HJGH5J4G6H5465HJGHJGHJG56HJ5GY
    uid                 [ultimate] Bob GPG Key<mail@your-domain.co.uk>
    

    In that example, the key id that you would need next is “THIS0IS0YOUR0KEY0ID” from the first line, after the forward slash.

    Tell your local git about the signing key

    To set the gpg key as the signing key for all of your git projects, run the following global git command:

    git config --global user.signingkey THIS0IS0YOUR0KEY0ID
    

    If you want to do it on a repository by repository basis, you can run it from within each project, and omit the --global flag:

    git config user.signingkey THIS0IS0YOUR0KEY0ID
    

    Signing your commits

    You can either set commit signing to true for all projects as the default, or on a repo-by-repo basis.

    # global
    git config --global commit.gpgsign true
    
    # local
    git config commit.gpgsign true
    

    If you wanted to, you could even decide to sign commits individually, by not setting a config value and instead passing a flag on each commit:

    git commit -S -m "My signed commit message"
    

    Adding your public key to gitlab / github / wherever

    Firstly export the public part of your key using your key id. Again, using the example key id from above:

    # Show your public key in terminal
    gpg --armor --export THIS0IS0YOUR0KEY0ID
    
    # Copy straight to your system clipboard using "xclip"
    gpg --armor --export THIS0IS0YOUR0KEY0ID | xclip -sel clipboard
    

    This will spit out a large key text block beginning and ending with comments. Copy all of the text that it gives you and paste it into the gpg textbox in your git forge of choice – gitlab / github / gitea / etc.


  • 📂

    Proficient Agent – Resident Evil 4

    Complete the main story on Hardcore mode or higher.

