An overview of how I set up Kubernetes, and my projects to deploy to it.
WIP: post not yet finalized.
This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.
This is a descriptive post and contains nothing technical on the actual setting up of this infrastructure.
That will come in future posts.
Services / Websites I use
Digital Ocean
Within Digital Ocean, I use their managed Kubernetes, managed database, DNS, S3-compatible Spaces with CDN, and container registry.
GitHub
GitHub hosts the origin repository for all of my IaC code and project code. I also use the Actions CI features for automated tests and deployments.
Terraform
I use Terraform for creating my infrastructure, along with Terraform cloud for hosting my Terraform state files.
Setting up the infrastructure
I firstly set up my infrastructure in Digital Ocean and Github using Terraform.
This infrastructure includes these resources in Digital Ocean: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also includes two Actions secrets in GitHub: the Digital Ocean access token and the Digital Ocean registry endpoint.
After the initial infrastructure is set up (the Kubernetes cluster specifically), I then use Helm to install the nginx-ingress-controller into the cluster.
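That Helm step looks something like this (the release name and namespace here are my own illustrative choices):

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```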
Setting up a Laravel project
I use Laravel Sail for local development.
For deployments I write a separate Dockerfile which builds off of a php-fpm container.
Any environment variables I need, I add them as a Kubernetes secret via the kubectl command from my local machine.
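As a sketch of that (the secret name and keys are placeholders, not my actual values):

```bash
kubectl create secret generic laravel-env \
  --from-literal=APP_KEY="base64:changeme" \
  --from-literal=DB_PASSWORD="changeme"
```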
Kubernetes deployment file
Everything my Kubernetes cluster needs to know to deploy my Laravel project lives in a deployment.yml file in the project itself.
This file is used by the Github action responsible for deploying the project.
Github action workflows
I add two workflow files for the project inside the ./.github/workflows/ directory. These are:
ci.yml
This file runs the full test suite, along with pint and larastan.
deploy.yml
This file is triggered only on the main branch, after the Tests (ci) action has completed successfully.
It will build the container image and tag it with the current git sha.
Following that, it will install doctl and authenticate with my Digital Ocean account, using the Actions secret for the access token that I added during the initial Terraform stage.
Then it pushes that image to my Digital Ocean container registry.
The next step does a find and replace on the project’s deployment.yml file. I’ve included a snippet of that file below:
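A minimal sketch of the relevant part of that file (only the <IMAGE> placeholder is from the original; the names here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: laravel-app
          image: <IMAGE>
```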
It replaces that <IMAGE> placeholder with the full path to the newly-created image, using the other GitHub secret that was added in the Terraform stage: the Digital Ocean registry endpoint.
Finally it sets up access to the Kubernetes cluster using the authenticated doctl command, before running the deployment.yml file with the kubectl command. After which, it just does a check to see that the deployment was a success.
Automating backups of Docker volumes from a Linux server to Digital Ocean spaces.
Backups are a must for pretty much anything digital. And automating those backups makes life so much easier, should you ever lose your data.
My use case
My own use case is to backup the data on my home server, since these are storing my music collection and my family’s photos and documents.
All of the services on my home server are installed with Docker, with all of the data in separate Docker volumes. This means I should only need to back up those folders that get mounted into the containers, since the services themselves can easily be re-deployed.
I also want this data to be encrypted, since I will be keeping both an offline local copy, as well as storing a copy in a third party cloud provider (Digital Ocean spaces).
Setting up s3cmd
S3cmd is a command line utility for interacting with an S3-compliant storage system.
It will enable me to send a copy of my data to my Digital Ocean Spaces account, encrypting it beforehand.
And for my home server, which is running Ubuntu Server, I installed it via Python’s package manager, “pip”:
sudo pip install s3cmd
Configuring s3cmd
Once installed, the first step is to run through the configuration steps with this command:
s3cmd --configure
Then answer the questions that it asks you.
You’ll need these items to complete the steps:
Access Key (for digital ocean api)
Secret Key (for digital ocean api)
S3 endpoint (e.g. lon1.digitaloceanspaces.com)
DNS-style (I use %(bucket)s.ams3.digitaloceanspaces.com)
Encryption password (remember this as you’ll need it for whenever you need to decrypt your data)
The other options should be fine as their default values.
Your configuration will be stored as a plain text file at ~/.s3cfg. This includes that encryption password.
Automation script for backing up docker volume data
Since all of the data I actually care about on my server will be in directories that get mounted into docker containers, I only need to compress and encrypt those directories for backing up.
If ever I need to re-install my server I can just start all of the fresh docker containers, then move my latest backups to the correct path on the new server.
Here is my bash script that will archive, compress and push my data to backup over to Digital Ocean spaces (encrypting it via GPG before sending it).
I have added comments above each section to try and make it more clear as to what each step is doing:
#!/usr/bin/bash
## Root directory where all my backups are kept.
basepath="/home/david/backups"
## Variables for use below.
appname="nextcloud"
volume_from="nextcloud-aio-nextcloud"
container_path="/mnt/ncdata"
## Ensure the backup folder for the service exists.
mkdir -p "$basepath"/"$appname"
## Get current timestamp for backup naming.
datetime=$(date +"%Y-%m-%d-%H-%M-%S")
## Start a new ubuntu container, mounting all the volumes from my nextcloud container
## (I use Nextcloud All in One, so my Nextcloud service is called "nextcloud-aio-nextcloud")
## Also mount the local "$basepath"/"$appname" to the ubuntu container's "/backups" path.
## Once the ubuntu container starts it will run the tar command, creating the tar archive from
## the contents of the "$container_path", which is from the Nextcloud volume I mounted with
## the --volumes-from flag.
docker run \
--rm \
--volumes-from "$volume_from" \
-v "$basepath"/"$appname":/backups \
ubuntu \
tar cvzf /backups/"$appname"-data-"$datetime".tar.gz "$container_path"
## Now I use the s3cmd command to move that newly-created
## backup tar archive to my Digital Ocean spaces.
s3cmd -e put \
"$basepath"/"$appname"/"$appname"-data-"$datetime".tar.gz \
s3://scottie/"$appname"/
Automating the backup with a cronjob
Cron jobs are a way to automate any tasks you want to on a Linux system.
You can have fine-grained control over how often you want to run a task.
Although working with Linux’s cron scheduler is outside the scope of this guide, I will share the setting I have for my Nextcloud backup, along with a brief explanation of its configuration.
The command to edit what cron jobs are running on a Linux system, Ubuntu in my case, is:
crontab -e
This will open up a temporary file to edit, which will get written to the actual cron file when saved — provided it is syntactically correct.
This is the setting I have in mine for my Nextcloud backup (it should all be on a single line):
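It looks like this (the log file path here is illustrative):

```bash
10 3 * * 1,4 /home/david/backup-nextcloud >> /home/david/logs/backup-nextcloud.log 2>&1
```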
The numbers and asterisks are telling cron when the given command should run:
10: minute 10
3: hour 3
*: day of month (not relevant here)
*: month (not relevant here)
1,4: day of the week (Monday and Thursday)
So my configuration there says it will run the /home/david/backup-nextcloud command every Monday and Thursday at 3:10am. It will then pipe the command’s output into my log file for my Nextcloud backups.
Decrypting your backups
Download the file from your Digital Ocean spaces account.
Go into the directory it is downloaded to and run the file command on the archive:
# For example
file nextcloud-data-2023-11-17-03-10-01.tar.gz
# You should get something like the following feedback:
nextcloud-data-2023-11-17-03-10-01.tar.gz: GPG symmetrically encrypted data (AES256 cipher)
You can decrypt the archive with the following command:
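Since the s3cmd -e flag encrypts with GPG symmetric encryption, gpg can decrypt it. Using the example filename from above (--output names the decrypted copy; gpg will prompt for the encryption password you chose during s3cmd --configure):

```bash
gpg --output nextcloud-data-2023-11-17-03-10-01.tar.gz.decrypted \
    --decrypt nextcloud-data-2023-11-17-03-10-01.tar.gz
```

The decrypted file is then a normal tar.gz archive that you can extract as usual.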
By trade I am a PHP developer. I’ve never done devops in a professional setting. However, for a while I have had a strange fascination with various continuous integration and deployment strategies I’ve seen at many of my places of work.
I’ve seen some very complicated setups over the years, which has created a mental block for me to really dig in and understand setting up integration and deployment workflows.
But in my current role at Geomiq, I had the opportunity of being shown a possible setup — specifically using Kubernetes. And that was sort of a gateway drug, which finally led me to getting a working workflow up and running.
I now want to start sharing what I have learnt and build out a fully-fledged deployment workflow. Not sure how many posts it will take, or what structure it will take, but my aim is to make devops and CI/CD as approachable as possible.
In this guide I’ll show you a way to get started with Terraform — specifically with Digital Ocean.
Terraform is a program that can be used to build your cloud-based infrastructure from configuration files that you write. It’s part of what is referred to as “Infrastructure as Code (IaC)”.
Instead of going into various cloud provider UI dashboards and clicking around to build your resources, Terraform can do all that provisioning for you. It uses the cloud provider APIs behind the scenes; you just declare exactly the infrastructure that you want to end up with.
In this guide, we will provision a simple Digital Ocean Server (a Droplet in Digital Ocean parlance) using Terraform from our local terminal.
If you don’t yet have a Digital Ocean account, feel free to use my referral link to set one up. With that link you’ll get $200 in credit to use over 60 days.
Setting up Terraform in 4 steps
1 :: Install terraform
Terraform is available to install from pretty much all package repositories out there.
Installing it should be as simple as running a one-line command in your terminal.
2 :: Configure any required cloud provider API tokens
In order to let the Terraform program make changes to your cloud provider account, you will need to set up API tokens and tell Terraform where to find them.
In this post I’ll only be setting up a single one for Digital Ocean.
3 :: Write your main.tf configuration file
A single main.tf file will be enough to get you something working.
Add all of your needed resources / infrastructure in it.
4 :: Run the apply command
By running the terraform apply command against your main.tf file, you can turn your empty cloud infrastructure into a working setup.
Step 1 :: Install Terraform
Terraform’s documentation details the numerous ways of getting it installed across operating systems.
I use Arch Linux and so install it like so:
Bash
sudo pacman -Sy terraform
You can check it is installed and discoverable on your system by checking the version you have installed:
Bash
terraform -v

# My Output
Terraform v1.6.4
on linux_amd64
Now create an empty directory, which will be your “terraform project”. It doesn’t matter what you call the folder.
Then inside that directory create a file called main.tf. We’ll come back to this file a little later.
Step 2 :: Configure any required cloud provider API tokens
Head to your Digital Ocean API Tokens dashboard and click “Generate New Token”. Give it a name, choose an expiry and make sure you click the “write” permission option. Click “generate token”.
There are a number of ways we can tell Terraform what our Digital Ocean API Token is:
Obviously we could hard code it for the purposes of just getting it running while learning, though I wouldn’t recommend this approach even in testing.
Another is to use Terraform-specific environment variables set on your system. This has been my approach in the past. However, I came to realize how unsafe this was, as every program you install has the potential to read your environment variables.
A third way is to pass it as a parameter when calling the apply command.
I will be opting for that third option, but I don’t want to have that token saved in my history or have to pass it in every time I want to run a Terraform command.
So my solution is to write a small wrapper bash script that will read the contents of a file in my home directory (with my token in) and pass it as an argument to the Terraform apply command.
Creating a wrapper bash script to safely pass secret token to command
Create a file in your home directory called “terraform-test”. You can call it anything, just remember to reference it correctly when using it later in the guide.
Inside that file, paste only the API token that you got from your Digital Ocean API dashboard. Then save the file and close it.
Open a new file in the root of your Terraform project and add the following contents:
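A minimal version of that wrapper (I’m assuming the name myterraformwrapper, which is used later in this guide, and the ~/terraform-test token file from the previous step) can be created like so:

```shell
# Create the wrapper script. The quoted 'EOF' stops the shell expanding
# anything now; expansion happens when the wrapper itself runs.
cat > myterraformwrapper <<'EOF'
#!/usr/bin/bash
# Pass all arguments through to terraform, appending the Digital Ocean
# token read from the file in the home directory.
terraform "$@" -var "do_token=$(cat ~/terraform-test)"
EOF

# Make it executable.
chmod +x myterraformwrapper
```

The -var flag is simply appended to whichever Terraform subcommand you run through the wrapper.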
This means that you don’t have to keep passing your Digital Ocean token in for every command, and you won’t end up accidentally leaking the token inside your shell’s environment variables.
We will use that file later in this guide.
Step 3 :: Write your main.tf configuration file
For this example, everything will be kept in a single file called main.tf. When you start working on bigger infrastructure plans, there is nothing stopping you from splitting out your configuration into multiple, single-purpose files.
At the top of the file is the terraform block. This sets up the various providers that we want to work with for building out our infrastructure. In this example we only need the digital ocean one.
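Something like this (the provider version pin is just an example):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
```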
variable declarations
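In its simplest form, that declaration is just:

```hcl
variable "do_token" {}
```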
Variable declarations can be used to keep sensitive information out of our configuration (and thus out of source control later), as well as making our configuration more reusable.
Each of the variables that our configuration needs to run must be defined as a variable like above. You can define variables in a few different ways, but here I have opted for the simplest.
We can see that all our configuration needs is a do_token value passed to it.
provider setups
Each of the providers that we declare in our terraform block will probably need some kind of setup — such as an api token like our Digital Ocean example.
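For Digital Ocean, that setup block looks like this:

```hcl
provider "digitalocean" {
  token = var.do_token
}
```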
For us, we can see that setting up Digital Ocean’s provider needs only a token, which we pass in from the variable supplied via the CLI command.
resource declarations
We then declare the “resources” that we want Terraform to create for us in our Digital Ocean account. In this case we just want it to create a single small droplet as a proof of concept.
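A sketch of that resource (the name and values here are illustrative; the slugs are valid Digital Ocean options at the time of writing):

```hcl
resource "digitalocean_droplet" "test" {
  image  = "ubuntu-22-04-x64"
  name   = "terraform-test-droplet"
  region = "lon1"
  size   = "s-1vcpu-1gb"
}
```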
The values I have passed to the digitalocean_droplet resource would be great examples of where to use variables, potentially even with default placeholder values.
I have hard-coded the values here for brevity.
Step 4 :: Run the apply command
Before running apply for the first time, we first need to initialize the project:
Bash
terraform init

# You should see some feedback starting with this:
Terraform has been successfully initialized!
You can also run terraform plan before the apply command to see what Terraform will be provisioning for you. However, when running terraform apply, it shows you the plan and asks for explicit confirmation before building anything. So I rarely use plan.
If you run terraform apply, it will prompt you for any variables that your main.tf requires — in our case the do_token variable. We could type it / paste it in every time we want to run a command. But a more elegant solution would be to use that custom bash script we created earlier.
Assuming that bash script is in our current directory — the Terraform project folder — run the following:
Bash
./myterraformwrapper apply
This should display to you what it is planning to provision in your Digital Ocean account — a single Droplet.
Type the word “yes” and hit enter.
You should now see it giving you a status update every 10 seconds, ending in confirmation of the droplet being created.
If you head back over to your Digital Ocean account dashboard, you should see that new droplet sitting there.
Step 5 :: Bonus: destroying resources.
Just as Terraform can be used to create those resources, it can also be used to destroy them too. It goes without saying that you should always be mindful of just what you are destroying, but in this example we are just playing with a test droplet.
Run the following to destroy your newly-created droplet:
Bash
./myterraformwrapper destroy
Again, it will first show you what it is planning to change in your account — the destruction of that single droplet.
Type “yes” and hit enter to accept.
Next Steps
I love playing with Terraform, and will be sharing anything that I learn along my journey on my website.
You could start working through Terraform’s documentation to get a taste of what it can do for you.
You can even take a look at its excellent registry to see all of the providers that are available. Maybe even dig deep into the Digital Ocean provider documentation and see all of the available resources you could play with.
Just be careful how much you are creating and when testing don’t forget to run the destroy command when you’re done. The whole point of storing your infrastructure as code is that it is dead simple to provision and destroy it all.
Just don’t go leaving test resources up and potentially running up a huge bill for yourself.
I’ve seen some very elaborate homelab set-ups online but wanted to get the easiest possible implementation I could, within my current skill set.
As I have quite a lot of experience with using Docker for development in my day-to-day work, I thought I’d just try using docker compose to set up my homelab services.
What is docker?
Docker is a piece of software that allows you to package up your services / apps into “containers”, along with any dependencies that they need to run.
What this means for you, is that you can define all of the things you need to make your specific app work in a configuration file, called a Dockerfile. When the container is then built, it builds it with all of the dependencies that you specify.
This is opposed to the older way of setting up a service / app / website: installing the required dependencies manually on the host server itself.
By setting up services using docker (and its companion tool docker compose), you remove the need to install those dependencies manually yourself.
Not only that, but if different services that you install require different versions of the same dependencies, containers keep those different versions separate.
Once docker and docker compose are installed on the server, I can then use a single configuration file for each of the services I want to put into my Home Lab. This means I don’t need to worry about the dependencies that those services need to work — because they are in their own containers, they are self-contained and need nothing to be added to the host system.
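As a sketch, a single service’s docker-compose.yml might look like this (the service, image, port and volume names are illustrative):

```yaml
services:
  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - nextcloud-data:/var/www/html

volumes:
  nextcloud-data:
```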
There are services that can help you manage docker too. But that was one step too far outside of my comfort zone for what I want to get working right now.
I will, however, be installing a service called “Portainer”, detailed in my next Home Lab post, which gives you a UI in which to look at the docker services you have running.
Most of the videos I’ve seen for Homelab-related guides and reviews tend to revolve around Proxmox and/or TrueNAS. I have no experience with either of those, but I do have experience with Docker, so I am opting to go with straight up docker — at least for now.
Setting up the Operating system
I’m using a Linux-based system and so instructions are based on this.
Step 1: Download the Ubuntu Server iso image
Head to Ubuntu’s website to download your preferred version of Ubuntu Server. I chose the latest LTS version at the time of writing (22.04).
Step 2: Create a bootable USB stick with the iso image you downloaded.
Once downloaded, insert a USB stick to install the Ubuntu Server iso onto.
Firstly, check where your USB stick is on your filesystem. For that, I use fdisk:
Bash
sudo fdisk -l
Assuming the USB stick is located at “/dev/sdb“, I use the dd command to create my bootable USB (please check and double check where your USB is mounted on your system):
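A sketch of that dd step (replace the iso filename with the one you downloaded, and triple-check the target device, as dd will overwrite it without asking):

```bash
sudo dd if=ubuntu-22.04-live-server-amd64.iso of=/dev/sdb bs=4M status=progress && sync
```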
Step 3: Insert the bootable USB stick into the Homelab computer and boot from it
Boot the computer that you’re using for your server, using the USB stick as a temporary boot device.
Step 4: Install the operating system
Follow the steps that the set up guide gives you.
As an aside, I set my server ssd drive up with the “LVM” option. This has helped immensely this week, as I have added a second drive and doubled my capacity to 440GB.
Step 5: install and enable ssh remote access
I can’t remember if ssh came installed or enabled, but you can install openssh and then enable the sshd service.
You can then connect to the server from a device on your network with:
Bash
ssh username@192.168.0.77
This assumes your server’s IP address is 192.168.0.77. Chances are very high it’ll be a different number (although the 192.168.0 section may be correct).
Everything else done remotely
I have an external keyboard in case I ever need to plug in to my server. However, now I have ssh enabled, I tend to just connect from my laptop using the ssh command shown just above.
I’ve opted for what I believe is the easiest, and cheapest, method of setting up my Homelab.
I’m using my old work PC which has the following spec:
Quad core processor — i7, I think.
16gb of RAM
440GB ssd storage (2x 220gb in an LVM setup)
A USB plug-in network adapter (really want to upgrade to an internal one though)
My Homelab Goals
My homelab goals are centered around two fundamental tenets: lower cost for online services and privacy.
I want to be:
Hosting my own personal media backups: All my personal photos and videos I want stored in my own installation of Nextcloud. Along with those I want to also utilize its organizational apps too: calendar; todos; project planning; contacts.
Hosting my own music collection: despite hating everything Google stands for, I do enjoy using its YouTube Music service. However, I have many CDs (yes, CDs) in the loft and don’t like the idea of essentially renting access to music. Plus it would be nice to stream music to offline smart speakers (i.e. not Alexa, Google Speaker, et al.)
Hosting old DVD films: I have lots of DVDs in the loft and would like to be able to watch them (without having to buy a new DVD player)
Learning more about networking: configuring my own network is enjoyable to me and is something I want to increase my knowledge in. Hosting my own services for my family and myself is a great way to do this.
Teach my Son how to own and control his own digital identity (he’s 7 months old): I want my Son to be armed with the knowledge of modern day digital existence and the privacy nightmares that engulf 95% of the web. And I want Him to have the knowledge and ability to be able to control his own data and identity, should He wish to when he’s older.
Documenting my journey
I will be documenting my Homelab journey as best as I can, and will tag all of these posts with the category of Homelab.
I’m now running pi-hole through my Raspberry Pi 2b.
It’s both amazing and depressing just how many trackers are being blocked by it. I even noticed a regular ping being made to an Amazon endpoint exactly every 10 minutes.
I will try and write up my set up soon, which is a mix of setting up the Raspberry Pi and configuring my home router.
I’ve also managed to finally get a home server running again – using Ubuntu Server LTS.
My plan on my server is to just install services I want to self-host using docker. Docker being the only program I’ve installed on the machine itself.
So far I have installed the following:
Home Assistant — On initial playing with this I have decided that it’s incredible. Connected to my LG TV and lets me control it from the app / laptop.
Portainer — A graphical way to interact with my docker containers on the server.
This is my first data visualization attempt and uses data from HM Land Registry to show the average cost of a semi-detached house in four counties across the past ten years.
When I first moved my Neovim configuration over to using lua, as opposed to the more traditional vimscript, I thought I was clever separating it up into many files and includes.
Turns out that it became annoying to edit my configuration. Not difficult; just faffy.
So I decided to just stick it all into a single init.lua file. And now it’s much nicer to work with, in my opinion.
The structure of a newly-initialized Lupo website project is as follows:
Bash
.
./html/
./src/
./src/style.css
./templates/
./tmp/
All of your website source code lives within the ./src directory. This is where you structure your website however you want it to be structured in the final html.
You can write your pages / posts in markdown and lupo will convert them when building.
When building it into the final html, lupo will copy the structure of your ./src directory into your ./html directory, converting any markdown files (any files ending in .md) into html files.
Any JavaScript or CSS files are left alone and copied over in the same directory relative to the ./html root.
Starting a lupo website
Create a directory that you want to be your website project, and initialize it as a Lupo project:
Bash
mkdir ./my-website
cd ./my-website
lupo init
The init command will create the required directories, including a file located at $HOME/.config/lupo/config.
You don’t need to worry about the config file just yet.
Create your homepage file and add some text to it:
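For example (the heading text is arbitrary; mkdir -p just makes the snippet self-contained):

```shell
# Create the source directory if it isn't there already,
# then write a simple markdown homepage into it.
mkdir -p ./src
echo "# My Homepage" > ./src/index.md
```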
Now just run the build command to generate the final html:
Bash
lupo build
You should now have two files in your ./html directory: an index.html file and a style.css file.
The index.html was converted from your ./src/index.md file and moved into the root of the ./html directory. The style.css file was copied over verbatim to the html directory.
Viewing your site locally
Lupo doesn’t currently have a way to launch a local webserver, but you could open a browser and point the address bar to the root of your project ./html folder.
I use an nginx docker image to preview my site locally, and will build this functionality into lupo soon.
Page metadata
Each markdown page that you create can have an optional metadata section at the top of the page. This is known as “frontmatter”. Here is an example you could add to the top of your ./src/index.md file:
Markdown
---
title: My Super Homepage
---

Here is the normal page content
That will set the page’s title to “My Super Homepage”. This will also make the %title% variable available in your template files. (More on templates further down the page)
If you re-run the lupo build command, and look again at your homepage, you should now see an <h1> tag with your title inside.
The Index page
You can generate an index of all of your pages with the index command:
Bash
lupo index
lupo build
Once you’ve built the website after running index, you will see a file at ./html/index/index.html. This is a simple index / archive of all of the pages on your website.
For pages with a title set in their metadata block, that title will be used in the index listing. For any pages without a title set, the uri to the page will be used instead.
Tag index pages
Within your page metadata block, you can also define a list of “tags” like so:
Markdown
---
title: My Super Page
tags:
    - tagone
    - tagtwo
    - anotherone
---

The page content.
When you run the lupo index command, it will also go through all of your pages and use the tags to generate “tag index pages”.
These are located at the following location/uri: ./html/tags/tagname/index.html.
These tag index pages will list all pages that contain that index’s tag.
Customizing your website
Lupo is very basic and doesn’t offer that much in the way of customization. And that is intentional – I built it as a simple tool for me and just wanted to share it with anyone else that may be interested.
That being said, there are currently two template files within the ./templates directory:
tags.template.html is used when generating the “tag index” pages and the main “index” page.
default.template.html is used for all other pages.
I am planning to add some flexibility to this in the near future and will update this page when added.
You are free to customize the templates as you want. And of course you can go wild with your CSS.
I’m also considering adding an opt-in css compile step to enable the use of something like sass.
New post helper
To help with the boilerplate of adding a new “post”, I added the following command:
Bash
lupo post
When ran, it will ask you for a title. Once answered, it will generate the post src file and pre-fill the metadata block with that title and the current date and timestamp.
The post will be created at the following location:
Bash
./src/{year}/{month}/{date}/{timestamp}/{url-friendly-title}

# For example:
./src/2023/08/30/1693385086/lupo-static-site-generator/index.html
Page edit helper
At present, this requires you to have fzf installed. I am looking to replace that dependency with the find command.
To help find a page you want to edit, you can run the following command:
Bash
lupo edit
This will open up a fuzzy search finder where you can type to search for the page you want to edit.
The results will narrow down as you type.
When you press enter, it will attempt to open that source page in your system’s default editor, as defined in your $EDITOR environment variable.
Automatic rebuild on save
This requires you to have inotifywait installed.
Sometimes you will be working on a longer-form page or post, and want to refresh the browser to see your changes as you write it.
It quickly becomes tedious to have to keep running lupo build to see those changes.
So running the following command will “watch” your ./src directory for any changes, and rebuild any file that is altered in any way. It will only rebuild that single file; not the entire project.
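That command is:

```bash
lupo watch
```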
Deploying to a server
This requires you to have rsync installed.
This assumes that you have a server setup and ready to host a static html website.
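The push command is:

```bash
lupo push
```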
With any luck you should see the feedback for the files pushed to your remote server.
Assuming you have set up your domain name to point to your server correctly, you should be able to visit your website in a browser and see your newly-deployed website.
Going live
This is an experimental feature
If you’ve got the lupo watch and lupo push commands working, then the live command should also work:
Bash
lupo live
This will watch your project for changes, and recompile each updated page and push it to your server as it is saved.
The feedback is a bit verbose currently and the logic needs making a bit smarter. But it does currently work in its initial form.
The main logic for this Ansible configuration happens in the setup.yml file. This file can be called whatever you like as we’ll call it by name later on.
Installing Ansible
You can install Ansible with your package manager of choice.
I install it using pacman on Arch Linux:
Bash
sudo pacman -S ansible
The inventory.yml file
The inventory file is where I have set the relative configuration needed for the playbook.
The all key contains all of the host configurations (although I’m only using a single one).
Within that all key is vars.ansible_ssh_private_key_file which is just the local path to the ssh private key used to access the server.
This is the key I set up with Terraform in the previous guide.
Then the hosts key just contains the hosts I want to be able to target (I’m using the domain name that I set up in the previous Terraform guide).
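Putting that together, the inventory.yml looks something like this sketch (the key filename matches the one I use to ssh in later; adjust to your own):

```yaml
all:
  vars:
    ansible_ssh_private_key_file: ~/.ssh/id_rsa.davidpeachme
  hosts:
    zet.davidpeach.me:
```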
The setup.yml file explained
The setup.yml file is what is known as an “Ansible Playbook”.
From my limited working knowledge of Ansible, a playbook is basically a set of tasks that are run against a server or a collection of servers.
In my case I am currently running it against a single server, which I am targeting via its domain name of “zet.davidpeach.me”.
hosts: all tells it to run against all hosts that are defined in the ./inventory.yml file.
become: true says that Ansible will switch to the root user on the server (defined on the next line with user: root) before running the playbook tasks.
The vars_files: part lets you set relative paths to files containing variables that are used in the playbook and inside the file ./files/nginx.conf.j2.
I won’t go through each of the variables, but hopefully you can see what they are doing.
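As a rough sketch of the pieces described above, the top of setup.yml would look something like this (the vars/default.yml path is the one referenced later in this guide):

```yaml
- hosts: all
  become: true
  user: root
  vars_files:
    - vars/default.yml
  tasks:
    # ...the playbook tasks follow here...
```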
The Playbook Tasks
Each of the tasks in the Playbook has a descriptive title that hopefully does well in explaining what the steps are doing.
The key-value pairs of configuration after each of the task titles are pre-defined settings available to use in Ansible.
The tasks read from top to bottom and essentially automate the steps that normally need to be manually done when preparing a server.
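Assuming the file names used in this guide, the playbook is started with a command along these lines:

```shell
# Run the setup.yml playbook against the hosts listed in inventory.yml
ansible-playbook -i inventory.yml setup.yml
```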
This command should start Ansible off. You should get the usual message about trusting the target host when first connecting to the server. Just answer “yes” and press enter.
You should now see the output for each step defined in the playbook.
The server should now be ready to deploy to.
Testing your webserver
In ./files/nginx.conf.j2 there is a root directive on line 3. For me this is set to /var/www/{{ http_host }}. (http_host is a variable set in the vars/default.yml file.)
SSH onto the server, using the private key from the keypair I am using (see the Terraform guide for reference).
Bash
ssh -i ~/.ssh/id_rsa.davidpeachme zet.davidpeach.me
Then on the server, create a basic index.html file in the website root defined in the default nginx file:
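For example, assuming http_host is set to zet.davidpeach.me as in this guide, something like this on the server (as root) creates that file:

```shell
# Create the web root (if needed) and drop in a minimal index page
mkdir -p /var/www/zet.davidpeach.me
echo "hello world" > /var/www/zet.davidpeach.me/index.html
```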
Now, going to your website url in a browser, you should be able to see the text “hello world” in the top left.
The server is ready to host a static html website.
Next Step
You can use whatever method you prefer to get your html files on to your server.
You could use rsync, scp, an overly-complicated CI pipeline, or – if you’re using lupo – you could have lupo deploy it straight to your server for you.
Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider’s dashboard and manually clicking buttons and setting up things yourself.
This is known as “Infrastructure as Code”.
It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.
This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers setup to point to Digital Ocean.
You can then build upon those foundations and work on building out your own desired infrastructures.
The Terraform Flow
As a brief outline, here is what will happen when working with Terraform; hopefully it gives you a broad picture, from which I can fill in the blanks below.
Firstly we write a configuration file that defines the infrastructure that we want.
Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
Finally we run the terraform plan command to test our infrastructure configuration, and then terraform apply to make it all live.
Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then enter them when prompted by the terraform command. This is slightly more long-winded than just setting a Terraform-specific env var in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.
Creating an ssh key
In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought I’d create a key pair specific for my website deployment.
Bash
ssh-keygen -t rsa
I give it a different name so as to not override my standard id_rsa one. I call it id_rsa.davidpeachme just so I know which is my website server one at a glance.
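The custom file name is set with ssh-keygen’s -f flag; the name here is the one used throughout this guide:

```shell
# Generate a deployment-specific RSA keypair with its own file name
ssh-keygen -t rsa -f ~/.ssh/id_rsa.davidpeachme
```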
Describing your desired infrastructure with code
Terraform uses a declarative language, as opposed to an imperative one.
What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.
You don’t need to be concerned with the nitty gritty of how it is achieved.
I have a real-life example that will show you exactly what a minimal configuration can look like.
The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.
Since I’m only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.
The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.
Following that are the variables that I have defined myself in the ./terraform.tfvars file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.
The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.
Here is the first resource that I am telling Terraform to create. It takes a public key on my local filesystem and sends it to Digital Ocean.
This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.
I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.
I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.
Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.
HCL
data "digitalocean_domain" "domain" {
  name = var.domain_name
}
This block is a little different. Here I am using the data property to grab information about something that already exists in my Digital Ocean account.
I have already set up my domain in Digital Ocean’s networking area.
This is the overarching domain itself – not the specific A record that will point to the server.
The reason I’m doing it this way is that I already have mailbox settings and TXT records that are working, so I don’t want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run terraform destroy.
The final block creates the actual A record with my existing domain settings.
It uses the domain id given back by the data block I defined above, and the IP address of the created droplet for the A record value.
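For reference, a minimal version of that block could look like the sketch below. The droplet resource name “web” and the record name “@” are assumptions here; match them to your own droplet block and desired hostname:

```hcl
resource "digitalocean_record" "www" {
  domain = data.digitalocean_domain.domain.id
  type   = "A"
  name   = "@"  # "@" means the apex of the domain
  value  = digitalocean_droplet.web.ipv4_address
}
```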
Testing and Running the config to create the infrastructure
If you now go into the root of your terraform project and run the following command, you should see it displays a write up of what it intends to create:
Bash
terraform plan
If the output looks okay to you, then type the following command and enter “yes” when it asks you:
Bash
terraform apply
This should create the three items of infrastructure we have defined.
Next Step
Next we need to set that server up with the required software needed to run a static html website.
I will be doing this with a program called Ansible.
I’ll be writing up those steps in a zet very soon.
You can set these in your ~/.bashrc file. See mine in my dotfiles as a fuller example.
However, I recently came to want greater control over my development workflow. And so, with the help of videos by rwxrob, I came to embrace the idea of learning bash, and writing my own little scripts to help in various places in my workflow.
A custom bash script
For the example here, I’ll use the action of wanting to “exec” on to a local docker container.
Sometimes you’ll want to get into a shell within a local docker container to test / debug things.
I found I was repeating the same steps to do this and so I made a little script.
In order to better understand this script I’ll assume no prior knowledge and explain some bash concepts along the way.
The shebang line
The first line is the “shebang”. It basically tells your shell which binary should execute this script when it is run.
For example you could write a valid php script and add #!/usr/bin/php at the top, which would tell the shell to use your php binary to interpret the script.
So #!/usr/bin/bash means we are writing a bash script.
Pipes
The pipe symbol: |.
In brief, a “pipe” in bash is a way to pass the output of the left hand command to the input of the right hand command.
So the commands in the script are run in this order:
docker container ls
fzf
awk '{print $1}'
xargs -o -I % docker exec -it % bash
docker container ls
This gives us the list of currently-running containers on our system. The output is a list like so (I’ve used an image, as the formatting gets messed up when pasted into a post as text):
fzf
So the output of the docker container ls command above is the table in the image above, which is several rows of text.
fzf is a “fuzzy finder” tool, which can be passed a list of pretty much anything, which can then be searched over by “fuzzy searching” the list.
In this case the list is each row of that output (header row included).
When you select (press enter) on your chosen row, that row of text is returned as the output of the command.
In this image example you can see I’ve typed in “app” to search for, and it has highlighted the closest matching row.
awk '{print $1}'
awk is an extremely powerful tool, built into linux distributions, that allows you to parse structured text and return specific parts of that text.
'{print $1}' is saying “take whatever input I’m given, split it up based on a delimiter, and return the item that is 1st ($1)”.
The default delimiter is a space. So looking at that previous image example, the first piece of text in each container row is the container ID: “df96280be3ad” for the app container chosen just above.
So pressing enter on that row in fzf will pass it to awk, which will then split the row up by spaces and return the first element from that internal array of text items.
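You can see awk’s behaviour in isolation by piping it a single made-up row shaped like docker’s output:

```shell
# The first space-separated field is the container ID
row='df96280be3ad php:8.2-fpm "docker-php-entrypoint" app'
echo "$row" | awk '{print $1}'   # prints: df96280be3ad
```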
xargs -o -I % docker exec -it % bash
xargs is another powerful tool, which enables you to pass whatever it is given as input into another command. I’ll break it down further to explain the flow:
The beginning of the xargs command is as so:
Bash
xargs -o -I %
-o is needed when running an “interactive application”. Since our goal is to “exec” onto the docker container we choose, interactive is what we need. -o means “open stdin (standard in) as /dev/tty in the child process before executing the command”.
Next, -I % is us telling xargs: “when you next see the % character, replace it with what we give you as input”. Which in this case will be the docker container ID returned from the awk command previously.
So when you replace the % character in the command that we are giving xargs, it will read as such:
Bash
docker exec -it df96280be3ad bash
This will “exec” onto that docker container and immediately run bash in that container.
Goal complete.
Put it in a script file
So all that’s needed now, is to have that full set of piped commands in an executable script:
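Putting the pieces above together, the whole script is just the shebang plus the single piped command (shown here being written out to a file):

```shell
# Write the full pipeline into a script file called d8exec
cat > d8exec <<'EOF'
#!/usr/bin/bash
docker container ls | fzf | awk '{print $1}' | xargs -o -I % docker exec -it % bash
EOF
```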
My own version of this script is in a file called d8exec, which after saving it I ran:
Bash
chmod +x ./d8exec
Call the script
In order to be able to call your script from anywhere in your terminal, you just need to add the script to a directory that is in your $PATH. I keep mine at ~/.local/bin/, which is pretty standard for a user’s own scripts in Linux.
You can see how I set my own in my .bashrc file here. The section that reads $HOME/.local/bin is the relevant piece. Each folder that is added to the $PATH is separated by the : character.
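For reference, the relevant line in a .bashrc looks something like this (prepending, so your own scripts take precedence over system ones):

```shell
# Add the personal scripts directory to PATH; entries are separated by ":"
export PATH="$HOME/.local/bin:$PATH"
```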
Feel free to explore further
You can look over all of my own little scripts in my bin folder for more inspiration for your own bash adventures.
Have fun. And don’t put anything into your scripts that you wouldn’t want others seeing (API keys / secrets etc.).
If you want to do it on a repository by repository basis, you can run it from within each project, and omit the --global flag:
git config user.signingkey THIS0IS0YOUR0KEY0ID
Signing your commits
You can either set commit signing to true for all projects as the default, or on a repo-by-repo basis.
# global
git config --global commit.gpgsign true
# local
git config commit.gpgsign true
If you wanted to, you could even decide to sign commits per each commit, by not setting it as a config setting, but passing a flag on every commit:
git commit -S -m "My signed commit message"
Adding your public key to gitlab / github / wherever
Firstly export the public part of your key using your key id. Again, using the example key id from above:
# Show your public key in terminal
gpg --armor --export THIS0IS0YOUR0KEY0ID
# Copy straight to your system clipboard using "xclip"
gpg --armor --export THIS0IS0YOUR0KEY0ID | xclip -sel clipboard
This will spit out a large key text block beginning and ending with comments. Copy all of the text that it gives you and paste it into the GPG textbox in your git forge of choice – GitLab / GitHub / Gitea / etc.
This post is currently in-progress, and is more of a brain-dump right now. But I like to share as often as I can otherwise I’d never share anything 🙂
Please view the official Vimwiki Github repository for up-to-date details of Vimwiki usage and installation. This page just documents my own processes at the time.
Installation
Add the following to plugins.lua
use "vimwiki/vimwiki"
Run the following two commands separately in the neovim command line:
:PackerSync
:PackerInstall
Close and re-open Neovim.
How I configure Vimwiki
I have 2 separate wikis set up in my Neovim.
One for my personal homepage and one for my commonplace site.
I set these up by adding the following in my dotfiles, at the following position: $NEOVIM_CONFIG_ROOT/after/plugin/vimwiki.lua. So for me that would be ~/.config/nvim/after/plugin/vimwiki.lua.
You could also put this command inside the config function in your plugins.lua file, where you require the vimwiki plugin. I just tend to put all my plugin-specific settings in their own “after/plugin” files for organisation.
vim.cmd([[
let wiki_1 = {}
let wiki_1.path = '~/vimwiki/website/'
let wiki_1.html_template = '~/vimwiki/website_html/'
let wiki_2 = {}
let wiki_2.path = '~/vimwiki/commonplace/'
let wiki_2.html_template = '~/vimwiki/commonplace_html/'
let g:vimwiki_list = [wiki_1, wiki_2]
call vimwiki#vars#init()
]])
The path keys tell vimwiki where to place the root index.wiki file for each wiki you configure.
The html_template keys tell vimwiki where to place the compiled html files (when running the :VimwikiAll2HTML command).
I keep them separate as I am deploying them to separate domains on my server.
When I want to open and edit my website wiki, I enter 1<leader>ww.
When I want to open and edit my commonplace wiki, I enter 2<leader>ww.
Pressing those key bindings for the first time will ask you whether you want the directories to be created.
How I use vimwiki
At the moment, my usage is standard, following what is described in the Github repository linked at the top of this page.
When I develop any custom workflows I’ll add them here.
Deployment
Setting up a server to deploy to is outside the scope of this post, but I hope to write up a quick guide soon.
I run the following command from within vim on one of my wiki index pages, to export that entire wiki to html files:
:VimwikiAll2HTML
I then SCP the compiled HTML files to my server. Here is an example scp command that you can modify with your own paths: