Using docker and docker compose for my Homelab
I’ve seen some very elaborate homelab set-ups online but wanted to get the easiest possible implementation I could, within my current skill set.
As I have quite a lot of experience with using docker for development in my day-to-day work, I thought I'd just try using docker compose to set up my homelab services.
What is docker?
Docker is a piece of software that allows you to package up your services / apps into "containers", along with any dependencies that they need to run.
What this means for you is that you can define all of the things you need to make your specific app work in a configuration file, called a `Dockerfile`. When the container is built, it is built with all of the dependencies that you specify.

This is opposed to the older way of setting up a service / app / website: installing the required dependencies manually on the host server itself.
By setting up services using docker (and its companion tool docker compose), you remove the need to install dependencies manually yourself.
Not only that, but if different services that you install require different versions of the same dependencies, containers keep those different versions separate.
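To make that concrete, here is a minimal sketch of what a Dockerfile can look like (a hypothetical Node.js app is assumed here purely for illustration):

```bash
# Write a minimal Dockerfile describing the app and its dependencies:
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]
EOF

# Build the image and run a container from it:
docker build -t my-app .
docker run -d my-app
```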
Installing the docker tools
I use the guide for Ubuntu on the official docker website.
Once docker and docker compose are installed on the server, I can then use a single configuration file for each of the services I want to put into my Home Lab. This means I don’t need to worry about the dependencies that those services need to work — because they are in their own containers, they are self-contained and need nothing to be added to the host system.
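As a rough illustration of what that looks like per service, here is a minimal docker compose file (a sketch only; the nginx image and port mapping are stand-ins for whatever service you are actually running):

```bash
# Write a minimal docker-compose.yml for a single service:
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    restart: unless-stopped
EOF

# Start it in the background:
docker compose up -d
```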
There are services that can help you manage docker too. But that was one step too far outside of my comfort zone for what I want to get working right now.
I will, however, be installing a service called “Portainer”, detailed in my next Home Lab post, which gives you a UI in which to look at the docker services you have running.
-
Homelab initial setup
I have gone with Ubuntu Server 22.04 LTS for my Homelab’s operating system.
Most of the videos I’ve seen for Homelab-related guides and reviews tend to revolve around Proxmox and/or TrueNAS. I have no experience with either of those, but I do have experience with Docker, so I am opting to go with straight up docker — at least for now.
Setting up the Operating system
I’m using a Linux-based system and so instructions are based on this.
Step 1: Download the Ubuntu Server iso image
Head here to download your preferred version of Ubuntu Server. I chose the latest LTS version at the time of writing (22.04).
Step 2: Create a bootable USB stick with the iso image you downloaded.
Once downloaded, insert a USB stick to install the Ubuntu Server iso onto.
Firstly, check where your USB stick is on your filesystem. For that, I use fdisk:
```bash
sudo fdisk -l
```
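If you find the fdisk output noisy, `lsblk` (part of util-linux on most distros) gives a more compact view; run it before and after inserting the stick, and the device that appears is the one you want:

```bash
lsblk
```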
Assuming the USB stick is located at `/dev/sdb`, I use the `dd` command to create my bootable USB (please check and double check where your USB is mounted on your system):

```bash
sudo dd bs=4M if=/path/to/Ubuntu-Server-22-04.iso of=/dev/sdb status=progress oflag=sync
```
Step 3: Boot the Homelab computer from the bootable USB stick
Boot the computer that you’re using for your server, using the USB stick as a temporary boot device.
Step 4: Install the operating system
Follow the steps that the setup guide gives you.
As an aside, I set my server ssd drive up with the “LVM” option. This has helped immensely this week, as I have added a second drive and doubled my capacity to 440GB.
Step 5: Install and enable ssh remote access
I can't remember if ssh came installed or enabled, but you can install `openssh` and then enable the `sshd` service.
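If it turns out not to be installed, something like the following should cover both steps on Ubuntu (a sketch; note that on Ubuntu the package is `openssh-server` and the service unit is called `ssh`):

```bash
sudo apt update
sudo apt install openssh-server

# Start the service now and enable it at boot:
sudo systemctl enable --now ssh
```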
You can then connect to the server from a device on your network with:
```bash
ssh username@192.168.0.77
```
This assumes your server's IP address is `192.168.0.77`. Chances are very high it'll be a different number (although the `192.168.0` section may be correct).

Everything else done remotely
I have an external keyboard in case I ever need to plug in to my server. However, now I have ssh enabled, I tend to just connect from my laptop using the ssh command shown just above.
-
📂 Journal
Started to re-watch Breaking Bad. This’ll be my second time viewing. Think I’m gonna try and share my favourite shot from each episode.
-
Setting up mine, and my family’s, Homelab
I’ve opted for what I believe is the easiest, and cheapest, method of setting up my Homelab.
I’m using my old work PC which has the following spec:
- Quad core processor — i7, I think.
- 16gb of RAM
- 440GB ssd storage (2x 220gb in an LVM setup)
- A USB plug-in network adapter (really want to upgrade to an internal one though)
My Homelab Goals
My homelab goals are centered around two fundamental tenets: lower cost for online services and privacy.
I want to be:
- Hosting my own personal media backups: All my personal photos and videos I want stored in my own installation of Nextcloud. Along with those, I want to utilize its organizational apps too: calendar; todos; project planning; contacts.
- Hosting my own music collection: despite hating everything Google stands for, I do enjoy using its Youtube Music service. However, I have many CDs (yes, CDs) in the loft and don't like the idea of essentially renting access to music. Plus it would be nice to stream music to offline smart speakers (i.e. not Alexa; Google Speaker; et al.)
- Hosting old DVD films: I have lots of DVDs in the loft and would like to be able to watch them (without having to buy a new DVD player)
- Learning more about networking: configuring my own network is enjoyable to me and is something I want to increase my knowledge in. Hosting my own services for my family and myself is a great way to do this.
- Teach my son how to own and control his own digital identity (he's 7 months old): I want my son to be armed with the knowledge of modern-day digital existence and the privacy nightmares that engulf 95% of the web. And I want him to have the knowledge and ability to be able to control his own data and identity, should he wish to when he's older.
Documenting my journey
I will be documenting my Homelab journey as best as I can, and will tag all of these posts with the category of Homelab.
-
📂 Journal
Gutted that I’m now all up to date with Taskmaster. Only discovered it a month or so ago and been binging it.
-
I’m now running pi-hole through my Raspberry Pi 2b.
It’s both amazing and depressing just how many trackers are being blocked by it. I even noticed a regular ping being made to an Amazon endpoint exactly every 10 minutes.
I will try and write up my set up soon, which is a mix of setting up the Raspberry Pi and configuring my home router.
I’ve also managed to finally get a home server running again – using Ubuntu Server LTS.
My plan on my server is to just install services I want to self-host using docker. Docker being the only program I’ve installed on the machine itself.
So far I have installed the following:
- Home Assistant — On initial playing with this, I have decided that it's incredible. It connected to my LG TV and lets me control it from the app / my laptop.
- Portainer — A graphical way to interact with my docker containers on the server. (A sketch of its install command follows below.)
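For reference, the Portainer install is roughly the following (sketched from memory of the official docs, so double-check them before running it):

```bash
# Persistent volume for Portainer's data:
docker volume create portainer_data

# Run Portainer CE and expose its web UI on port 9443:
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```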
-
📂 Journal
I have decided to get back into tinkering with my Raspberry Pi.
I will be blogging my journey as I stumble through my initial playing, through to building out my first proper homelab.
This first Raspberry Pi (model 2b) will be initially used as both a wireguard VPN server and a local DNS server.
-
📂 Journal
The God Slayer by Otep
I’ve loved Otep’s music since discovering the album “Sevas Tra” — with that insane album cover being the thing that brought me in.
Earlier today (yesterday) I listened to the recently-released The God Slayer, made up of half original songs and half covers.
Loved it, although Sevas Tra has been — and remains — my favourite of Otep’s.
My favourite songs from my first listen of the album are definitely the covers of Eminem’s “The way I Am” and the Beach Boys’ “California Girls”.
🏷️ Rock
-
Average Semi-detached house prices in UK by county – Statistical Analysis using R
This is my first data visualization attempt and uses data from HM Land Registry to show the average cost of a semi-detached house in four counties across the past ten years.
You can see the full repository for the project on Github.
The Code
Here I have included the code at the time of writing this post. The git repository code may now differ slightly.
library("tidyverse") regions <- c( "Derbyshire", "Leicestershire", "Staffordshire", "Warwickshire" ) data <- read.csv("props.csv") data %>% filter(Region_Name %in% regions) %>% filter(Date > "2013-01-01") %>% ggplot(aes( Date, Semi_Detached_Average_Price )) + geom_point(aes(color = Region_Name), size = 3) + theme_bw() + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1)) + labs( title = "Average Semi-detached house prices per county", x = "Month and Year", y = "Average Price", color = "County" ) ggsave( "semi-detached-house-prices-derby-leicester-staffs-warwickshire.png", width = 4096, height = 2160, unit = "px" )
The Graph
Observations
Warwickshire has been the most expensive county to buy a semi-detached house out of the four counties observed.
Derbyshire has been the least expensive county to buy a semi-detached house out of the four counties observed.
The shapes of the lines formed seem consistent across the counties; the rate of price increase seems similar between them.
A lot can happen over ten years.
🏷️ ggplot
-
Using a single file neovim configuration file
When I first moved my Neovim configuration over to using lua, as opposed to the more traditional vimscript, I thought I was clever separating it up into many files and includes.
Turns out that it became annoying to edit my configuration. Not difficult; just faffy.
So I decided to just stick it all into a single `init.lua` file. And now it's much nicer to work with, in my opinion.
-
📂 Journal
Don’t stop building
I really enjoy building scripts for my own workflow.
I wish I had the skills to build things in the real world, but until then I’ll keep building stuff in the digital space only.
Although I love working with PHP and Laravel, it is Bash that has re-ignited a passion in me to just build stuff without thinking it's got to work towards being some kind of "profitable" side project.
Don’t. Stop. Building.
-
Lupo static site generator
What is Lupo?
Lupo is a simple static site generator, written in Bash.
I built it for myself to publish to a simple website of my own directly from the command line.
It was inspired by Rob Muhlestein and his approach to the Zettelkasten method.
Installation
Running through the following set of commands will install the lupo bash script to the following location on your system: `$HOME/.local/bin/lupo`
If you add the `$HOME/.local/bin` directory to your `$PATH`, then you can execute the `lupo` command from anywhere. I chose that directory as it seems to be a pretty standard location for user-specific scripts to live.
```bash
git clone https://github.com/davidpeach/lupo
cd ./lupo
./install
cd ..
rm -rf ./lupo
```
Anatomy of a Lupo website
The structure of a newly-initialized Lupo website project is as follows:
```bash
.
./html/
./src/
./src/style.css
./templates/
./tmp/
```
All of your website source code lives within the `./src` directory. This is where you structure your website however you want it to be structured in the final html.

You can write your pages / posts in markdown and lupo will convert them when building.
When building it into the final html, lupo will copy the structure of your `./src` directory into your `./html` directory, converting any markdown files (any files ending in `.md`) into html files.

Any JavaScript or CSS files are left alone and copied over in the same directory relative to the `./html` root.

Starting a lupo website
Create a directory that you want to be your website project, and initialize it as a Lupo project:
```bash
mkdir ./my-website
cd ./my-website
lupo init
```
The `init` command will create the required directories, including a file located at `$HOME/.config/lupo/config`.

You don't need to worry about the config file just yet.
Create your homepage file and add some text to it:
```bash
touch ./src/index.md
echo "Hello World" > ./src/index.md
```
Now just run the build command to generate the final html:
```bash
lupo build
```
You should now have two files in your `./html` directory: an `index.html` file and a `style.css` file.

The `index.html` was converted from your `./src/index.md` file and moved into the root of the `./html` directory. The `style.css` file was copied over verbatim to the html directory.

Viewing your site locally
Lupo doesn't currently have a way to launch a local webserver, but you could open a browser and point the address bar to the root of your project's `./html` folder.

I use an nginx docker image to preview my site locally, and will build this functionality into lupo soon.
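For example, something along these lines will serve the generated files while you work (a sketch using the official nginx image; the port choice is arbitrary):

```bash
# Serve ./html at http://localhost:8080 until you press Ctrl+C
docker run --rm -p 8080:80 \
  -v "$(pwd)/html":/usr/share/nginx/html:ro \
  nginx
```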
Page metadata
Each markdown page that you create can have an optional metadata section at the top of the page. This is known as "frontmatter". Here is an example you could add to the top of your `./src/index.md` file:

```markdown
---
title: My Super Homepage
---

Here is the normal page content
```
That will set the page's title to "My Super Homepage". This will also make the `%title%` variable available in your template files. (More on templates further down the page.)

If you re-run the `lupo build` command, and look again at your homepage, you should now see an `<h1>` tag with your title inside.

The Index page
You can generate an index of all of your pages with the `index` command:

```bash
lupo index
lupo build
```
Once you've built the website after running `index`, you will see a file at `./html/index/index.html`. This is a simple index / archive of all of the pages on your website.

For pages with a title set in their metadata block, that title will be used in the index listing. For any pages without a title set, the uri to the page will be used instead.
@todo ADD SEARCH to source and add to docs here.
Tag index pages
Within your page metadata block, you can also define a list of “tags” like so:
```markdown
---
title: My Super Page
tags:
    - tagone
    - tagtwo
    - anotherone
---

The page content.
```
When you run the `lupo index` command, it will also go through all of your pages and use the tags to generate "tag index pages".

These are located at the following location / uri: `./html/tags/tagname/index.html`.

These tag index pages will list all pages that contain that index's tag.
Customizing your website
Lupo is very basic and doesn’t offer that much in the way of customization. And that is intentional – I built it as a simple tool for me and just wanted to share it with anyone else that may be interested.
That being said, there are currently two template files within the `./templates` directory:

```bash
./templates/default.template.html
./templates/tags.template.html
```
`tags.template.html` is used when generating the "tag index" pages and the main "index" page. `default.template.html` is used for all other pages.

I am planning to add some flexibility to this in the near future and will update this page when added.
You are free to customize the templates as you want. And of course you can go wild with your CSS.
I'm also considering adding an opt-in css compile step to enable the use of something like `sass`.

New post helper
To help with the boilerplate of adding a new "post", I added the following command:
```bash
lupo post
```
When run, it will ask you for a title. Once answered, it will generate the post src file and pre-fill the metadata block with that title and the current date and timestamp.
The post will be created at the following location:
```bash
./src/{year}/{month}/{date}/{timestamp}/{url-friendly-title}

# For example:
./src/2023/08/30/1693385086/lupo-static-site-generator/index.html
```
Page edit helper
At present, this requires you to have `fzf` installed. I am looking to try and replace that dependency with the `find` command.

To help find a page you want to edit, you can run the following command:
```bash
lupo edit
```
This will open up a fuzzy search finder where you can type to search for the page you want to edit.
The results will narrow down as you type.
When you press enter, it will attempt to open that source page in your system's default editor, defined in your `$EDITOR` environment variable.

Automatic rebuild on save
This requires you to have `inotifywait` installed.
Sometimes you will be working on a longer-form page or post, and want to refresh the browser to see your changes as you write it.
It quickly becomes tedious to have to keep running `lupo build` to see those changes.

So running the following command will "watch" your `./src` directory for any changes, and rebuild any file that is altered in any way. It will only rebuild that single file; not the entire project.
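That command is the `watch` command (the same one referenced by the "Going live" section further down):

```bash
lupo watch
```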
Deploying to a server

This requires you to have `rsync` installed.

This assumes that you have a server set up and ready to host a static html website.
I covered how I set up my own server in this Terraform post and this Ansible post.
All that lupo needs to be able to deploy your site is for you to add the required settings in your config file at `$HOME/.config/lupo/config`:
- remote_user – This is the user who owns the directory where the html files will be sent to.
- ssh_identity_key – This is the path to the private key file on your computer that pairs with the public key on your remote server.
- domain_name – The domain name pointing to your server.
- remote_directory – The full path to the directory where your html files are served from on your server.
For example:
```bash
remote_user: david
ssh_identity_key: ~/.ssh/id_rsa
domain_name: example.com
remote_directory: /var/www/example.com
```
Then run the following command:
```bash
lupo push
```
With any luck you should see the feedback for the files pushed to your remote server.
Assuming you have set up your domain name to point to your server correctly, you should be able to visit your website in a browser and see your newly-deployed website.
Going live
This is an experimental feature
If you've got the `lupo watch` and `lupo push` commands working, then the live command should also work:

```bash
lupo live
```
This will watch your project for changes, and recompile each updated page and push it to your server as it is saved.
The feedback is a bit verbose currently and the logic needs making a bit smarter. But it does currently work in its initial form.
🏷️ Lupo
-
Using ansible to prepare a digital ocean droplet to host a static website
Preface
This guide comes logically after the previous one I wrote about setting up a digital ocean server with Terraform.
You can clone my website’s ansible repository for reference.
The main logic for this Ansible configuration happens in the `setup.yml` file. This file can be called whatever you like, as we'll call it by name later on.

Installing Ansible
You can install Ansible with your package manager of choice.
I install it using pacman on Arch Linux:
```bash
sudo pacman -S ansible
```
The inventory.yml file
The inventory file is where I have set the relative configuration needed for the playbook.
The `all` key contains all of the host configurations (although I'm only using a single one).
all
key isvars.ansible_ssh_private_key_file
which is just the local path to the ssh private key used to access the server.This is the key I set up with Terraform in the previous guide.
Then the
hosts
key just contains the hosts I want to be able to target (im using the domain name that I set up in the previous Terraform guide)The setup.yml file explained
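To make that concrete, if you were writing the inventory from scratch it would look something like this (a sketch reconstructed from the description above; the real file in the repository may differ slightly):

```bash
# Sketch of inventory.yml, using Ansible's YAML inventory format:
cat > inventory.yml <<'EOF'
all:
  vars:
    ansible_ssh_private_key_file: ~/.ssh/id_rsa.davidpeachme
  hosts:
    zet.davidpeach.me:
EOF
```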
The setup.yml file explained

The setup.yml file is what is known as an "Ansible Playbook".
From my limited working knowledge of Ansible, a playbook is basically a set of tasks that are run against a server or a collection of servers.
In my own one I am currently only running it against a single server, which I am targeting via its domain name of “zet.davidpeach.me”
```yaml
- hosts: all
  become: true
  user: root
  vars_files:
    - vars/default.yml
```
This first section is the setup of the playbook.
`hosts: all` tells it to run against all hosts that are defined in the `./inventory.yml` file.

`become: true` is saying that ansible will switch to the root user on the server (defined on the next line with `user: root`) before running the playbook tasks.

The `vars_files:` part lets you set relative paths to files containing variables that are used in the playbook and inside the file `./files/nginx.conf.j2`.

I won't go through each of the variables, but hopefully you can see what they are doing.
The Playbook Tasks
Each of the tasks in the Playbook has a descriptive title that hopefully does well in explaining what the steps are doing.
The key-value pairs of configuration after each of the task `title`s are pre-defined settings available to use in ansible.

The tasks read from top to bottom and essentially automate the steps that normally need to be manually done when preparing a server.
Running the playbook
```bash
cd ansible-project
ansible-playbook setup.yml -i inventory.yml
```
This command should start Ansible off. You should get the usual message about trusting the target host when first connecting to the server. Just answer “yes” and press enter.
You should now see the output for each step defined in the playbook.
The server should now be ready to deploy to.
Testing your webserver
In the `./files/nginx.conf.j2` there is a `root` directive on line 3. For me this is set to `/var/www/{{ http_host }}`. (`http_host` is a variable set in the `vars/default.yml` file.)

SSH on to the server, using the private ssh key from the keypair I am using (see the Terraform guide for reference).
```bash
ssh -i ~/.ssh/id_rsa.davidpeachme zet.davidpeach.me
```
Then on the server, create a basic `index.html` file in the website root defined in the default nginx file:

```bash
cd /var/www/zet.davidpeach.me
touch index.html
echo "hello world" > index.html
```
Now, going to your website url in a browser, you should be able to see the text “hello world” in the top left.
The server is ready to host a static html website.
Next Step
You can use whatever method you prefer to get your html files on to your server.
You could use `rsync`, `scp`, an overly-complicated CI pipeline, or – if you're using lupo – you could have lupo deploy it straight to your server for you.
-
Setting up a Digital Ocean droplet for a Lupo website with Terraform
Overview of this guide
My Terraform Repository used in this guide
Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider’s dashboard and manually clicking buttons and setting up things yourself.
This is known as “Infrastructure as Code”.
It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.
This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers setup to point to Digital Ocean.
You can then build upon those foundations and work on building out your own desired infrastructures.
The Terraform Flow
As a brief outline, here is what will happen when working with terraform. This will hopefully give you a broad picture, from which I can fill in the blanks below.
- Firstly we write a configuration file that defines the infrastructure that we want.
- Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
- Finally we run the `terraform plan` command to test our infrastructure configuration, and then `terraform apply` to make it all live.
Installing the Terraform program
Terraform has installation instructions, but you may be able to find it with your package manager.
Here I am installing it on Arch Linux, by the way, with `pacman`:

```bash
sudo pacman -S terraform
```
Setting the required variables
The configuration file for the infrastructure I am using requires only a single variable from outside. That is the `do_token`.

This is created manually in the API section of the Digital Ocean dashboard. Create yours and keep its value to hand for usage later.
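As an illustration, these are the two standard ways Terraform can pick up that value non-interactively (both are stock Terraform behaviour; the token below is a made-up placeholder):

```bash
# Option 1: a TF_VAR_-prefixed environment variable, scoped to one command:
TF_VAR_do_token="dop_v1_placeholder" terraform plan

# Option 2: pass it explicitly with the -var flag:
terraform plan -var "do_token=dop_v1_placeholder"
```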
Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then use them when prompted by the terraform command. This is slightly more long-winded than just setting a terraform-specific `env` var in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.

Creating an ssh key
In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought I’d create a key pair specific for my website deployment.
```bash
ssh-keygen -t rsa
```
I give it a different name so as to not override my standard `id_rsa` one. I call it `id_rsa.davidpeachme` just so I know which is my website server one at a glance.

Describing your desired infrastructure with code
Terraform uses a declarative language, as opposed to imperative.
What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.
You don't need to be concerned with the nitty-gritty of how it is achieved.
I have a real-life example that will show you exactly what a minimal configuration can look like.
Clone / fork the repository for my website server.
Explanation of my terraform repository
```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

# Variables whose values are defined in ./terraform.tfvars
variable "domain_name" {}
variable "droplet_image" {}
variable "droplet_name" {}
variable "droplet_region" {}
variable "droplet_size" {}
variable "ssh_key_name" {}
variable "ssh_local_path" {}

provider "digitalocean" {
  token = var.do_token
}
```
The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.
Since I’m only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.
The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.
Following that are the variables that I have defined myself in the `./terraform.tfvars` file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.

The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.
YAMLresource "digitalocean_ssh_key" "ssh_key" { name = var.ssh_key_name public_key = file(var.ssh_local_path) }
Here is the first resource that I am telling terraform to create. It's taking a public key on my local filesystem and sending it to Digital Ocean.
This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.
I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.
I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.
YAMLresource "digitalocean_droplet" "droplet" { image = var.droplet_image name = var.droplet_name region = var.droplet_region size = var.droplet_size ssh_keys = [digitalocean_ssh_key.ssh_key.fingerprint] }
Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.
YAMLdata "digitalocean_domain" "domain" { name = var.domain_name }
This block is a little different. Here I am using the `data` property to grab information about something that already exists in my Digital Ocean account.

I have already set up my domain in Digital Ocean's networking area.
This is the overarching domain itself – not the specific A record that will point to the server.
The reason I'm doing it this way is because I have got mailbox settings and TXT records that are working, so I don't want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run `terraform destroy`.

```hcl
resource "digitalocean_record" "record" {
  domain = data.digitalocean_domain.domain.id
  type   = "A"
  name   = "@"
  ttl    = 60
  value  = "${digitalocean_droplet.droplet.ipv4_address}"
}
```
The final block creates the actual A record with my existing domain settings.
It uses the domain id given back by the data block I defined above, and the ip address of the created droplet for the A record value.
Testing and Running the config to create the infrastructure
If you now go into the root of your terraform project and run the following command, you should see it displays a write up of what it intends to create:
```bash
terraform plan
```
If the output looks okay to you, then type the following command and enter “yes” when it asks you:
```bash
terraform apply
```
This should create the three items of infrastructure we have defined.
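If you want a quick sanity check that the new A record is resolving, something like `dig` works well (a hypothetical domain is shown here; `dig` ships in most distros' dnsutils / bind-utils packages):

```bash
# Should print the droplet's IPv4 address once DNS has propagated:
dig +short example.com A
```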
Next Step
Next we need to set that server up with the required software needed to run a static html website.
I will be doing this with a program called Ansible.
I’ll be writing up those steps in a zet very soon.