All Cores Overridden – Horizon Forbidden West
Reached the Core of every Cauldron and accessed their information.
love a good story
WordPress is incredible. There’s nothing I’d rather build my online home with. 💚
An overview of how I set up Kubernetes, and how I deploy my projects to it.
WIP: post not yet finalized.
This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.
This is a descriptive post and doesn't go into the technical details of setting up that infrastructure.
Those details will come in future posts.
Within Digital Ocean, I use their managed Kubernetes, managed database, DNS, S3-compatible Spaces with CDN, and container registry.
GitHub is what I use for the origin repository for all of my IaC code and project code. I also use GitHub Actions for automated tests and deployments.
I use Terraform for creating my infrastructure, along with Terraform Cloud for hosting my Terraform state files.
First, I set up my infrastructure in Digital Ocean and GitHub using Terraform.
This infrastructure includes the following Digital Ocean resources: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also includes two GitHub Actions secrets: the Digital Ocean access token and the Digital Ocean registry endpoint.
After the initial infrastructure is set up (the Kubernetes cluster specifically), I use Helm to install the nginx-ingress-controller into the cluster.
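As a rough sketch of what that Helm step can look like (this uses the community ingress-nginx chart; the release name and namespace here are examples, not necessarily what I used):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace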
I use Laravel Sail for local development.
For deployments I write a separate Dockerfile which builds on top of a php-fpm base image.
Any environment variables I need are added as a Kubernetes secret via the kubectl command from my local machine.
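For example, a secret can be created from a local env file like this (the secret name and file path are illustrative):
kubectl create secret generic laravel-env --from-env-file=./.env.production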
Everything that my Kubernetes cluster needs to know to deploy my Laravel project lives in a deployment.yml file in the project itself. This file is used by the GitHub Action responsible for deploying the project.
I add two workflow files for the project inside the ./.github/workflows/ directory.
The first workflow runs the full test suite, along with Pint and Larastan.
The second workflow is triggered only on the main branch, after the Tests (CI) action has completed successfully.
It will build the container image and tag it with the current git sha.
Following that, it will install doctl and authenticate with my Digital Ocean account, using the Actions secret for the access token I added during the initial Terraform stage.
Then it pushes that image to my Digital Ocean container registry.
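Stripped of the workflow syntax, those steps boil down to something like the following (the image name and environment variable names here are illustrative; GITHUB_SHA is provided by GitHub Actions):
# Build and tag the image with the current git sha
docker build -t "$DO_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA" .
# Authenticate doctl with the access token, then log Docker in to the registry
doctl auth init --access-token "$DO_ACCESS_TOKEN"
doctl registry login
# Push the image up to the Digital Ocean container registry
docker push "$DO_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA"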
The next step does a find and replace on the project's deployment.yml file. I've included a snippet of that file below:
containers:
  - name: davidpeachcouk
    image: <IMAGE>
    ports:
      - containerPort: 9000
It replaces that <IMAGE> placeholder with the full path to the newly-created image. It uses the other GitHub secret that was added in the Terraform stage: the Digital Ocean registry endpoint.
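A find and replace like that can be done with sed, for example (the replacement value shown is illustrative):
sed -i "s|<IMAGE>|$DO_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA|g" deployment.yml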
Finally, it sets up access to the Kubernetes cluster using the authenticated doctl command, before applying the deployment.yml file with the kubectl command. After that, it checks that the deployment was a success.
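Those final steps roughly amount to the following commands (the cluster and deployment names are examples):
# Fetch the cluster credentials and merge them into the local kubeconfig
doctl kubernetes cluster kubeconfig save my-cluster
# Apply the manifest and wait for the rollout to finish
kubectl apply -f deployment.yml
kubectl rollout status deployment/davidpeachcouk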
Automating backups of Docker volumes from a Linux server to Digital Ocean Spaces.
Backups are a must for pretty much anything digital. And automating those backups makes life so much easier, should you ever lose your data.
My own use case is backing up the data on my home server, since it stores my music collection and my family's photos and documents.
All of the services on my home server are installed with Docker, with all of the data in separate Docker volumes. This means I should only need to back up the folders that get mounted into the containers, since the services themselves could easily be re-deployed.
I also want this data to be encrypted, since I will be keeping an offline local copy as well as storing a copy with a third party cloud provider (Digital Ocean Spaces).
S3cmd is a command line utility for interacting with an S3-compliant storage system.
It will enable me to send a copy of my data to my Digital Ocean Spaces account, encrypting it beforehand.
The official installation instructions for s3cmd can be found on its GitHub repository.
For Arch Linux I used:
sudo pacman -S s3cmd
And for my home server, which is running Ubuntu Server, I installed it via Python’s package manager, “pip”:
sudo pip install s3cmd
Once installed, the first step is to run through the configuration steps with this command:
s3cmd --configure
Then answer the questions that it asks you.
You'll need a few items to complete the steps: your Digital Ocean Spaces access key and secret key, the Spaces region endpoint, and an encryption password for protecting your backups.
The other options should be fine as their default values.
Your configuration will be stored as a plain text file at ~/.s3cfg (the default config location for s3cmd). This includes that encryption password.
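Since that file contains credentials in plain text, it may be worth tightening its permissions so that only your user can read it:
chmod 600 ~/.s3cfg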
Since all of the data I actually care about on my server will be in directories that get mounted into docker containers, I only need to compress and encrypt those directories for backing up.
If ever I need to re-install my server I can just start all of the fresh docker containers, then move my latest backups to the correct path on the new server.
Here is my bash script that will archive, compress and push my data over to Digital Ocean Spaces, encrypting it via GPG before sending it.
I have added comments above each section to make it clearer what each step is doing:
#!/usr/bin/bash
## Root directory where all my backups are kept.
basepath="/home/david/backups"
## Variables for use below.
appname="nextcloud"
volume_from="nextcloud-aio-nextcloud"
container_path="/mnt/ncdata"
## Ensure the backup folder for the service exists.
mkdir -p "$basepath"/"$appname"
## Get current timestamp for backup naming.
datetime=$(date +"%Y-%m-%d-%H-%M-%S")
## Start a new ubuntu container, mounting all the volumes from my nextcloud container
## (I use Nextcloud All in One, so my Nextcloud service is called "nextcloud-aio-nextcloud")
## Also mount the local "$basepath"/"$appname" to the ubuntu container's "/backups" path.
## Once the ubuntu container starts it will run the tar command, creating the tar archive from
## the contents of the "$container_path", which is from the Nextcloud volume I mounted with
## the --volumes-from flag.
docker run \
  --rm \
  --volumes-from "$volume_from" \
  -v "$basepath"/"$appname":/backups \
  ubuntu \
  tar cvzf /backups/"$appname"-data-"$datetime".tar.gz "$container_path"

## Now I use the s3cmd command to move that newly-created
## backup tar archive to my Digital Ocean Spaces.
s3cmd -e put \
  "$basepath"/"$appname"/"$appname"-data-"$datetime".tar.gz \
  s3://scottie/"$appname"/
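Assuming the script is saved as /home/david/backup-nextcloud (the path referenced in the cron job further down), it also needs to be executable:
chmod +x /home/david/backup-nextcloud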
Cron jobs are a way to automate any tasks you want to on a Linux system.
You can have fine-grained control over how often you want to run a task.
Although working with Linux's cron scheduler is outside the scope of this guide, I will share the setting I have for my Nextcloud backup, along with a brief explanation of its configuration.
The command to edit what cron jobs are running on a Linux system, Ubuntu in my case, is:
crontab -e
This will open up a temporary file to edit, which will get written to the actual cron file when saved — provided it is syntactically correct.
This is the setting I have in mine for my Nextcloud backup (it should all be on a single line):
10 3 * * 1,4 /home/david/backup-nextcloud >> /home/david/backups/backup-nextcloud.log
The numbers and asterisks tell cron when the given command should run:
10: the 10th minute of the hour
3: the 3rd hour of the day
*: every day of the month (not relevant here)
*: every month (not relevant here)
1,4: the 1st and 4th days of the week (Monday and Thursday)
So my configuration says it will run the /home/david/backup-nextcloud command every Monday and Thursday at 3:10am. It will then append the command's output to my log file for my Nextcloud backups.
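You can double-check what cron currently has registered for your user with:
crontab -l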
Download the file from your Digital Ocean Spaces account.
Go into the directory it has downloaded to and run the file command on the archive:
# For example
file nextcloud-data-2023-11-17-03-10-01.tar.gz
# You should get something like the following feedback:
nextcloud-data-2023-11-17-03-10-01.tar.gz: GPG symmetrically encrypted data (AES256 cipher)
You can decrypt the archive with the following command:
gpg --decrypt nextcloud-data-2023-11-17-03-10-01.tar.gz > nextcloud-backup.tar.gz
When you are prompted for a passphrase, enter the one you set up when configuring the s3cmd command previously.
You can now extract the archive and see your data:
tar -xzvf nextcloud-backup.tar.gz
The archive will be extracted into the current directory.
By trade I am a PHP developer. I’ve never done devops in a professional setting. However, for a while I have had a strange fascination with various continuous integration and deployment strategies I’ve seen at many of my places of work.
I've seen some very complicated setups over the years, which created a mental block that stopped me from really digging in and understanding how to set up integration and deployment workflows.
But in my current role at Geomiq, I had the opportunity to be shown a possible setup, specifically using Kubernetes. And that was sort of a gateway drug, which finally led to me getting a working workflow up and running.
I now want to start sharing what I have learnt and build out a fully-fledged deployment workflow. I'm not sure how many posts it will take, or what structure they will have, but my aim is to make devops and CI/CD as approachable as possible.
Kill 500 enemies with any rifle, repeater, or shotgun in any game mode.
Complete Fort Mercer and Nosalida Hideouts.
Complete Tumbleweed and Tesoro Azul Hideouts.
In this guide I’ll show you a way to get started with Terraform — specifically with Digital Ocean.
Terraform is a program that can be used to build your cloud-based infrastructure from configuration files that you write. It's part of what is referred to as "Infrastructure as Code" (IaC).
Instead of going into various cloud provider UI dashboards and clicking around to build your resources, Terraform can do all of that provisioning for you. It uses the cloud provider APIs behind the scenes — you just describe exactly the infrastructure that you want to end up with.
In this guide, we will provision a simple Digital Ocean Server (a Droplet in Digital Ocean parlance) using Terraform from our local terminal.
If you don’t yet have a Digital Ocean account, feel free to use my referral link to set one up. With that link you’ll get $200 in credit to use over 60 days.
Install terraform
Terraform is available to install from pretty much all package repositories out there. Installing it should be as simple as running a one-line command in your terminal.
Set up your API token
In order to let the Terraform program make changes to your cloud provider account, you will need to set up API tokens and tell Terraform where to find them. In this post I'll only be setting up a single one for Digital Ocean.
The main.tf configuration file
A single main.tf file will be enough to get you something working. Add all of your needed resources / infrastructure in it.
The apply command
By running the terraform apply command against your main.tf file, you can turn your empty cloud infrastructure into a working setup.
Terraform’s documentation details the numerous ways of getting it installed across operating systems.
I use Arch Linux and so install it like so:
sudo pacman -Sy terraform
You can check it is installed and discoverable on your system by checking the version you have installed:
terraform -v
# My Output
Terraform v1.6.4
on linux_amd64
Now create an empty directory, which will be your “terraform project”. It doesn’t matter what you call the folder.
Then, inside that directory, create a file called main.tf. We'll come back to this file a little later.
Head to your Digital Ocean API Tokens dashboard and click “Generate New Token”. Give it a name, choose an expiry and make sure you click the “write” permission option. Click “generate token”.
There are a number of ways we can tell Terraform what our Digital Ocean API token is. One of them is to pass it as a variable to the apply command.
I will be opting for that option, but I don't want to have that token saved in my history, or have to pass it in every time I want to run a Terraform command.
So my solution is to write a small wrapper bash script that will read the contents of a file in my home directory (with my token in) and pass it as an argument to the Terraform apply command.
Create a file in your home directory called “terraform-test”. You can call it anything, just remember to reference it correctly when using it later in the guide.
Inside that file, paste only the API token that you got from your Digital Ocean API dashboard. Then save the file and close it.
Open a new file in the root of your Terraform project and add the following contents:
#!/usr/bin/bash
terraform "$@" -var "do_token=$(cat ~/terraform-test)"
Save that file as “myterraformwrapper”. (You can call it whatever you want, I use “myterraformwrapper” as an example).
Also make sure to give it executable permissions by running the following command against it:
chmod +x myterraformwrapper
The myterraformwrapper script does the following:
It runs the terraform command.
Any arguments passed to myterraformwrapper get put in the place of "$@".
It then appends the -var flag and sets the do_token parameter to the contents of the terraform-test file you created previously.
This means:
./myterraformwrapper apply
… behind the scenes, becomes…
terraform apply -var "do_token=CONTENTS_OF_YOUR_DO_TOKEN"
This means that you don't have to keep passing your Digital Ocean token in for every command, and you won't end up accidentally leaking the token inside your shell's env variables.
We will use that file later in this guide.
For this example, everything will be kept in a single file called main.tf. When you start working on bigger infrastructure plans, there is nothing stopping you from splitting out your configuration into multiple, single-purpose files.
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_droplet" "droplet" {
  image  = "ubuntu-22-04-x64"
  name   = "terraform-test"
  region = "lon1"
  size   = "s-1vcpu-1gb"
}
The terraform block
At the top of the file is the terraform block. This sets up the various providers that we want to work with for building out our infrastructure. In this example we only need the Digital Ocean one.
variable declarations
Variable declarations can be used to keep sensitive information out of our configuration (and thus out of source control later), as well as making our configuration more reusable.
Each of the variables that our configuration needs to run must be defined as a variable like above. You can define variables in a few different ways, but here I have opted for the simplest.
We can see that all our configuration needs is a do_token value passed to it.
provider setups
Each of the providers that we declare in our terraform block will probably need some kind of setup — such as an API token, as in our Digital Ocean example.
For us, the Digital Ocean provider needs only a token, which we pass in from the variable supplied via the CLI command.
resource declarations
We then declare the "resources" that we want Terraform to create for us in our Digital Ocean account. In this case we just want it to create a single small droplet as a proof of concept.
The values I have passed to the digitalocean_droplet resource would be great examples of where to use variables, potentially even with default placeholder values.
I have hard-coded the values here for brevity.
The apply command
Before running apply for the first time, we need to initialize the project:
terraform init
# You should see some feedback starting with this:
Terraform has been successfully initialized!
You can also run terraform plan before the apply command to see what Terraform will be provisioning for you. However, when running terraform apply, it shows you the plan and asks for explicit confirmation before building anything. So I rarely use plan.
If you run terraform apply, it will prompt you for any variables that your main.tf requires — in our case the do_token variable. We could type or paste it in every time we want to run a command, but a more elegant solution is to use the custom bash script we created earlier.
Assuming that bash script is in our current directory — the Terraform project folder — run the following:
./myterraformwrapper apply
This should display to you what it is planning to provision in your Digital Ocean account — a single Droplet.
Type the word “yes” and hit enter.
You should now see it giving you a status update every 10 seconds, ending in confirmation of the droplet being created.
If you head back over to your Digital Ocean account dashboard, you should see that new droplet sitting there.
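If you want to inspect the droplet from your terminal as well, reading the state doesn't need the Digital Ocean token, so the plain terraform command is fine here:
terraform show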
Just as Terraform can be used to create those resources, it can also be used to destroy them too. It goes without saying that you should always be mindful of just what you are destroying, but in this example we are just playing with a test droplet.
Run the following to destroy your newly-created droplet:
./myterraformwrapper destroy
Again, it will first show you what it is planning to change in your account — the destruction of that single droplet.
Type “yes” and hit enter to accept.
I love playing with Terraform, and will be sharing anything that I learn along my journey on my website.
You could start working through Terraform’s documentation to get a taste of what it can do for you.
You can even take a look at its excellent registry to see all of the providers that are available. Maybe even dig deep into the Digital Ocean provider documentation and see all of the available resources you could play with.
Just be careful how much you are creating, and don't forget to run the destroy command when you're done testing. The whole point of storing your infrastructure as code is that it is dead simple to provision and destroy it all.
Just don't go leaving test resources up and potentially running up a huge bill.
The Things which hurt, instruct.
— Benjamin Franklin
Deplete 30 items in public matches.
Purchase a rare weapon from a gunsmith.
Complete “The Assault on Fort Mercer”
I’ve seen some very elaborate homelab set-ups online but wanted to get the easiest possible implementation I could, within my current skill set.
As I have quite a lot of experience with using Docker for development in my day to day work, I thought I'd just try using docker compose to set up my homelab services.
Docker is a piece of software that allows you to package up your services / apps into "containers", along with any dependencies that they need to run.
What this means for you is that you can define all of the things you need to make your specific app work in a configuration file, called a Dockerfile. When the container is then built, it is built with all of the dependencies that you specify.
This is opposed to the older way of setting up a service / app / website: installing the required dependencies manually on the host server itself.
By setting up services using Docker (and its companion tool docker compose), you remove the need to install those dependencies manually yourself.
Not only that, but if different services that you install require different versions of the same dependencies, containers keep those different versions separate.
I used the guide for Ubuntu on the official Docker website.
Once docker and docker compose are installed on the server, I can then use a single configuration file for each of the services I want to put into my Home Lab. This means I don’t need to worry about the dependencies that those services need to work — because they are in their own containers, they are self-contained and need nothing to be added to the host system.
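As a rough sketch of that workflow (the directory layout and service name are just examples), each service gets its own directory and compose file, and can be started and inspected independently:
cd ~/homelab/portainer
# Start (or update) just this service in the background
docker compose up -d
# Tail its logs if something isn't working
docker compose logs -f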
There are services that can help you manage docker too. But that was one step too far outside of my comfort zone for what I want to get working right now.
I will, however, be installing a service called “Portainer”, detailed in my next Home Lab post, which gives you a UI in which to look at the docker services you have running.
I have gone with Ubuntu Server 22.04 LTS for my Homelab’s operating system.
Most of the videos I’ve seen for Homelab-related guides and reviews tend to revolve around Proxmox and/or TrueNAS. I have no experience with either of those, but I do have experience with Docker, so I am opting to go with straight up docker — at least for now.
I'm using a Linux-based system, so these instructions are based on that.
Head here to download your preferred version of Ubuntu Server. I chose the latest LTS version at the time of writing (22.04).
Once downloaded, insert a USB stick to write the Ubuntu Server ISO on to.
Firstly, check where your USB stick is on your filesystem. For that, I use fdisk:
sudo fdisk -l
Assuming the USB stick is located at "/dev/sdb", I use the dd command to create my bootable USB (please check and double check where your USB is mounted on your system):
sudo dd bs=4M if=/path/to/Ubuntu-Server-22-04.iso of=/dev/sdb status=progress oflag=sync
Boot the computer that you’re using for your server, using the USB stick as a temporary boot device.
Follow the steps that the set up guide gives you.
As an aside, I set my server's SSD drive up with the "LVM" option. This has helped immensely this week, as I have added a second drive and doubled my capacity to 440GB.
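As a rough idea of how that works when adding a drive (the device path and volume group / logical volume names below are the Ubuntu installer defaults and are assumptions here, so check yours with lsblk and vgs first):
# Turn the new drive into a physical volume and add it to the volume group
sudo pvcreate /dev/sdb
sudo vgextend ubuntu-vg /dev/sdb
# Grow the logical volume and the filesystem into the new space
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv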
I can't remember if ssh came installed or enabled, but you can install openssh and then enable the sshd service.
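On Ubuntu Server that looks something like this (the package is openssh-server, and the service is called ssh on Ubuntu):
sudo apt install openssh-server
sudo systemctl enable --now ssh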
You can then connect to the server from a device on your network with:
ssh username@192.168.0.77
This assumes your server's IP address is 192.168.0.77. Chances are very high it'll be a different number (although the 192.168.0 section may be correct).
I have an external keyboard in case I ever need to plug in to my server. However, now I have ssh enabled, I tend to just connect from my laptop using the ssh command shown just above.
Started to re-watch Breaking Bad. This’ll be my second time viewing. Think I’m gonna try and share my favourite shot from each episode.