Seeing this very long ladder going up Lichfield cathedral triggered me earlier today.
Absolutely terrifying. 😅🪜
As a self-professed Alan Partridge obsessive I am ashamed to say I’ve never listened to this podcast.
Currently 3 episodes in.
As incredible as I thought it would be.
🤣🤣🤣
The Death of Slim Shady (Coup de Grace) — Eminem.
First listen.
Oh my God. Arthur Morgan narrating a 10-and-a-half-hour book about the history behind the Red Dead Redemption games.
Count me in.
New Eminem album “The Death of Slim Shady” out this Friday.
Looking forward to this one.
Metal Gear Solid V: The Phantom Pain
Episode 6: Where do the bees sleep? S-Rank score card.
Starting work, and this morning I’ve already taken Henry and Sherlock for a walk, after S-Ranking the mission “Traitor’s Caravan [Extreme]” on Metal Gear Solid V.
Most modern tech, be it devices, software or digital services, is corrupt and cancerous. It promises increased productivity and ease of use in exchange for your privacy and your control.
My aim is to go clear, as much as I can, and document that journey here.
Some things I have already done, such as switching to a privacy-respecting email provider like Protonmail, and to a privacy-focused messaging service like Signal.
My aim for these posts is to give others a kind of guide to do this themselves. Some of the steps will be difficult for those who don’t have the privilege of time and / or technical know-how. This is also part of the problem – some alternatives are not easy or convenient to switch to. But I will do my best here.
Live-posting your life is a great way to let burglars know when you’re not gonna be in.
I need to rebuild my site once again. Perhaps with Laravel, but definitely in line with my love of the indieweb.
WordPress is incredible. There’s nothing I’d rather build my online home with. 💚
WIP: post not yet finalized.
This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.
This is a descriptive post and contains no technical detail on setting up this infrastructure.
That will come in future posts.
Within Digital Ocean, I use their managed Kubernetes, managed database, DNS, S3-compatible Spaces with CDN, and container registry.
Github is where I host my origin repositories for all IaC and project code. I also use the Actions CI features for automated tests and deployments.
I use Terraform for creating my infrastructure, along with Terraform cloud for hosting my Terraform state files.
I first set up my infrastructure in Digital Ocean and Github using Terraform.
This infrastructure includes these resources in Digital Ocean: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also includes two Action secrets in Github: the Digital Ocean Access Token and the Digital Ocean Registry Endpoint.
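The Terraform side follows the standard workflow; a minimal sketch of the commands (the actual .tf resource definitions aren’t shown in this post):

terraform init    # initialises the providers and the Terraform Cloud backend for state
terraform plan    # previews the Digital Ocean and Github resources to be created
terraform apply   # creates the cluster, Spaces bucket, database and Action secrets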
After the initial infrastructure is set up (the Kubernetes cluster specifically), I then use Helm to install the nginx-ingress-controller into the cluster.
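As a rough idea, installing the ingress controller with Helm looks something like this (a sketch assuming the community ingress-nginx chart; the exact chart and release names I use may differ):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace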
I use Laravel Sail for local development.
For deployments I write a separate Dockerfile which builds off of a php-fpm base image.
Any environment variables I need, I add as a Kubernetes secret via the kubectl command from my local machine.
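For example, something along these lines (the secret name and env file here are just placeholders):

kubectl create secret generic laravel-env \
  --from-env-file=.env.production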
Everything my Kubernetes cluster needs to know to deploy my Laravel project is in a deployment.yml file in the project itself.
This file is used by the Github action responsible for deploying the project.
I add two workflow files for the project inside the ./.github/workflows/ directory: one for running tests (CI) and one for deployments.
The tests workflow runs the full test suite, along with Pint and Larastan.
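Roughly speaking, that boils down to commands like these (assuming the default Laravel tooling paths; the exact invocations in my workflow may differ):

./vendor/bin/pint --test          # code style check, fails on violations
./vendor/bin/phpstan analyse      # static analysis via Larastan
php artisan test                  # the test suite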
The deployment workflow is triggered only on the main branch, after the Tests (ci) action has completed successfully.
It will build the container image and tag it with the current git sha.
Following that, it will install doctl and authenticate with my Digital Ocean account, using the Action secret for the access token that I added during the initial Terraform stage.
Then it pushes that image to my Digital Ocean container registry.
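The equivalent shell steps look roughly like this (the image name and environment variable names here are assumptions for illustration, not the exact workflow contents):

docker build -t "$DIGITALOCEAN_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA" .
doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"
doctl registry login
docker push "$DIGITALOCEAN_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA"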
The next step does a find and replace on the project’s deployment.yml file. I’ve included a snippet of that file below:
containers:
  - name: davidpeachcouk
    image: <IMAGE>
    ports:
      - containerPort: 9000
It replaces that <IMAGE> placeholder with the full path to the newly-created image. It uses the other Github secret that was added in the Terraform stage: the Digital Ocean Registry Endpoint.
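A simple way to do that find and replace is with sed, something like the following (again, the variable names are placeholders):

sed -i "s|<IMAGE>|$DIGITALOCEAN_REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA|g" deployment.yml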
Finally, it sets up access to the Kubernetes cluster using the authenticated doctl command, before applying the deployment.yml file with the kubectl command. After that, it checks that the deployment was a success.
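In shell terms, those final steps look something like this (the cluster name is a placeholder, and the rollout check is one way of verifying the deployment; my workflow may do it differently):

doctl kubernetes cluster kubeconfig save my-cluster-name
kubectl apply -f deployment.yml
kubectl rollout status deployment/davidpeachcouk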
Backups are a must for pretty much anything digital. And automating those backups makes life so much easier, should you ever lose your data.
My own use case is to back up the data on my home server, since it stores my music collection and my family’s photos and documents.
All of the services on my home server are installed with Docker, with all of the data in separate Docker volumes. This means I should only need to back up the folders that get mounted into the containers, since the services themselves can easily be re-deployed.
I also want this data to be encrypted, since I will be keeping both an offline local copy and a copy with a third-party cloud provider (Digital Ocean Spaces).
S3cmd is a command line utility for interacting with an S3-compliant storage system.
It will enable me to send a copy of my data to my Digital Ocean Spaces account, encrypting it beforehand.
The official installation instructions for s3cmd can be found on the Github repository.
For Arch Linux I used:
sudo pacman -S s3cmd
And for my home server, which is running Ubuntu Server, I installed it via Python’s package manager, “pip”:
sudo pip install s3cmd
Once installed, the first step is to run through the configuration steps with this command:
s3cmd --configure
Then answer the questions that it asks you.
You’ll need these items to complete the steps: your Digital Ocean Spaces access key and secret key, the endpoint for your Spaces region, and an encryption password (the GPG passphrase that will be used to encrypt your backups).
The other options should be fine as their default values.
Your configuration will be stored as a plain text file at ~/.s3cfg. This includes that encryption password.
Since all of the data I actually care about on my server will be in directories that get mounted into docker containers, I only need to compress and encrypt those directories for backing up.
If ever I need to re-install my server I can just start all of the fresh docker containers, then move my latest backups to the correct path on the new server.
Here is my bash script that will archive, compress and push my backup data over to Digital Ocean Spaces (encrypting it via GPG before sending it).
I have added comments above each section to make it clearer what each step is doing:
#!/usr/bin/bash
## Root directory where all my backups are kept.
basepath="/home/david/backups"
## Variables for use below.
appname="nextcloud"
volume_from="nextcloud-aio-nextcloud"
container_path="/mnt/ncdata"
## Ensure the backup folder for the service exists.
mkdir -p "$basepath"/"$appname"
## Get current timestamp for backup naming.
datetime=$(date +"%Y-%m-%d-%H-%M-%S")
## Start a new ubuntu container, mounting all the volumes from my nextcloud container
## (I use Nextcloud All in One, so my Nextcloud service is called "nextcloud-aio-nextcloud")
## Also mount the local "$basepath"/"$appname" to the ubuntu container's "/backups" path.
## Once the ubuntu container starts it will run the tar command, creating the tar archive from
## the contents of the "$container_path", which is from the Nextcloud volume I mounted with
## the --volumes-from flag.
docker run \
--rm \
--volumes-from "$volume_from" \
-v "$basepath"/"$appname":/backups \
ubuntu \
tar cvzf /backups/"$appname"-data-"$datetime".tar.gz "$container_path"
## Now I use the s3cmd command to move that newly-created
## backup tar archive to my Digital Ocean spaces.
s3cmd -e put \
"$basepath"/"$appname"/"$appname"-data-"$datetime".tar.gz \
s3://scottie/"$appname"/
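Assuming the script is saved as /home/david/backup-nextcloud (the path referenced in the cron job below), it just needs to be made executable, and can then be run manually once to test it:

chmod +x /home/david/backup-nextcloud
/home/david/backup-nextcloud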
Cron jobs are a way to automate any tasks you want to on a Linux system.
You can have fine-grained control over how often you want to run a task.
Although working with Linux’s cron scheduler is outside the scope of this guide, I will share the setting I have for my Nextcloud backup, along with a brief explanation of its configuration.
The command to edit what cron jobs are running on a Linux system, Ubuntu in my case, is:
crontab -e
This will open up a temporary file to edit, which will get written to the actual cron file when saved — provided it is syntactically correct.
This is the setting I have in mine for my Nextcloud backup (it should all be on a single line):
10 3 * * 1,4 /home/david/backup-nextcloud >> /home/david/backups/backup-nextcloud.log
The numbers and asterisks are telling cron when the given command should run:
10: minute (the 10th minute of the hour)
3: hour (3am)
*: day of month (any)
*: month (any)
1,4: day of week (Monday and Thursday)
So my configuration there says it will run the /home/david/backup-nextcloud command every Monday and Thursday at 3:10am. It will then append the command’s output to my log file for my Nextcloud backups.
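You can confirm the entry was saved by listing the current cron jobs:

crontab -l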
Download the file from your Digital Ocean Spaces account. Then go into the directory it was downloaded to and run the file command on the archive:
# For example
file nextcloud-data-2023-11-17-03-10-01.tar.gz
# You should get something like the following feedback:
nextcloud-data-2023-11-17-03-10-01.tar.gz: GPG symmetrically encrypted data (AES256 cipher)
You can decrypt the archive with the following command:
gpg --decrypt nextcloud-data-2023-11-17-03-10-01.tar.gz > nextcloud-backup.tar.gz
When you are prompted for a passphrase, enter the one you set up when configuring the s3cmd command previously.
You can now extract the archive and see your data:
tar -xzvf nextcloud-backup.tar.gz
The archive will be extracted into the current directory.
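To put that data back into a fresh Nextcloud container, the reverse of the backup step should work. A hedged sketch, assuming the same container and volume names used in the backup script above:

## Mount the volumes from the new Nextcloud container, plus the directory
## containing the decrypted archive, then extract it back over the volume.
## (tar stored the paths relative to /, so extracting at / restores /mnt/ncdata.)
docker run \
  --rm \
  --volumes-from nextcloud-aio-nextcloud \
  -v "$PWD":/restore \
  ubuntu \
  tar xzvf /restore/nextcloud-backup.tar.gz -C /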