By trade I am a PHP developer. I’ve never done devops in a professional setting. However, for a while I have had a strange fascination with various continuous integration and deployment strategies I’ve seen at many of my places of work.
I’ve seen some very complicated setups over the years, which created a mental block that stopped me from really digging in and understanding how to set up integration and deployment workflows.
But in my current role at Geomiq, I had the opportunity of being shown a possible setup — specifically using Kubernetes. And that was sort of a gateway drug, which finally led me to getting a working workflow up and running.
I now want to start sharing what I have learnt and build out a fully-fledged deployment workflow. Not sure how many posts it will take, or what structure it will take, but my aim is to make devops and CI/CD as approachable as possible.
Terraform is a program that builds your cloud-based infrastructure from configuration files that you write. It’s part of what is referred to as “Infrastructure as Code” (IaC).
Instead of going into various cloud provider UI dashboards and clicking around to build your resources, Terraform can do all that provisioning for you. It uses the cloud provider APIs behind the scenes — you just describe exactly the infrastructure that you want to end up with.
In this guide, we will provision a simple Digital Ocean Server (a Droplet in Digital Ocean parlance) using Terraform from our local terminal.
If you don’t yet have a Digital Ocean account, feel free to use my referral link to set one up. With that link you’ll get $200 in credit to use over 60 days.
Setting up Terraform in 4 steps
1 :: Install terraform
Terraform is available to install from pretty much all package repositories out there.
Installing it should be as simple as running a one-line command in your terminal.
2 :: Configure any required cloud provider API tokens
In order to let the Terraform program make changes to your cloud provider account, you will need to set up API tokens and tell Terraform where to find them.
In this post I’ll only be setting up a single one for Digital Ocean.
3 :: Write your main.tf configuration file
A single main.tf file will be enough to get you something working.
Add all of your needed resources / infrastructure in it.
4 :: Run the apply command
By running the terraform apply command against your main.tf file, you can turn your empty cloud infrastructure into a working setup.
Step 1 :: Install Terraform
Terraform’s documentation details the numerous ways of getting it installed across operating systems.
I use Arch Linux and so install it like so:
Bash
sudo pacman -Sy terraform
You can check it is installed and discoverable on your system by checking the version you have installed:
Bash
terraform -v

# My Output
Terraform v1.6.4
on linux_amd64
Now create an empty directory, which will be your “terraform project”. It doesn’t matter what you call the folder.
Then inside that directory create a file called main.tf. We’ll come back to this file a little later.
Step 2 :: Configure any required cloud provider API tokens
Head to your Digital Ocean API Tokens dashboard and click “Generate New Token”. Give it a name, choose an expiry and make sure you click the “write” permission option. Click “generate token”.
There are a number of ways we can tell Terraform what our Digital Ocean API Token is:
Obviously we could hard code it for the purposes of just getting it running while learning, though I wouldn’t recommend this approach even in testing.
Another is to use Terraform-specific environment variables set on your system. This has been my approach in the past. However, I came to realize how unsafe this was, as every program you install has the potential to read your environment variables.
A third way is to pass it as a parameter when calling the apply command.
I will be opting for that third option, but I don’t want to have that token saved in my history or have to pass it in every time I want to run a Terraform command.
So my solution is to write a small wrapper bash script that will read the contents of a file in my home directory (with my token in) and pass it as an argument to the Terraform apply command.
Creating a wrapper bash script to safely pass secret token to command
Create a file in your home directory called “terraform-test”. You can call it anything, just remember to reference it correctly when using it later in the guide.
Inside that file, paste only the API token that you got from your Digital Ocean API dashboard. Then save the file and close it.
Open a new file in the root of your Terraform project and add the following contents:
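A minimal version of that wrapper — saved as myterraformwrapper, the name used in the commands later on — could look something like this:

Bash
#!/usr/bin/env bash
# Forward all arguments to terraform, appending the Digital Ocean token
# read from the file in my home directory.
# (This works for the plan / apply / destroy commands used in this guide.)
terraform "$@" -var "do_token=$(cat "$HOME/terraform-test")"

Remember to make it executable with chmod +x ./myterraformwrapper.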
This means that you don’t have to keep passing your Digital Ocean token in for every command, and you won’t end up accidentally leaking the token inside your shell’s env variables.
We will use that file later in this guide.
Step 3 :: Write your main.tf configuration file
For this example, everything will be kept in a single file called main.tf. When you start working on bigger infrastructure plans, there is nothing stopping you from splitting out your configuration into multiple, single-purpose files.
At the top of the file is the terraform block. This sets up the various providers that we want to work with for building out our infrastructure. In this example we only need the digital ocean one.
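That block looks something like this — the provider source is the official digitalocean/digitalocean one, and the version constraint is just an example, so pin whichever release you’re using:

HCL
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}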
variable declarations
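For this project it is a single, bare declaration:

HCL
variable "do_token" {}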
Variable declarations can be used to keep sensitive information out of our configuration — and thus out of source control later — as well as making our configuration more reusable.
Each of the variables that our configuration needs to run must be defined as a variable like above. You can define variables in a few different ways, but here I have opted for the simplest.
We can see that all our configuration needs is a do_token value passed to it.
provider setups
Each of the providers that we declare in our terraform block will probably need some kind of setup — such as an api token like our Digital Ocean example.
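For Digital Ocean, that setup looks something like this:

HCL
provider "digitalocean" {
  token = var.do_token
}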
For us, we can see that setting up Digital Ocean’s provider needs only a token, which we pass in from the variable supplied via the CLI command.
resource declarations
We then declare the “resources” that we want Terraform to create for us in our Digital Ocean account. In this case we just want it to create a single small droplet as a proof of concept.
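A droplet resource of that shape looks something like this — the image, region and size slugs are just example values, so swap in whatever you want:

HCL
resource "digitalocean_droplet" "test" {
  image  = "ubuntu-22-04-x64"
  name   = "terraform-test-droplet"
  region = "lon1"
  size   = "s-1vcpu-1gb"
}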
The values I have passed to the digitalocean_droplet resource would be great examples of where to use variables, potentially even with default placeholder values.
I have hard coded the values here for brevity.
Step 4 :: Run the apply command
Before running apply for the first time, we first need to initialize the project:
Bash
terraform init

# You should see some feedback starting with this:
Terraform has been successfully initialized!
You can also run terraform plan before the apply command to see what Terraform will be provisioning for you. However, when running terraform apply, it shows you the plan and asks for explicit confirmation before building anything. So I rarely use plan.
If you run terraform apply, it will prompt you for any variables that your main.tf requires — in our case the do_token variable. We could type it / paste it in every time we want to run a command. But a more elegant solution would be to use that custom bash script we created earlier.
Assuming that bash script is in our current directory — the Terraform project folder — run the following:
Bash
./myterraformwrapper apply
This should display to you what it is planning to provision in your Digital Ocean account — a single Droplet.
Type the word “yes” and hit enter.
You should now see it giving you a status update every 10 seconds, ending in confirmation of the droplet being created.
If you head back over to your Digital Ocean account dashboard, you should see that new droplet sitting there.
Step 5 :: Bonus: destroying resources.
Just as Terraform can be used to create those resources, it can also be used to destroy them too. It goes without saying that you should always be mindful of just what you are destroying, but in this example we are just playing with a test droplet.
Run the following to destroy your newly-created droplet:
Bash
./myterraformwrapper destroy
Again, it will first show you what it is planning to change in your account — the destruction of that single droplet.
Type “yes” and hit enter to accept.
Next Steps
I love playing with Terraform, and will be sharing anything that I learn along my journey on my website.
You could start working through Terraform’s documentation to get a taste of what it can do for you.
You can even take a look at its excellent registry to see all of the providers that are available. Maybe even dig deep into the Digital Ocean provider documentation and see all of the available resources you could play with.
Just be careful how much you are creating and when testing don’t forget to run the destroy command when you’re done. The whole point of storing your infrastructure as code is that it is dead simple to provision and destroy it all.
Just don’t go leaving test resources up and potentially running up a huge bill for yourself.
I’ve seen some very elaborate homelab set-ups online but wanted to get the easiest possible implementation I could, within my current skill set.
As I have quite a lot of experience with using docker for development in my day-to-day work, I thought I’d just try using docker compose to set up my homelab services.
What is docker?
Docker is a piece of software that allows you to package up your services / apps into “containers”, along with any dependencies that they need to run.
What this means for you is that you can define all of the things you need to make your specific app work in a configuration file, called a Dockerfile. When the container is then built, it is built with all of the dependencies that you specify.
This is opposed to the older way of setting up a service / app / website: installing the required dependencies manually on the host server itself.
By setting up services using docker (and its companion tool docker compose) you remove the need to install those dependencies manually yourself.
Not only that, but if different services that you install require different versions of the same dependencies, containers keep those different versions separate.
Once docker and docker compose are installed on the server, I can then use a single configuration file for each of the services I want to put into my Home Lab. This means I don’t need to worry about the dependencies that those services need to work — because they are in their own containers, they are self-contained and need nothing to be added to the host system.
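As a rough illustration of that workflow — the directory layout below is just how I happen to organize things, not something docker requires:

Bash
# Each service lives in its own folder with its own docker-compose.yml
cd ~/homelab/some-service   # "some-service" is a placeholder for any service folder
docker compose up -d        # pull the images and start the containers in the background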
There are services that can help you manage docker too. But that was one step too far outside of my comfort zone for what I want to get working right now.
I will, however, be installing a service called “Portainer”, detailed in my next Home Lab post, which gives you a UI in which to look at the docker services you have running.
Most of the videos I’ve seen for Homelab-related guides and reviews tend to revolve around Proxmox and/or TrueNAS. I have no experience with either of those, but I do have experience with Docker, so I am opting to go with straight up docker — at least for now.
Setting up the Operating system
I’m using a Linux-based system and so instructions are based on this.
Step 1: Download the Ubuntu Server iso image
Head here to download your preferred version of Ubuntu Server. I chose the latest LTS version at the time of writing (22.04).
Step 2: Create a bootable USB stick with the iso image you downloaded.
Once downloaded, insert a USB stick to write the Ubuntu Server ISO onto.
Firstly, check where your USB stick is on your filesystem. For that, I use fdisk:
Bash
sudo fdisk -l
Assuming the USB stick is located at “/dev/sdb“, I use the dd command to create my bootable USB (please check and double check where your USB is mounted on your system):
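Bash
# Write the downloaded ISO to the USB stick. This wipes /dev/sdb, so
# triple-check the device name. The ISO filename is whatever you downloaded.
sudo dd if=./ubuntu-22.04-live-server-amd64.iso of=/dev/sdb bs=4M status=progress conv=fsync

The ISO filename above is just an example — point dd at the file you actually downloaded, and make sure of= points at your USB stick and nothing else.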
Step 3: Insert the bootable USB stick into the Homelab computer and boot from it
Boot the computer that you’re using for your server, using the USB stick as a temporary boot device.
Step 4: Install the operating system
Follow the steps that the setup guide gives you.
As an aside, I set my server’s SSD up with the “LVM” option. This has helped immensely this week, as I have added a second drive and doubled my capacity to 440GB.
Step 5: Install and enable ssh remote access
I can’t remember if ssh came installed or enabled, but you can install openssh and then enable the sshd service.
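On Ubuntu Server that looks something like this (the package is openssh-server, and the service unit is usually called ssh rather than sshd on Ubuntu):

Bash
sudo apt install openssh-server
sudo systemctl enable --now ssh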
You can then connect to the server from a device on your network with:
Bash
ssh username@192.168.0.77
This assumes your server’s IP address is 192.168.0.77. Chances are very high it’ll be a different number (although the 192.168.0 section may be correct).
Everything else done remotely
I have an external keyboard in case I ever need to plug in to my server. However, now that I have ssh enabled, I tend to just connect from my laptop using the ssh command shown just above.
Status
Started to re-watch Breaking Bad. This’ll be my second time viewing. Think I’m gonna try and share my favourite shot from each episode.
I’ve opted for what I believe is the easiest, and cheapest, method of setting up my Homelab.
I’m using my old work PC which has the following spec:
Quad core processor — i7, I think.
16GB of RAM
440GB SSD storage (2x 220GB in an LVM setup)
A USB plug-in network adapter (really want to upgrade to an internal one though)
My Homelab Goals
My homelab goals are centered around two fundamental tenets: lower cost for online services and privacy.
I want to be:
Hosting my own personal media backups: All my personal photos and videos I want stored in my own installation of Nextcloud. Along with those, I also want to utilize its organizational apps: calendar; todos; project planning; contacts.
Hosting my own music collection: despite hating everything Google stands for, I do enjoy using its Youtube Music service. However, I have many CDs (yes, CDs) in the loft and don’t like the idea of essentially renting access to music. Plus it would be nice to stream music to offline smart speakers (i.e. not Alexa; Google Speaker; et al.)
Hosting old DVD films: I have lots of DVDs in the loft and would like to be able to watch them (without having to buy a new DVD player)
Learning more about networking: configuring my own network is enjoyable to me and is something I want to increase my knowledge in. Hosting my own services for my family and myself is a great way to do this.
Teach my Son how to own and control his own digital identity (he’s 7 months old): I want my Son to be armed with the knowledge of modern-day digital existence and the privacy nightmares that engulf 95% of the web. And I want him to have the knowledge and ability to control his own data and identity, should he wish to when he’s older.
Documenting my journey
I will be documenting my Homelab journey as best as I can, and will tag all of these posts with the category of Homelab.
I’m now running pi-hole through my Raspberry Pi 2b.
It’s both amazing and depressing just how many trackers are being blocked by it. I even noticed a regular ping being made to an Amazon endpoint exactly every 10 minutes.
I will try and write up my setup soon, which is a mix of setting up the Raspberry Pi and configuring my home router.
I’ve also managed to finally get a home server running again – using Ubuntu Server LTS.
My plan on my server is to just install services I want to self-host using docker. Docker being the only program I’ve installed on the machine itself.
So far I have installed the following:
Home Assistant — On initial playing with this I have decided that it’s incredible. I connected it to my LG TV and it lets me control the TV from the app / my laptop.
Portainer — A graphical way to interact with my docker containers on the server.
Status
I have decided to get back into tinkering with my Raspberry Pi.
I will be blogging my journey as I stumble through my initial playing, through to building out my first proper homelab.
This first Raspberry Pi (model 2b) will be initially used as both a wireguard VPN server and a local DNS server.
This is my first data visualization attempt and uses data from HM Land Registry to show the average cost of a semi-detached house in four counties across the past ten years.
When I first moved my Neovim configuration over to using lua, as opposed to the more traditional vimscript, I thought I was clever separating it up into many files and includes.
Turns out that it became annoying to edit my configuration. Not difficult; just faffy.
So I decided to just stick it all into a single init.lua file. And now it’s much nicer to work with, in my opinion.
I really enjoy building scripts for my own workflow.
I wish I had the skills to build things in the real world, but until then I’ll keep building stuff in the digital space only.
Although I love working with PHP and Laravel, it is Bash that has re-ignited a passion in me to just build stuff without thinking it’s got to work towards being some kind of “profitable” side project.
The structure of a newly-initialized Lupo website project is as follows:
Bash
.
./html/
./src/
./src/style.css
./templates/
./tmp/
All of your website source code lives within the ./src directory. This is where you structure your website however you want it to be structured in the final html.
You can write your pages / posts in markdown and lupo will convert them when building.
When building it into the final html, lupo will copy the structure of your ./src directory into your ./html directory, converting any markdown files (any files ending in .md) into html files.
Any JavaScript or CSS files are left alone and copied over in the same directory relative to the ./html root.
Starting a lupo website
Create a directory that you want to be your website project, and initialize it as a Lupo project:
Bash
mkdir ./my-website
cd ./my-website
lupo init
The init command will create the required directories, as well as a config file located at $HOME/.config/lupo/config.
You don’t need to worry about the config file just yet.
Create your homepage file and add some text to it:
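Bash
# A simple homepage written in markdown (the wording is up to you)
echo "Welcome to my new website." > ./src/index.md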
Now just run the build command to generate the final html:
Bash
lupo build
You should now have two files in your ./html directory: an index.html file and a style.css file.
The index.html was converted from your ./src/index.md file and moved into the root of the ./html directory. The style.css file was copied over verbatim to the html directory.
Viewing your site locally
Lupo doesn’t currently have a way to launch a local webserver, but you could open a browser and point the address bar to the root of your project ./html folder.
I use an nginx docker image to preview my site locally, and will build this functionality into lupo soon.
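If you want to do the same in the meantime, something like this rough one-liner works — the port mapping is just my choice:

Bash
# Serve the generated ./html directory at http://localhost:8080
docker run --rm -p 8080:80 -v "$PWD/html":/usr/share/nginx/html:ro nginx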
Page metadata
Each markdown page that you create can have an optional metadata section at the top of the page. This is known as “frontmatter”. Here is an example you could add to the top of your ./src/index.md file:
Markdown
---
title: My Super Homepage
---

Here is the normal page content
That will set the page’s title to “My Super Homepage”. This will also make the %title% variable available in your template files. (More on templates further down the page)
If you re-run the lupo build command and look again at your homepage, you should now see an <h1> tag with your title inside.
The Index page
You can generate an index of all of your pages with the index command:
Bash
lupo index
lupo build
Once you’ve built the website after running index, you will see a file at ./html/index/index.html. This is a simple index / archive of all of the pages on your website.
For pages with a title set in their metadata block, that title will be used in the index listing. For any pages without a title set, the uri to the page will be used instead.
Tag index pages
Within your page metadata block, you can also define a list of “tags” like so:
Markdown
---
title: My Super Page
tags:
    - tagone
    - tagtwo
    - anotherone
---

The page content.
When you run the lupo index command, it will also go through all of your pages and use the tags to generate “tag index pages”.
These are located at the following location/uri: ./html/tags/tagname/index.html.
These tag index pages will list all pages that contain that index’s tag.
Customizing your website
Lupo is very basic and doesn’t offer that much in the way of customization. And that is intentional – I built it as a simple tool for me and just wanted to share it with anyone else that may be interested.
That being said, there are currently two template files within the ./templates directory:
tags.template.html is used when generating the “tag index” pages and the main “index” page.
default.template.html is used for all other pages.
I am planning to add some flexibility to this in the near future and will update this page when added.
You are free to customize the templates as you want. And of course you can go wild with your CSS.
I’m also considering adding an opt-in css compile step to enable the use of something like sass.
New post helper
To help with the boilerplate of adding a new “post”, I added the following command:
Bash
lupo post
When run, it will ask you for a title. Once answered, it will generate the post src file and pre-fill the metadata block with that title and the current date and timestamp.
The post will be created at the following location:
Bash
./src/{year}/{month}/{date}/{timestamp}/{url-friendly-title}

# For example:
./src/2023/08/30/1693385086/lupo-static-site-generator/index.html
Page edit helper
At present, this requires you to have fzf installed. I am looking to try and replace that dependency with the find command.
To help find a page you want to edit, you can run the following command:
Bash
lupo edit
This will open up a fuzzy search finder where you can type to search for the page you want to edit.
The results will narrow down as you type.
When you press enter, it will attempt to open that source page in your system’s default editor, as defined in your $EDITOR environment variable.
Automatic rebuild on save
This requires you to have inotifywait installed.
Sometimes you will be working on a longer-form page or post, and want to refresh the browser to see your changes as you write it.
It quickly becomes tedious to have to keep running lupo build to see those changes.
So running the following command will “watch” your ./src directory for any changes and rebuild any file that is altered in any way. It will only rebuild that single file, not the entire project:
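Bash
lupo watch   # watches ./src and rebuilds each changed file as you save it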
Deploying to a server
This requires you to have rsync installed.
This assumes that you have a server set up and ready to host a static html website.
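The deploy itself is the push command (your server connection details need to be configured first — most likely in the lupo config file mentioned earlier, but check your own setup):

Bash
lupo push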
With any luck you should see the feedback for the files pushed to your remote server.
Assuming you have set up your domain name to point to your server correctly, you should be able to visit your website in a browser and see your newly-deployed site.
Going live
This is an experimental feature
If you’ve got the lupo watch and lupo push commands working, then the live command should also work:
Bash
lupo live
This will watch your project for changes, and recompile each updated page and push it to your server as it is saved.
The feedback is a bit verbose currently and the logic needs making a bit smarter. But it does currently work in its initial form.