This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.
This is a descriptive post and doesn't cover the technical details of setting up this infrastructure.
That will come in future posts.
Services / Websites I use
Digital Ocean
Within Digital Ocean, I use their managed Kubernetes, managed databases, DNS, S3-compatible Spaces with CDN, and container registry.
Github
Github hosts the origin repositories for all of my IaC code and project code. I also use the Actions CI features for automated tests and deployments.
Terraform
I use Terraform for creating my infrastructure, along with Terraform cloud for hosting my Terraform state files.
Setting up the infrastructure
I first set up my infrastructure in Digital Ocean and Github using Terraform.
This infrastructure includes the following resources in Digital Ocean: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also includes two Action secrets in Github: the Digital Ocean access token and the Digital Ocean registry endpoint.
After the initial infrastructure is set up (the Kubernetes cluster specifically), I use Helm to install the nginx-ingress-controller into the cluster.
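That Helm step is a short invocation. The commands below assume the official ingress-nginx chart and are illustrative of the step rather than my exact commands:

```bash
# Add the official ingress-nginx chart repository and install the controller.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install into its own namespace; release and namespace names are my choice here.
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```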
Setting up a Laravel project
I use Laravel Sail for local development.
For deployments I write a separate Dockerfile which builds off of a php-fpm container.
Any environment variables I need, I add as a Kubernetes secret via the kubectl command from my local machine.
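For example (the secret name, keys and values here are placeholders):

```bash
# Create a secret holding the app's environment variables.
# The deployment can then reference it via envFrom / secretKeyRef.
kubectl create secret generic laravel-env \
  --from-literal=APP_KEY="base64:changeme" \
  --from-literal=DB_PASSWORD="changeme"
```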
Kubernetes deployment file
Everything my Kubernetes cluster needs to know to deploy my Laravel project lives in a deployment.yml file in the project itself.
This file is used by the Github action responsible for deploying the project.
Github action workflows
I add two workflow files for the project inside the ./.github/workflows/ directory. These are:
ci.yml
This file runs the full test suite, along with pint and larastan.
deploy.yml
This file is triggered only on the main branch, after the Tests (ci) action has completed successfully.
It will build the container image and tag it with the current git sha.
Following that, it will install doctl and authenticate with my Digital Ocean account, using the action secret holding the access token I added during the initial Terraform stage.
Then it pushes that image to my Digital Ocean container registry.
The next step does a find and replace to the project’s deployment.yml file. I’ve included a snippet of that file below:
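Something like this stripped-down sketch shows the shape of it; only the <IMAGE> placeholder is taken from the workflow described here, the rest of the values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: laravel-app
          # Replaced by the workflow with the full registry path of the new image.
          image: <IMAGE>
          ports:
            - containerPort: 9000
```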
It replaces that <IMAGE> placeholder with the full path to the newly-created image. It uses the other Github secret that was added in the Terraform stage: the Digital Ocean Registry Endpoint.
Finally it sets up access to the Kubernetes cluster using the authenticated doctl command, before running the deployment.yml file with the kubectl command. After which, it just does a check to see that the deployment was a success.
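That final check is typically a rollout status call, which blocks until the deployment finishes and exits non-zero on failure (the deployment name here is a placeholder):

```bash
# Wait for the new pods to become ready; fails the workflow if the rollout fails.
kubectl rollout status deployment/laravel-app
```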
By trade I am a PHP developer. I’ve never done devops in a professional setting. However, for a while I have had a strange fascination with various continuous integration and deployment strategies I’ve seen at many of my places of work.
I’ve seen some very complicated setups over the years, which has created a mental block for me to really dig in and understand setting up integration and deployment workflows.
But in my current role at Geomiq, I had the opportunity of being shown a possible setup — specifically using Kubernetes. And that was sort of a gateway drug, which finally led me to getting a working workflow up and running.
I now want to start sharing what I have learnt and build out a fully-fledged deployment workflow. Not sure how many posts it will take, or what structure it will take, but my aim is to make devops and CI/CD as approachable as possible.
Terraform is a program that can be used to build your cloud-based infrastructure from configuration files that you write. It's part of what is referred to as "Infrastructure as Code" (IaC).
Instead of going into various cloud provider UI dashboards and clicking around to build your resources, Terraform can do all that provisioning for you. It uses the cloud provider APIs behind the scenes; you just declare exactly the infrastructure that you want to end up with.
In this guide, we will provision a simple Digital Ocean Server (a Droplet in Digital Ocean parlance) using Terraform from our local terminal.
If you don’t yet have a Digital Ocean account, feel free to use my referral link to set one up. With that link you’ll get $200 in credit to use over 60 days.
Setting up Terraform in 4 steps
1 :: Install terraform
Terraform is available to install from pretty much all package repositories out there.
Installing it should be as simple as running a one-line command in your terminal.
2 :: Configure any required cloud provider API tokens
In order to let the Terraform program make changes to your cloud provider account, you will need to set up API tokens and tell Terraform where to find them.
In this post I’ll only be setting up a single one for Digital Ocean.
3 :: Write your main.tf configuration file
A single main.tf file will be enough to get you something working.
Add all of your needed resources / infrastructure in it.
4 :: Run the apply command
By running the terraform apply command against your main.tf file, you can turn your empty cloud infrastructure into a working setup.
Step 1 :: Install Terraform
Terraform’s documentation details the numerous ways of getting it installed across operating systems.
I use Arch Linux and so install it like so:
```bash
sudo pacman -Sy terraform
```
You can check it is installed and discoverable on your system by checking the version you have installed:
```bash
terraform -v

# My Output:
# Terraform v1.6.4
# on linux_amd64
```
Now create an empty directory, which will be your "terraform project". It doesn't matter what you call the folder.
Then inside that folder create a file called main.tf. We'll come back to this file a little later.
Step 2 :: Configure any required cloud provider API tokens
Head to your Digital Ocean API Tokens dashboard and click “Generate New Token”. Give it a name, choose an expiry and make sure you click the “write” permission option. Click “generate token”.
There are a number of ways we can tell Terraform what our Digital Ocean API Token is:
Obviously we could hard code it for the purposes of just getting it running while learning, though I wouldn’t recommend this approach even in testing.
Another is to use Terraform-specific environment variables set on your system. This has been my approach in the past. However, I came to realize how unsafe this was, as every program you install has the potential to read your environment variables.
A third way is to pass it as a parameter when calling the apply command.
I will be opting for that third option, but I don't want to have that token saved in my history or have to pass it in every time I want to run a Terraform command.
So my solution is to write a small wrapper bash script that will read the contents of a file in my home directory (with my token in) and pass it as an argument to the Terraform apply command.
Creating a wrapper bash script to safely pass secret token to command
Create a file in your home directory called “terraform-test”. You can call it anything, just remember to reference it correctly when using it later in the guide.
Inside that file, paste only the API token that you got from your Digital Ocean API dashboard. Then save the file and close it.
Open a new file in the root of your Terraform project and add the following contents:
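A minimal version of that wrapper, assuming the token file from above is at ~/terraform-test and the script is saved as myterraformwrapper (the name used later in this guide), could look like this:

```bash
#!/usr/bin/env bash
# myterraformwrapper: run a terraform subcommand (apply, destroy, plan),
# appending the Digital Ocean token read from the file in the home directory.
set -euo pipefail

terraform "$@" -var "do_token=$(cat ~/terraform-test)"
```

Make it executable with `chmod +x ./myterraformwrapper`.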
This means you don't have to keep passing your Digital Ocean token in for every command, and you won't accidentally leak the token into your shell's environment variables.
We will use that file later in this guide.
Step 3 :: Write your main.tf configuration file
For this example, everything will be kept in a single file called main.tf. When you start working on bigger infrastructure plans, there is nothing stopping you from splitting out your configuration into multiple, single-purpose files.
At the top of the file is the terraform block. This sets up the various providers that we want to work with for building out our infrastructure. In this example we only need the digital ocean one.
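It looks something like this (the provider source is the standard digitalocean/digitalocean provider; the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
```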
variable declarations
Variable declarations can be used to keep sensitive information out of our configuration (and thus out of source control later), as well as making our configuration more reusable.
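In its simplest form, the declaration for our token is just an empty block:

```hcl
variable "do_token" {}
```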
Each of the variables that our configuration needs to run must be defined as a variable like above. You can define variables in a few different ways, but here I have opted for the simplest.
We can see that all our configuration needs is a do_token value passed to it.
provider setups
Each of the providers that we declare in our terraform block will probably need some kind of setup — such as an api token like our Digital Ocean example.
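For Digital Ocean, that setup is a single token argument fed from our declared variable:

```hcl
provider "digitalocean" {
  token = var.do_token
}
```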
For us we can see that the setting up of Digital Ocean’s provider needs only a token, which we are passing it from the variable that we will pass in via the cli command.
resource declarations
We then declare the “resources” that we want Terraform to create for us in our Digital Ocean account. In this case we just want it to create a single small droplet as a proof of concept.
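A sketch of that droplet resource (the name, image, region and size values are illustrative; swap in whichever slugs you prefer):

```hcl
resource "digitalocean_droplet" "test" {
  image  = "ubuntu-22-04-x64"
  name   = "terraform-test-droplet"
  region = "lon1"
  size   = "s-1vcpu-1gb"
}
```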
The values I have passed to the digitalocean_droplet resource would be great examples of where to use variables, potentially even with default placeholder values.
I have hard coded the values here for brevity.
Step 4 :: Run the apply command
Before running apply for the first time, we first need to initialize the project:
```bash
terraform init

# You should see some feedback starting with this:
# Terraform has been successfully initialized!
```
You can also run terraform plan before the apply command to see what Terraform will be provisioning for you. However, when running terraform apply, it shows you the plan and asks for explicit confirmation before building anything. So I rarely use plan.
If you run terraform apply, it will prompt you for any variables that your main.tf requires — in our case the do_token variable. We could type it / paste it in every time we want to run a command. But a more elegant solution would be to use that custom bash script we created earlier.
Assuming that bash script is in our current directory — the Terraform project folder — run the following:
```bash
./myterraformwrapper apply
```
This should display to you what it is planning to provision in your Digital Ocean account — a single Droplet.
Type the word “yes” and hit enter.
You should now see it giving you a status update every 10 seconds, ending in confirmation of the droplet being created.
If you head back over to your Digital Ocean account dashboard, you should see that new droplet sitting there.
Step 5 :: Bonus: Destroying resources
Just as Terraform can be used to create those resources, it can also be used to destroy them too. It goes without saying that you should always be mindful of just what you are destroying, but in this example we are just playing with a test droplet.
Run the following to destroy your newly-created droplet:
```bash
./myterraformwrapper destroy
```
Again, it will first show you what it is planning to change in your account — the destruction of that single droplet.
Type “yes” and hit enter to accept.
Next Steps
I love playing with Terraform, and will be sharing anything that I learn along my journey on my website.
You could start working through Terraform’s documentation to get a taste of what it can do for you.
You can even take a look at its excellent registry to see all of the providers that are available. Maybe even dig deep into the Digital Ocean provider documentation and see all of the available resources you could play with.
Just be careful how much you are creating and when testing don’t forget to run the destroy command when you’re done. The whole point of storing your infrastructure as code is that it is dead simple to provision and destroy it all.
Just don’t get leaving test resources up and potentially running yourself a huge bill.
The main logic for this Ansible configuration happens in the setup.yml file. This file can be called whatever you like as we’ll call it by name later on.
Installing Ansible
You can install Ansible with your package manager of choice.
I install it using pacman on Arch Linux:
```bash
sudo pacman -S ansible
```
The inventory.yml file
The inventory file is where I have set the relative configuration needed for the playbook.
The all key contains all of the host configurations (although I’m only using a single one).
Within that all key is vars.ansible_ssh_private_key_file which is just the local path to the ssh private key used to access the server.
This is the key I set up with Terraform in the previous guide.
Then the hosts key just contains the hosts I want to be able to target (I'm using the domain name that I set up in the previous Terraform guide).
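Pieced together from that description, my inventory.yml looks roughly like this (the key path and hostname are my own; yours will differ):

```yaml
all:
  vars:
    # Local path to the private key created in the previous Terraform guide.
    ansible_ssh_private_key_file: ~/.ssh/id_rsa.davidpeachme
  hosts:
    zet.davidpeach.me:
```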
The setup.yml file explained
The setup.yml file is what is known as an “Ansible Playbook”.
From my limited working knowledge of Ansible, a playbook is basically a set of tasks that are run against a server or a collection of servers.
In my own setup I am currently only running it against a single server, which I target via its domain name of "zet.davidpeach.me".
hosts: all tells it to run against all hosts that are defined in the ./inventory.yml file.
become: true says that Ansible will switch to the root user on the server (defined on the next line with user: root) before running the playbook tasks.
The vars_files: part lets you set relative paths to files containing variables that are used in the playbook and inside the file ./files/nginx.conf.j2.
I won't go through each of the variables but hopefully you can see what they are doing.
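Putting those pieces together, the top of a setup.yml like the one described here looks something like this (the vars file path is an assumption based on the paths mentioned in this post):

```yaml
- hosts: all
  become: true
  # The post refers to this as "user: root"; remote_user is the current keyword.
  remote_user: root
  vars_files:
    - vars/default.yml
  tasks:
    # Each task described below lives here, in order.
```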
The Playbook Tasks
Each of the tasks in the Playbook has a descriptive title that hopefully does well in explaining what the steps are doing.
The key-value pairs of configuration after each of the task titles are pre-defined settings available to use in Ansible.
The tasks read from top to bottom and essentially automate the steps that normally need to be manually done when preparing a server.
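Running the playbook is a single command from the project directory, pointing at the playbook and inventory files described above:

```bash
ansible-playbook ./setup.yml -i ./inventory.yml
```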
This command should start Ansible off. You should get the usual message about trusting the target host when first connecting to the server. Just answer “yes” and press enter.
You should now see the output for each step defined in the playbook.
The server should now be ready to deploy to.
Testing your webserver
In ./files/nginx.conf.j2 there is a root directive on line 3. For me this is set to /var/www/{{ http_host }} (http_host is a variable set in the vars/default.yml file).
SSH onto the server, using the private SSH key from the keypair I am using (see the Terraform guide for reference).
```bash
ssh -i ~/.ssh/id_rsa.davidpeachme zet.davidpeach.me
```
Then on the server, create a basic index.html file in the website root defined in the default nginx file:
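For example, assuming http_host is set to the domain used in this guide:

```bash
# On the server: create the web root and a minimal page.
mkdir -p /var/www/zet.davidpeach.me
echo "hello world" > /var/www/zet.davidpeach.me/index.html
```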
Now, going to your website url in a browser, you should be able to see the text “hello world” in the top left.
The server is ready to host a static html website.
Next Step
You can use whatever method you prefer to get your html files on to your server.
You could use rsync, scp, an overly-complicated CI pipeline, or, if you're using lupo, you could have lupo deploy it straight to your server for you.