Lupo static site generator

What is Lupo?

Lupo is a simple static site generator, written in Bash.

I built it for myself so that I could publish a simple website of my own directly from the command line.

It was inspired by Rob Muhlestein and his approach to the Zettelkasten method.

Installation

Running the following commands will install the lupo Bash script to this location on your system: $HOME/.local/bin/lupo

If you add the $HOME/.local/bin directory to your $PATH, then you can execute the lupo command from anywhere.
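A hedged example of that $PATH addition (which startup file you use is up to you; ~/.bashrc is assumed here):

```shell
# Make scripts in ~/.local/bin callable from anywhere.
# Add this line to your ~/.bashrc (or your shell's equivalent):
export PATH="$HOME/.local/bin:$PATH"
```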

I chose that directory as it seems to be a pretty standard location for user-specific scripts to live.

Bash
git clone https://github.com/davidpeach/lupo
cd ./lupo
./install
cd ..
rm -rf ./lupo

Anatomy of a Lupo website

The structure of a newly-initialized Lupo website project is as follows:

Bash
.
./html/
./src/
./src/style.css
./templates/
./tmp/

All of your website source code lives within the ./src directory. This is where you structure your website however you want it to be structured in the final html.

You can write your pages / posts in markdown and lupo will convert them when building.

When building it into the final html, lupo will copy the structure of your ./src directory into your ./html directory, converting any markdown files (any files ending in .md) into html files.

Any JavaScript or CSS files are left alone and copied over in the same directory relative to the ./html root.
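The copy-and-convert step described above can be sketched like this. It is a structure-only illustration: it mirrors the tree and renames .md to .html without doing real markdown conversion, and it is not lupo's actual code:

```shell
#!/usr/bin/env bash
# Structure-only sketch: mirror ./src into ./html, renaming .md -> .html.
# (The real lupo build also converts the markdown content; omitted here.)
set -euo pipefail
mkdir -p src/notes html
echo '# Hello' > src/index.md
echo 'body {}' > src/style.css
echo '# Note'  > src/notes/todo.md

(cd src && find . -type f | while read -r f; do
  case "$f" in
    *.md) out="../html/${f%.md}.html" ;;  # markdown files become .html
    *)    out="../html/$f" ;;             # everything else is copied verbatim
  esac
  mkdir -p "$(dirname "$out")"
  cp "$f" "$out"
done)
```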

Starting a lupo website

Create a directory that you want to be your website project, and initialize it as a Lupo project:

Bash
mkdir ./my-website
cd ./my-website
lupo init

The init command will create the required directories, as well as a config file located at $HOME/.config/lupo/config.

You don’t need to worry about the config file just yet.

Create your homepage file and add some text to it:

Bash
touch ./src/index.md
echo "Hello World" > ./src/index.md

Now just run the build command to generate the final html:

Bash
lupo build

You should now have two files in your ./html directory: an index.html file and a style.css file.

The index.html was converted from your ./src/index.md file and moved into the root of the ./html directory. The style.css file was copied over verbatim to the html directory.

Viewing your site locally

Lupo doesn’t currently have a way to launch a local webserver, but you could open a browser and point the address bar at your project’s ./html folder.

I use an nginx Docker image to preview my site locally, and will build this functionality into lupo soon.

Page metadata

Each markdown page that you create can have an optional metadata section at the top of the page. This is known as “frontmatter”. Here is an example you could add to the top of your ./src/index.md file:

Markdown
---
title: My Super Homepage
---

Here is the normal page content

That will set the page’s title to “My Super Homepage”. This will also make the %title% variable available in your template files. (More on templates further down the page)

If you re-run the lupo build command and look again at your homepage, you should now see an <h1> tag with your title inside.

The Index page

You can generate an index of all of your pages with the index command:

Bash
lupo index

lupo build

Once you’ve built the website after running index, you will see a file at ./html/index/index.html. This is a simple index / archive of all of the pages on your website.

For pages with a title set in their metadata block, that title will be used in the index listing. For any pages without a title set, the uri to the page will be used instead.
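That title-or-uri fallback can be sketched in a few lines of shell. This assumes the frontmatter format shown earlier and is not lupo's actual implementation:

```shell
# Sketch: use the frontmatter title if present, else fall back to the path.
mkdir -p src
printf -- '---\ntitle: My Super Homepage\n---\n\nHello\n' > src/index.md

page="src/index.md"
# Pull "title:" out of the frontmatter block between the two --- lines.
title="$(sed -n '/^---$/,/^---$/s/^title:[[:space:]]*//p' "$page")"
echo "${title:-$page}"
```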


Tag index pages

Within your page metadata block, you can also define a list of “tags” like so:

Markdown
---
title: My Super Page
tags:
    - tagone
    - tagtwo
    - anotherone
---

The page content.

When you run the lupo index command, it will also go through all of your pages and use the tags to generate “tag index pages”.

These are located at ./html/tags/{tagname}/index.html.

These tag index pages will list all pages that contain that index’s tag.
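A sketch of how the pages for a given tag could be collected, assuming the frontmatter list format above (illustrative only; not lupo's implementation):

```shell
# Sketch: list every page carrying the tag "tagone".
mkdir -p src
printf -- '---\ntitle: A\ntags:\n    - tagone\n---\n' > src/a.md
printf -- '---\ntitle: B\ntags:\n    - tagtwo\n---\n' > src/b.md

# grep needs "--" because the pattern itself starts with a dash.
tagged="$(grep -rl -- '- tagone' src)"
echo "$tagged"
```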

Customizing your website

Lupo is very basic and doesn’t offer that much in the way of customization. And that is intentional – I built it as a simple tool for me and just wanted to share it with anyone else that may be interested.

That being said, there are currently two template files within the ./templates directory:

Bash
./templates/default.template.html
./templates/tags.template.html

tags.template.html is used when generating the “tag index” pages and the main “index” page.

default.template.html is used for all other pages.

I am planning to add some flexibility to this in the near future and will update this page when added.

You are free to customize the templates as you want. And of course you can go wild with your CSS.

I’m also considering adding an opt-in CSS compile step to enable the use of something like Sass.

New post helper

To help with the boilerplate of adding a new “post”, I added the following command:

Bash
lupo post

When run, it will ask you for a title. Once answered, it will generate the post src file and pre-fill the metadata block with that title and the current date and timestamp.

The post will be created at the following location:

Bash
./src/{year}/{month}/{date}/{timestamp}/{url-friendly-title}/index.md

# For example:
./src/2023/08/30/1693385086/lupo-static-site-generator/index.md
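The slug and path generation might look something like this; the slug rule here is an assumption for illustration, not lupo's actual code:

```shell
# Sketch: build the date-based post path from a title.
title="Lupo Static Site Generator"
# Lowercase, squeeze everything non-alphanumeric into single dashes,
# and strip any trailing dash.
slug="$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' | sed 's/-$//')"
dir="./src/$(date +%Y)/$(date +%m)/$(date +%d)/$(date +%s)/$slug"
echo "$dir"
```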

Page edit helper

At present, this requires you to have fzf installed. I am looking to replace that dependency with the find command.

To help find a page you want to edit, you can run the following command:

Bash
lupo edit

This will open up a fuzzy search finder where you can type to search for the page you want to edit.

The results will narrow down as you type.

When you press enter, it will attempt to open that source page in your system’s default editor, as defined in your $EDITOR environment variable.
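Since the fzf dependency may be swapped for find, a minimal find-based fallback might look like this (a sketch, not lupo's code):

```shell
# Sketch: pick a page by substring match with find, then open it in $EDITOR.
mkdir -p src/2023
echo 'hi' > src/2023/my-post.md

query="my-post"
match="$(find ./src -type f -name "*$query*" | head -n 1)"
echo "$match"   # this is the page that would be opened with "${EDITOR:-vi}"
```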

Automatic rebuild on save

This requires you to have inotifywait installed.

Sometimes you will be working on a longer-form page or post, and want to refresh the browser to see your changes as you write it.

It quickly becomes tedious to have to keep running lupo build to see those changes.

So running the following command will “watch” your ./src directory for any changes, and rebuild any file that is altered in any way. It will only rebuild that single file; not the entire project.

Bash
lupo watch

Deploying to a server

This requires you to have rsync installed.

This assumes that you have a server setup and ready to host a static html website.

I covered how I set up my own server in this Terraform post and this Ansible post.

All that lupo needs to be able to deploy your site is for you to add the required settings to your config file at $HOME/.config/lupo/config:

  • remote_user – This is the user who owns the directory where the html files will be sent to.
  • ssh_identity_key – This is the path to the private key file on your computer that pairs with the public key on your remote server.
  • domain_name – The domain name pointing to your server.
  • remote_directory – The full path to the directory where your html files are served from on your server.

For example:

Bash
remote_user: david
ssh_identity_key: ~/.ssh/id_rsa
domain_name: example.com
remote_directory: /var/www/example.com

Then run the following command:

Bash
lupo push

With any luck, you should see feedback as the files are pushed to your remote server.

Assuming you have set up your domain name to point to your server correctly, you should be able to visit your website in a browser and see your newly-deployed site.

Going live

This is an experimental feature

If you’ve got the lupo watch and lupo push commands working, then the live command should also work:

Bash
lupo live

This will watch your project for changes, and recompile each updated page and push it to your server as it is saved.

The feedback is currently a bit verbose and the logic needs to be made a bit smarter, but it does work in its initial form.

Setting up a Digital Ocean droplet for a Lupo website with Terraform

Overview of this guide

My Terraform Repository used in this guide

Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider’s dashboard and manually clicking buttons and setting up things yourself.

This is known as “Infrastructure as Code”.

It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.

This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers set up to point to Digital Ocean.

You can then build upon those foundations and work on building out your own desired infrastructures.

The Terraform Flow

As a brief outline, here is what will happen when working with Terraform; hopefully it gives you a broad picture from which I can fill in the blanks below.

  • Firstly we write a configuration file that defines the infrastructure that we want.
  • Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
  • Finally we run the terraform plan command to test our infrastructure configuration, and then terraform apply to make it all live.

Installing the Terraform program

Terraform has installation instructions, but you may be able to find it with your package manager.

Here I am installing it on Arch Linux, by the way, with pacman:

Bash
sudo pacman -S terraform

Setting the required variables

The configuration file for the infrastructure I am using requires only a single variable from outside. That is the do_token.

This is created manually in the API section of the Digital Ocean dashboard. Create yours and keep its value to hand for usage later.

Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then use them when prompted by the terraform command. This is slightly more long-winded than just setting a terraform-specific environment variable in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.

Creating an ssh key

In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought I’d create a key pair specific for my website deployment.

Bash
ssh-keygen -t rsa

I give it a different name so as to not override my standard id_rsa one. I call it id_rsa.davidpeachme just so I know at a glance which one is my website server key.
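A non-interactive equivalent, if you would rather pass the file name and (empty) passphrase on the command line; the key path is illustrative:

```shell
# Generate a key pair at a specific path without prompts.
# -f sets the output file, -N "" sets an empty passphrase, -q keeps it quiet.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa.davidpeachme" -N "" -q
```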

Describing your desired infrastructure with code

Terraform uses a declarative language, as opposed to an imperative one.

What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.

You don’t need to be concerned with the nitty-gritty of how it is achieved.

I have a real-life example that will show you exactly what a minimal configuration can look like.

Clone / fork the repository for my website server.

Explanation of my terraform repository

HCL
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {}

# Variables whose values are defined in ./terraform.tfvars
variable "domain_name" {}
variable "droplet_image" {}
variable "droplet_name" {}
variable "droplet_region" {}
variable "droplet_size" {}
variable "ssh_key_name" {}
variable "ssh_local_path" {}

provider "digitalocean" {
  token = var.do_token
}

The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.

Since I’m only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.

The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.

Following that are the variables that I have defined myself in the ./terraform.tfvars file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.
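For reference, a terraform.tfvars covering those variables might look like the following; the variable names come from the block above, but every value here is only an example to swap for your own:

```hcl
# Example values only: use your own domain, image slug, region, size and key.
domain_name    = "example.com"
droplet_image  = "ubuntu-22-04-x64"
droplet_name   = "web"
droplet_region = "lon1"
droplet_size   = "s-1vcpu-1gb"
ssh_key_name   = "davidpeachme"
ssh_local_path = "~/.ssh/id_rsa.davidpeachme.pub"
```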

The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.

HCL
resource "digitalocean_ssh_key" "ssh_key" {
  name       = var.ssh_key_name
  public_key = file(var.ssh_local_path)
}

Here is the first resource that I am telling terraform to create. It’s taking a public key on my local filesystem and sending it to Digital Ocean.

This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.

I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.

I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.

HCL
resource "digitalocean_droplet" "droplet" {
  image    = var.droplet_image
  name     = var.droplet_name
  region   = var.droplet_region
  size     = var.droplet_size
  ssh_keys = [digitalocean_ssh_key.ssh_key.fingerprint]
}

Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.

HCL
data "digitalocean_domain" "domain" {
  name = var.domain_name
}

This block is a little different. Here I am using the data property to grab information about something that already exists in my Digital Ocean account.

I have already set up my domain in Digital Ocean’s networking area.

This is the overarching domain itself – not the specific A record that will point to the server.

The reason I’m doing it this way is that I already have mailbox settings and TXT records that work, so I don’t want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run terraform destroy.

HCL
resource "digitalocean_record" "record" {
  domain = data.digitalocean_domain.domain.id
  type   = "A"
  name   = "@"
  ttl    = 60
  value  = "${digitalocean_droplet.droplet.ipv4_address}"
}

The final block creates the actual A record with my existing domain settings.

It uses the domain id given back by the data block I defined above, and the IP address of the created droplet for the A record value.

Testing and Running the config to create the infrastructure

If you now go into the root of your terraform project, initialize it (this downloads the required providers), and then run a plan, you should see a write-up of what it intends to create:

Bash
terraform init
terraform plan

If the output looks okay to you, then type the following command and enter “yes” when it asks you:

Bash
terraform apply

This should create the three items of infrastructure we have defined.

Next Step

Next we need to set that server up with the required software needed to run a static html website.

I will be doing this with a program called Ansible.

I’ll be writing up those steps in a zet very soon.