Category: Programming

Linux, Laravel, PHP. My notes and mini-guides regarding development-related things.

  • 📂

    Lupo static site generator

    What is Lupo?

    Lupo is a simple static site generator, written in Bash.

    I built it for myself so that I could publish a simple website of my own directly from the command line.

    It was inspired by Rob Muhlestein and his approach to the Zettelkasten method.

    Installation

    Running the following set of commands will install the lupo bash script to this location on your system: $HOME/.local/bin/lupo

    If you add the $HOME/.local/bin directory to your $PATH, then you can execute the lupo command from anywhere.

    I chose that directory as it seems to be a pretty standard location for user-specific scripts to live.

    Bash
    git clone https://github.com/davidpeach/lupo
    cd ./lupo
    ./install
    cd ..
    rm -rf ./lupo
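
    If the $HOME/.local/bin directory isn't already on your $PATH, a line like the following in your ~/.bashrc will add it (a general example, not something the installer does for you):

    Bash
    export PATH="$HOME/.local/bin:$PATH"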

    Anatomy of a Lupo website

    The structure of a newly-initialized Lupo website project is as follows:

    Bash
    .
    ./html/
    ./src/
    ./src/style.css
    ./templates/
    ./tmp/

    All of your website source code lives within the ./src directory. This is where you structure your website however you want it to be structured in the final html.

    You can write your pages / posts in markdown and lupo will convert them when building.

    When building it into the final html, lupo will copy the structure of your ./src directory into your ./html directory, converting any markdown files (any files ending in .md) into html files.

    Any JavaScript or CSS files are left alone and copied over to the same relative directory under the ./html root.
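
    As an illustration (the file names here are hypothetical), the source-to-output mapping looks like this:

    Bash
    ./src/index.md           ->  ./html/index.html
    ./src/notes/my-note.md   ->  ./html/notes/my-note.html
    ./src/style.css          ->  ./html/style.css   (copied as-is)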

    Starting a lupo website

    Create a directory that you want to be your website project, and initialize it as a Lupo project:

    Bash
    mkdir ./my-website
    cd ./my-website
    lupo init

    The init command will create the required directories, including a file located at $HOME/.config/lupo/config.

    You don't need to worry about the config file just yet.

    Create your homepage file and add some text to it:

    Bash
    touch ./src/index.md
    echo "Hello World" > ./src/index.md

    Now just run the build command to generate the final html:

    Bash
    lupo build

    You should now have two files in your ./html directory: an index.html file and a style.css file.

    The index.html was converted from your ./src/index.md file and moved into the root of the ./html directory. The style.css file was copied over verbatim to the html directory.

    Viewing your site locally

    Lupo doesn't currently have a way to launch a local webserver, but you could open a browser and point the address bar to the root of your project ./html folder.

    I use an nginx docker image to preview my site locally, and will build this functionality into lupo soon.
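
    This isn't part of lupo itself, but if you have docker installed, something along these lines will serve the ./html folder at http://localhost:8080 (the official nginx image serves files from /usr/share/nginx/html):

    Bash
    docker run --rm -p 8080:80 -v "$PWD/html":/usr/share/nginx/html:ro nginx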

    Page metadata

    Each markdown page that you create can have an optional metadata section at the top of the page. This is known as "frontmatter". Here is an example you could add to the top of your ./src/index.md file:

    Markdown
    ---
    title: My Super Homepage
    ---
    
    Here is the normal page content

    That will set the page's title to "My Super Homepage". This will also make the %title% variable available in your template files. (More on templates further down the page.)

    If you re-run the lupo build command, and look again at your homepage, you should now see an <h1> tag with your title inside.
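
    As a rough illustration only (the markup in the stock templates will differ; %title% is the only variable named here), a template can drop that value into the page like so:

    HTML
    <h1>%title%</h1>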

    The Index page

    You can generate an index of all of your pages with the index command:

    Bash
    lupo index
    
    lupo build

    Once you've built the website after running index, you will see a file at ./html/index/index.html. This is a simple index / archive of all of the pages on your website.

    For pages with a title set in their metadata block, that title will be used in the index listing. For any pages without a title set, the uri to the page will be used instead.

    @todo ADD SEARCH to source and add to docs here.

    Tag index pages

    Within your page metadata block, you can also define a list of "tags" like so:

    Markdown
    ---
    title: My Super Page
    tags:
        - tagone
        - tagtwo
        - anotherone
    ---
    
    The page content.

    When you run the lupo index command, it will also go through all of your pages and use the tags to generate "tag index pages".

    These are located at the following location/uri: ./html/tags/tagname/index.html.

    These tag index pages will list all pages that contain that index's tag.

    Customizing your website

    Lupo is very basic and doesn't offer that much in the way of customization. That is intentional: I built it as a simple tool for myself and just wanted to share it with anyone else who may be interested.

    That being said, there are currently two template files within the ./templates directory:

    Bash
    ./templates/default.template.html
    ./templates/tags.template.html

    tags.template.html is used when generating the "tag index" pages and the main "index" page.

    default.template.html is used for all other pages.

    I am planning to add some flexibility to this in the near future and will update this page when added.

    You are free to customize the templates as you want. And of course you can go wild with your CSS.

    I'm also considering adding an opt-in CSS compile step to enable the use of something like Sass.

    New post helper

    To help with the boilerplate of adding a new "post", I added the following command:

    Bash
    lupo post

    When run, it will ask you for a title. Once answered, it will generate the post src file and pre-fill the metadata block with that title and the current date and timestamp.

    The post will be created at the following location:

    Bash
    ./src/{year}/{month}/{date}/{timestamp}/{url-friendly-title}
    
    # For example:
    ./src/2023/08/30/1693385086/lupo-static-site-generator/index.html

    Page edit helper

    At present, this requires you to have fzf installed. I am looking to try to replace that dependency with the find command.

    To help find a page you want to edit, you can run the following command:

    Bash
    lupo edit

    This will open up a fuzzy search finder where you can type to search for the page you want to edit.

    The results will narrow down as you type.

    When you press enter, it will attempt to open that source page in your system's default editor, as defined in your $EDITOR environment variable.
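
    If you don't already have a default editor set, you can export one in your ~/.bashrc (the editor name here is just an example):

    Bash
    export EDITOR=nvim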

    Automatic rebuild on save

    This requires you to have inotifywait installed.

    Sometimes you will be working on a longer-form page or post, and want to refresh the browser to see your changes as you write it.

    It quickly becomes tedious to have to keep running lupo build to see those changes.

    So running the following command will "watch" your ./src directory for any changes and rebuild any file that is altered (only that single file is rebuilt, not the entire project):
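
    Bash
    lupo watch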

    Deploying to a server

    This requires you to have rsync installed.

    This assumes that you have a server setup and ready to host a static html website.

    I covered how I set up my own server in this Terraform post and this Ansible post.

    All that lupo needs to be able to deploy your site is for you to add the required settings to your config file at $HOME/.config/lupo/config:

    • remote_user – This is the user who owns the directory where the html files will be sent to.
    • ssh_identity_key – This is the path to the private key file on your computer that pairs with the public key on your remote server.
    • domain_name – The domain name pointing to your server.
    • remote_directory – The full path to the directory where your html files are served from on your server.

    For example:

    Bash
    remote_user: david
    ssh_identity_key: ~/.ssh/id_rsa
    domain_name: example.com
    remote_directory: /var/www/example.com

    Then run the following command:

    Bash
    lupo push

    With any luck, you should see feedback showing the files being pushed to your remote server.

    Assuming you have set up your domain name to point to your server correctly, you should be able to visit your website in a browser and see your newly-deployed site.

    Going live

    This is an experimental feature

    If you've got the lupo watch and lupo push commands working, then the live command should also work:

    Bash
    lupo live

    This will watch your project for changes, and recompile each updated page and push it to your server as it is saved.

    The feedback is a bit verbose currently, and the logic needs to be made a bit smarter, but it does work in its initial form.


  • 📂

    Using ansible to prepare a digital ocean droplet to host a static website

    Preface

    This guide comes logically after the previous one I wrote about setting up a digital ocean server with Terraform.

    You can clone my website's ansible repository for reference.

    The main logic for this Ansible configuration happens in the setup.yml file. This file can be called whatever you like, as we'll call it by name later on.

    Installing Ansible

    You can install Ansible with your package manager of choice.

    I install it using pacman on Arch Linux:

    Bash
    sudo pacman -S ansible

    The inventory.yml file

    The inventory file is where I have set the relative configuration needed for the playbook.

    The all key contains all of the host configurations (although I'm only using a single one).

    Within that all key is vars.ansible_ssh_private_key_file which is just the local path to the ssh private key used to access the server.

    This is the key I set up with Terraform in the previous guide.

    Then the hosts key just contains the hosts I want to be able to target (I'm using the domain name that I set up in the previous Terraform guide).
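
    Based on that description, the inventory.yml takes roughly this shape (a sketch rather than a copy from the repo; swap in your own key path and host):

    YAML
    all:
      vars:
        ansible_ssh_private_key_file: ~/.ssh/id_rsa.davidpeachme
      hosts:
        zet.davidpeach.me: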

    The setup.yml file explained

    The setup.yml file is what is known as an โ€œAnsible Playbookโ€.

    From my limited working knowledge of Ansible, a playbook is basically a set of tasks that are run against a server or a collection of servers.

    In my own case, I am currently only running it against a single server, which I am targeting via its domain name of "zet.davidpeach.me".

    YAML
    - hosts: all
      become: true
      user: root
      vars_files:
        - vars/default.yml
    

    This first section is the setup of the playbook.

    hosts: all tells it to run against all hosts that are defined in the ./inventory.yml file.

    become: true is saying that Ansible will switch to the root user on the server (defined on the next line with user: root) before running the playbook tasks.

    The vars_files: part lets you set relative paths to files containing variables that are used in the playbook and inside the file ./files/nginx.conf.j2.

    I won't go through each of the variables, but hopefully you can see what they are doing.

    The Playbook Tasks

    Each of the tasks in the Playbook has a descriptive title that hopefully does well in explaining what the steps are doing.

    The key-value pairs of configuration after each of the task titles are pre-defined settings available to use in Ansible.

    The tasks read from top to bottom and essentially automate the steps that normally need to be manually done when preparing a server.
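
    Purely as an illustration of the shape of a task (not necessarily one lifted from my playbook), installing Nginx with the apt module looks like this:

    YAML
    - name: Install Nginx
      apt:
        name: nginx
        state: latest
        update_cache: yes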

    Running the playbook

    Bash
    cd ansible-project
    
    ansible-playbook setup.yml -i inventory.yml

    This command should start Ansible off. You should get the usual message about trusting the target host when first connecting to the server. Just answer "yes" and press enter.

    You should now see the output for each step defined in the playbook.

    The server should now be ready to deploy to.

    Testing your webserver

    In the ./files/nginx.conf.j2 file there is a root directive on line 3. For me this is set to /var/www/{{ http_host }} (http_host is a variable set in the vars/default.yml file).

    SSH on to the server, using the private ssh key from the keypair I am using (see the Terraform guide for reference).

    Bash
    ssh -i ~/.ssh/id_rsa.davidpeachme zet.davidpeach.me

    Then on the server, create a basic index.html file in the website root defined in the default nginx file:

    Bash
    cd /var/www/zet.davidpeach.me
    touch index.html
    echo "hello world" > index.html

    Now, going to your website url in a browser, you should be able to see the text "hello world" in the top left.

    The server is ready to host a static html website.

    Next Step

    You can use whatever method you prefer to get your html files on to your server.

    You could use rsync, scp, an overly-complicated CI pipeline, or, if you're using lupo, you could have lupo deploy it straight to your server for you.


  • 📂

    Setting up a Digital Ocean droplet for a Lupo website with Terraform

    Overview of this guide

    My Terraform Repository used in this guide

    Terraform is a program that enables you to set up all of your cloud-based infrastructure with configuration files. This is opposed to the traditional way of logging into a cloud provider's dashboard and manually clicking buttons and setting up things yourself.

    This is known as "Infrastructure as Code".

    It can be intimidating to get started, but my aim with this guide is to get you to the point of being able to deploy a single server on Digital Ocean, along with some surrounding items like a DNS A record and an ssh key for remote access.

    This guide assumes that you have a Digital Ocean account and that you also have your domain and nameservers set up to point to Digital Ocean.

    You can then build upon those foundations and work on building out your own desired infrastructures.

    The Terraform Flow

    As a brief outline, here is what will happen when working with Terraform. This should give you a broad picture, and I will fill in the blanks below.

    • Firstly we write a configuration file that defines the infrastructure that we want.
    • Then we need to set up any access tokens, ssh keys and terraform variables. Basically anything that our Terraform configuration needs to be able to complete its task.
    • Finally we run the terraform plan command to test our infrastructure configuration, and then terraform apply to make it all live.

    Installing the Terraform program

    Terraform has installation instructions, but you may be able to find it with your package manager.

    Here I am installing it on Arch Linux, by the way, with pacman

    Bash
    sudo pacman -S terraform

    Setting the required variables

    The configuration file for the infrastructure I am using requires only a single variable from outside. That is the do_token.

    This is created manually in the API section of the Digital Ocean dashboard. Create yours and keep its value to hand for usage later.

    Terraform accepts variables in a number of ways. I opt to save my tokens in my local password manager, and then use them when prompted by the terraform command. This is slightly more long-winded than just setting a Terraform-specific environment variable in your bashrc. However, I recently learned from rwxrob how much of a bad idea that is.

    Creating an ssh key

    In the main.tf file, I could have set the ssh public key path to my existing one. However, I thought I'd create a key pair specific to my website deployment.

    Bash
    ssh-keygen -t rsa

    I give it a different name so as to not override my standard id_rsa one. I call it id_rsa.davidpeachme just so I know which is my website server one at a glance.
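
    As an aside, you can set the file name at generation time with the -f flag instead of waiting for the interactive prompt (the path here is just where I keep mine):

    Bash
    ssh-keygen -t rsa -f ~/.ssh/id_rsa.davidpeachme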

    Describing your desired infrastructure with code

    Terraform uses a declarative language, as opposed to an imperative one.

    What this means for you, is that you write configuration files that describe the state that you want your infrastructure to be in. For example if you want a single server, you just add the server spec in your configuration and Terraform will work out how best to create it for you.

    You don't need to be concerned with the nitty-gritty of how it is achieved.

    I have a real-life example that will show you exactly what a minimal configuration can look like.

    Clone / fork the repository for my website server.

    Explanation of my Terraform repository

    HCL
    terraform {
      required_providers {
        digitalocean = {
          source = "digitalocean/digitalocean"
          version = "~> 2.0"
        }
      }
    }
    
    variable "do_token" {}
    
    # Variables whose values are defined in ./terraform.tfvars
    variable "domain_name" {}
    variable "droplet_image" {}
    variable "droplet_name" {}
    variable "droplet_region" {}
    variable "droplet_size" {}
    variable "ssh_key_name" {}
    variable "ssh_local_path" {}
    
    provider "digitalocean" {
      token = var.do_token
    }

    The first block tells terraform which providers I want to use. Providers are essentially the third-party APIs that I am going to interact with.

    Since I'm only creating a Digital Ocean droplet, and a couple of surrounding resources, I only need the digitalocean/digitalocean provider.

    The second block above tells terraform that it should expect – and require – a single variable to be able to run. This is the Digital Ocean Access Token that was obtained above in the previous section, from the Digital Ocean dashboard.

    Following that are the variables that I have defined myself in the ./terraform.tfvars file. That tfvars file would normally be kept out of a public repository. However, I kept it in so that you could hopefully just fork my repo and change those values for your own usage.

    The bottom block is the setting up of the provider. Basically just passing the access token into the provider so that it can perform the necessary API calls it needs to.
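
    For reference, the terraform.tfvars file mentioned above is just a set of key/value assignments. The values below are purely illustrative (the real ones live in the repo, and the image, region and size slugs must be valid Digital Ocean slugs):

    HCL
    domain_name    = "example.com"
    droplet_image  = "ubuntu-22-04-x64"
    droplet_name   = "my-website"
    droplet_region = "lon1"
    droplet_size   = "s-1vcpu-1gb"
    ssh_key_name   = "website-deploy-key"
    ssh_local_path = "~/.ssh/id_rsa.davidpeachme.pub"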

    HCL
    resource "digitalocean_ssh_key" "ssh_key" {
      name       = var.ssh_key_name
      public_key = file(var.ssh_local_path)
    }

    Here is the first resource that I am telling Terraform to create. It's taking a public key from my local filesystem and sending it to Digital Ocean.

    This is needed for ssh access to the server once it is ready. However, it is added to the root account on the server.

    I use Ansible for setting up the server with the required programs once Terraform has built it. So this ssh key is actually used by Ansible to gain access to do its thing.

    I will have a separate guide soon on how I use ansible to set my server up ready to host my static website.

    HCL
    resource "digitalocean_droplet" "droplet" {
      image    = var.droplet_image
      name     = var.droplet_name
      region   = var.droplet_region
      size     = var.droplet_size
      ssh_keys = [digitalocean_ssh_key.ssh_key.fingerprint]
    }

    Here is the meat of the infrastructure – the droplet itself. I am telling it what operating system image I want to use; what size and region I want; and am telling it to make use of the ssh key I added in the previous block.

    HCL
    data "digitalocean_domain" "domain" {
      name = var.domain_name
    }

    This block is a little different. Here I am using the data property to grab information about something that already exists in my Digital Ocean account.

    I have already set up my domain in Digital Ocean's networking area.

    This is the overarching domain itself – not the specific A record that will point to the server.

    The reason I'm doing it this way is that I already have working mailbox settings and TXT records, and I don't want them to be potentially torn down and re-created with the rest of my infrastructure if I ever run terraform destroy.

    HCL
    resource "digitalocean_record" "record" {
      domain = data.digitalocean_domain.domain.id
      type   = "A"
      name   = "@"
      ttl    = 60
      value  = "${digitalocean_droplet.droplet.ipv4_address}"
    }

    The final block creates the actual A record with my existing domain settings.

    It uses the domain ID given back by the data block I defined above, and the IP address of the created droplet as the A record value.

    Testing and Running the config to create the infrastructure

    If you now go into the root of your terraform project and run the following command, you should see a write-up of what Terraform intends to create:

    Bash
    terraform plan

    If the output looks okay to you, then type the following command and enter โ€œyesโ€ when it asks you:

    Bash
    terraform apply

    This should create the three items of infrastructure we have defined.

    Next Step

    Next we need to set that server up with the required software needed to run a static html website.

    I will be doing this with a program called Ansible.

    I'll be writing up those steps in a zet very soon.


  • 📂

    Beyond Aliases — define your development workflow with custom bash scripts

    Being a Linux user for just over 10 years now, I can’t imagine my life without my aliases.

    Aliases help with removing the repetition of commonly-used commands on a system.

    For example, here’s some of my own that I use with the Laravel framework:

    Bash
    alias a="php artisan"
    alias sail='[ -f sail ] && bash sail || bash vendor/bin/sail'
    alias stan="./vendor/bin/phpstan analyse"

    You can set these in your ~/.bashrc file. See mine in my dotfiles as a fuller example.

    However, I recently came to want greater control over my development workflow. And so, with the help of videos by rwxrob, I came to embrace the idea of learning bash, and writing my own little scripts to help in various places in my workflow.

    A custom bash script

    For the example here, I’ll use the action of wanting to “exec” on to a local docker container.

    Sometimes you’ll want to get into a shell within a local docker container to test / debug things.

    I found I was repeating the same steps to do this and so I made a little script.

    Here is the script in full:

    Bash
    #!/bin/bash
    
    docker container ls | fzf | awk '{print $1}' | \
    xargs -o -I % docker exec -it % bash

    Breaking it down

    In order to better understand this script I’ll assume no prior knowledge and explain some bash concepts along the way.

    Sh-bang line.

    The first line is the “sh-bang”. It basically tells your shell which binary should execute this script when it is run.

    For example you could write a valid php script and add #!/usr/bin/php at the top, which would tell the shell to use your php binary to interpret the script.

    So #!/bin/bash means we are writing a bash script.

    Pipes

    The pipe symbol: |.

    In brief, a “pipe” in bash is a way to pass the output of the left hand command to the input of the right hand command.

    So the commands in the script are run in this order:

    1. docker container ls
    2. fzf
    3. awk '{print $1}'
    4. xargs -o -I % docker exec -it % bash

    docker container ls

    This gives us the list of currently-running containers on our system. The output is a table like so (I’ve used an image as the formatting gets messed up when pasting into a post as text):

    fzf

    So the output of the docker container ls command above is the table in the image above, which is several rows of text.

    fzf is a “fuzzy finder” tool, which can be passed a list of pretty much anything, which can then be searched over by “fuzzy searching” the list.

    In this case the list is each row of that output (header row included).

    When you select (press enter) on your chosen row, that row of text is returned as the output of the command.

    In this image example you can see I’ve typed in “app” to search for, and it has highlighted the closest matching row.

    awk '{print $1}'

    awk is an extremely powerful tool, built into Linux distributions, that allows you to parse structured text and return specific parts of that text.

    '{print $1}' is saying “take whatever input I’m given, split it up based on a delimiter, and return the item that is 1st ($1)”.

    The default delimiter is a space. So looking at that previous image example, the first piece of text in each docker container row is the container ID: “df96280be3ad” for the app container chosen just above.

    So pressing enter on that row in fzf will pass it to awk, which will then split that row up by spaces and return the first element from that internal array of text items.
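
    A quick, self-contained way to see that behaviour (an illustrative one-liner, not part of the script):

    Bash
    echo "df96280be3ad my-app latest" | awk '{print $1}'
    # prints: df96280be3ad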

    xargs -o -I % docker exec -it % bash

    xargs is another powerful tool, which enables you to pass whatever it is given as input into another command. I’ll break it down further to explain the flow:

    The beginning of the xargs command is as so:

    Bash
    xargs -o -I %

    -o is needed when running an “interactive application”. Since our goal is to “exec” on to the docker container we choose, interactive is what we need. -o means to “open stdin (standard in) as /dev/tty in the child process before executing the command we specify”.

    Next, -I % is us telling xargs: “when you next see the ‘%’ character, replace it with what we give you as input”. In this case that will be the docker container ID returned from the awk command previously.
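
    To see that substitution on its own (again, just an illustrative one-liner):

    Bash
    echo "df96280be3ad" | xargs -I % echo "container: %"
    # prints: container: df96280be3ad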

    So when you replace the % character in the command that we are giving xargs, it will read as such:

    Bash
    docker exec -it df96280be3ad bash

    This will “exec” on to that docker container and immediately run “bash” in that container.

    Goal complete.

    Put it in a script file

    So all that’s needed now is to have that full set of piped commands in an executable script:

    Bash
    #!/bin/bash
    
    docker container ls | fzf | awk '{print $1}' | xargs -o -I % docker exec -it % bash

    My own version of this script is in a file called d8exec; after saving it I ran:

    Bash
    chmod +x ./d8exec

    Call the script

    In order to be able to call your script from anywhere in your terminal, you just need to add the script to a directory that is in your $PATH. I keep mine at ~/.local/bin/, which is pretty standard for a user’s own scripts in Linux.

    You can see how I set my own in my .bashrc file here. The section that reads $HOME/.local/bin is the relevant piece. Each folder that is added to the $PATH is separated by the : character.

    Feel free to explore further

    You can look over all of my own little scripts in my bin folder for more inspiration for your own bash adventures.

    Have fun. And don’t put anything into your scripts that you wouldn’t want others seeing (API keys, secrets, etc.).


  • 📂

    Setting up a GPG Key with git to sign your commits

    Signing your git commits with GPG is really easy to set up and I’m always surprised by how many developers I meet that don’t do this.

    Of course it’s not required to push commits and has no bearing on the quality of your code. But that green verified message next to your commits does feel good.

    Essentially there are three parts to this:

    1. Create your GPG key
    2. Tell git to use your GPG key to sign your commits
    3. Upload the public part of your GPG key to Gitlab / Github / etc

    Creating the GPG key if needed

    gpg --full-generate-key
    

    In the interactive guide, I choose:

    1. (1) RSA and RSA (default)
    2. 4096 bits long
    3. Does not expire
    4. Fill in Name, Email, Comment and Confirm.
    5. Enter passphrase when prompted.

    Getting the Key ID

    This will list all of your keys:

    gpg --list-secret-keys --keyid-format=long
    

    Example of the output:

    sec   rsa4096/THIS0IS0YOUR0KEY0ID 2020-12-25 [SC]
          KGHJ64GHG6HJGH5J4G6H5465HJGHJGHJG56HJ5GY
    uid                 [ultimate] Bob GPG Key<mail@your-domain.co.uk>
    

    In that example, the key id that you would need next is “THIS0IS0YOUR0KEY0ID” from the first line, after the forward slash.

    Tell your local git about the signing key

    To set the gpg key as the signing key for all of your git projects, run the following global git command:

    git config --global user.signingkey THIS0IS0YOUR0KEY0ID
    

    If you want to do it on a repository by repository basis, you can run it from within each project, and omit the --global flag:

    git config user.signingkey THIS0IS0YOUR0KEY0ID
    

    Signing your commits

    You can either set commit signing to true for all projects as the default, or on a repo-by-repo basis.

    # global
    git config --global commit.gpgsign true
    
    # local
    git config commit.gpgsign true
    

    If you wanted to, you could even decide to sign commits per each commit, by not setting it as a config setting, but passing a flag on every commit:

    git commit -S -m "My signed commit message"
    

    Adding your public key to gitlab / github / wherever

    Firstly export the public part of your key using your key id. Again, using the example key id from above:

    # Show your public key in terminal
    gpg --armor --export THIS0IS0YOUR0KEY0ID
    
    # Copy straight to your system clipboard using "xclip"
    gpg --armor --export THIS0IS0YOUR0KEY0ID | xclip -sel clipboard
    

    This will spit out a large key text block beginning and ending with comments. Copy all of the text that it gives you and paste it into the GPG textbox in your git forge of choice – gitlab / github / gitea / etc.


  • 📂

    How I use vimwiki in neovim

    This post is currently in-progress, and is more of a brain-dump right now. But I like to share as often as I can, otherwise I’d never share anything 🙂

    Please view the official Vimwiki Github repository for up-to-date details of Vimwiki usage and installation. This page just documents my own processes at the time.

    Installation

    Add the following to plugins.lua

    use "vimwiki/vimwiki"

    Run the following two commands separately in the neovim command line:

    :PackerSync
    :PackerInstall

    Close and re-open Neovim.

    How I configure Vimwiki

    I have 2 separate wikis set up in my Neovim.

    One for my personal homepage and one for my commonplace site.

    I set these up by adding the following in my dotfiles, at the following position: $NEOVIM_CONFIG_ROOT/after/plugin/vimwiki.lua. So for me that would be ~/.config/nvim/after/plugin/vimwiki.lua.

    You could also put this command inside the config function in your plugins.lua file, where you require the vimwiki plugin. I just tend to put all my plugin-specific settings in their own “after/plugin” files for organisation.

    vim.cmd([[
      let wiki_1 = {}
      let wiki_1.path = '~/vimwiki/website/'
      let wiki_1.html_template = '~/vimwiki/website_html/'
      let wiki_2 = {}
      let wiki_2.path = '~/vimwiki/commonplace/'
      let wiki_2.html_template = '~/vimwiki/commonplace_html/'
      let g:vimwiki_list = [wiki_1, wiki_2]
      call vimwiki#vars#init()
    ]])

    The path keys tell vimwiki where to place the root index.wiki file for each wiki you configure.

    The html_template keys tell vimwiki where to place the compiled html files (when running the :VimwikiAll2HTML command).

    I keep them separate as I am deploying them to separate domains on my server.

    When I want to open and edit my website wiki, I enter 1<leader>ww.

    When I want to open and edit my commonplace wiki, I enter 2<leader>ww.

    Pressing those key bindings for the first time will ask you if you want the directories creating.

    How I use vimwiki

    At the moment, my usage is standard to what is described in the Github repository linked at the top of this page.

    When I develop any custom workflows I’ll add them here.

    Deployment

    Setting up a server to deploy to is outside the scope of this post, but I hope to write up a quick guide soon.

    I run the following command from within vim on one of my wiki index pages, to export that entire wiki to html files:

    :VimwikiAll2HTML

    I then SCP the compiled HTML files to my server. Here is an example scp command that you can modify with your own paths:

    scp -r ~/vimwiki/website_html/* your_user@your-domain.test:/var/www/website/public_html

    For the best deployment experience, I recommend setting up ssh key authentication to your server.

    For bonus points I also add a bash / zsh alias to wrap that scp command.
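
    For example, something like this in your ~/.bashrc or ~/.zshrc would do it (the alias name is arbitrary, and the paths and host are the placeholder ones from the scp example above):

    alias pushwiki='scp -r ~/vimwiki/website_html/* your_user@your-domain.test:/var/www/website/public_html'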


  • 📂

    General plugins I use in Neovim

    I define a “general plugin” as a plugin that I use regardless of the filetype I’m editing.

    These will add extra functionality for enhancing my Neovim experience.


    I use Which-key for displaying keybindings as I type them. For example if I press my <leader> key and wait a few milliseconds, it will display all keybindings I have set that begin with my <leader> key.

    It will also display any marks and registers I have set, when only pressing ' or @ respectively.

    use "folke/which-key.nvim"

    Vim-commentary makes it super easy to comment out lines in files using vim motions. So in normal mode you can enter gcc to comment out the current line; or 5gcc to comment out the next 5 lines.

    You can also make a visual selection and enter gc to comment out that selected block.

    use "tpope/vim-commentary"

    Vim-surround provides me with an extra set of abilities on text objects. It lets me add, remove and change surrounding elements.

    For example I can place my cursor over a word and enter ysiw" to surround that word with double quotes.

    Or I can make a visual selection and press S" to surround that selection with double quotes.

    use "tpope/vim-surround"

    Vim-unimpaired adds a bunch of extra mappings that tpope had in his own vimrc, which he extracted to a plugin.

    They include mappings for the [ and ] keys for previous and next items. For example using [b and ]b moves backwards and forwards through your open buffers. Whilst [q and ]q will move you backwards and forwards respectively through your quickfix list items.

    use "tpope/vim-unimpaired"

  • 📂

    Passive plugins I use in Neovim

    These plugins I use in Neovim are ones I consider “passive”. That is, they just sit there doing their thing in the background to enhance my development experience.

    Generally they wont offer extra keybindings or commands I will use day to day.

    You can view all the plugins I use in my plugins.lua file in my dotfiles.


    Vim-lastplace will remember the last edit position of each file you’re working with and place your cursor there when re-entering.

    use "farmergreg/vim-lastplace"

    Nvim-autopairs will automatically add closing characters when opening a “pair”, such as {, [ and (. It will then place your cursor between the two.

    use "windwp/nvim-autopairs"

    Neoscroll makes scrolling smooth in neovim.

    use "karb94/neoscroll.nvim"

    Vim-pasta will super-charge your pasting in neovim to preserve indents when pasting contents in with “p” and “P”.

    use({
      "sickill/vim-pasta",
      config = function()
        vim.g.pasta_disabled_filetypes = { 'fugitive' }
      end,
    })

    Here I am passing a config function to disable vim-pasta for “fugitive” filetypes. “Fugitive” is in reference to the vim-fugitive plugin that I will explain in another post.


    Nvim-colorizer will highlight any colour codes you write out.

    use "norcalli/nvim-colorizer.lua"

  • 📂

    How I use Neovim

    I try to use Neovim for as much development-related work as possible.

    This page serves as a point of reference for me, and other people interested, for what I use and how I use it.

    Feedback is welcome, and I would love to know how you use Neovim too!

    My complete Neovim configuration files can be found on Github.

    1. How I organise my Neovim configuration
    2. Passive plugins I use in Neovim
    3. General plugins I use in Neovim
    4. Development plugins I use in Neovim – coming soon
    5. Database client in Neovim (vim-dadbod and vim-dadbod-ui) – coming soon
    6. REST client in Neovim (vim-rest-client) – coming soon
    7. Personal Wiki in Neovim (vim-wiki) – coming soon

  • 📂

    Inventory app — saving inventory items.

    This is the absolute bare bones minimum implementation for my inventory keeping: saving items to my inventory list.

    Super simple, but meant only as an example of how I’d work when working on an API.

    Here are the changes made to my Inventory Manager. Those changes include the test and logic for the initial index endpoint too. I may blog about that part in a separate post soon.

    Writing the store test

    One of Laravel’s many strengths is how well it is set up for testing and just how nice those tests can read. Especially now that I’ve started using Pest.

    Here is the test I wrote for the store endpoint I was yet to write:

    test('inventory items can be created', function () {
        $response = $this->postJson(route(name: 'inventory.store'), [
            'name' => 'My Special Item',
        ]);
    
        $response->assertStatus(201);
    
        $this->assertDatabaseHas(Inventory::class, [
            'name' => 'My Special Item',
        ]);
    });

    Firstly I post to an endpoint that I am yet to create, with the most minimal payload I want: an item’s name:

    $response = $this->postJson(route(name: 'inventory.store'), [
        'name' => 'My Special Item',
    ]);

    Then I can check I have the correct status code: an HTTP Created 201 status:

    $response->assertStatus(201);

    Finally I check that the database table where I will be saving my inventory items has the item I have created in the test:

    $this->assertDatabaseHas(Inventory::class, [
        'name' => 'My Special Item',
    ]);

    The first argument to the assertDatabaseHas method is the model class, which Laravel will use to determine the name of the table for that model. Either by convention, or by the value you override it with on the model.

    The second argument is an array that should match the table’s column name and value. Your model can have other columns and still pass. It will only validate that the keys and values you pass to it are correct; you don’t need to pass every column and value — that would become tedious.

    Writing the store logic

    There is a term I’ve heard in Test-driven development called “sliming it out”. If I remember correctly, this is when you let the test feedback errors dictate every single piece of code you add.

    You wouldn’t add any code at all until a test basically told you to.

    I won’t lie: I actually love this idea, but it soon becomes tiresome. It’s great to do when you start out in TDD, in my opinion, but soon you’ll start seeing things you can add before running the test.

    For example, you know you’ll need a database table and a model class, and most likely a Model Factory for upcoming tests, so you could run the artisan command to generate those straight away:

    php artisan make:model -mf Inventory
    
    # or with sail
    ./vendor/bin/sail artisan make:model -mf Inventory

    I don’t tend to generate my Controller classes with these, as I now use single-action controllers for personal projects.
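
    The model and migration themselves aren’t shown in this post (they are in the linked changes), but for the Inventory::create call in the controller below to work, the name column needs to exist on the table and be mass assignable. A minimal sketch would be something like:

    <?php

    namespace App\Models;

    use Illuminate\Database\Eloquent\Factories\HasFactory;
    use Illuminate\Database\Eloquent\Model;

    class Inventory extends Model
    {
        use HasFactory;

        // Allow "name" to be mass assigned by Inventory::create()
        protected $fillable = ['name'];
    }

    And in the generated migration’s up() method:

    Schema::create('inventories', function (Blueprint $table) {
        $table->id();
        $table->string('name');
        $table->timestamps();
    });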

    Store Route

    Within the routes/web.php file, I add the following:

    use App\Http\Controllers\Inventory\StoreController;
    
    Route::post('inventory', StoreController::class)->name('inventory.store');

    Using a single-action class here to keep logic separated. Some would see this as over-engineering, especially if keeping controller code to a minimum anyway, but I like the separation.

    Adding an explicit “name” to the endpoint means I can just refer to it throughout the app with that name, like in the test code above where I generate the endpoint with the “route” helper function:

    route(name: 'inventory.store')

    Store Controller

    <?php
    
    declare(strict_types = 1);
    
    namespace App\Http\Controllers\Inventory;
    
    use App\Http\Requests\InventoryStoreRequest;
    use App\Models\Inventory;
    use Illuminate\Contracts\Routing\ResponseFactory;
    use Illuminate\Http\Response;
    
    class StoreController
    {
        public function __invoke(InventoryStoreRequest $request): Response|ResponseFactory
        {
            Inventory::create([
                'name' => $request->get(key: 'name'),
            ]);
    
            return response(content: 'Inventory item created', status: 201);
        }
    }

    Super straightforward at the moment. After receiving the request via the custom request class (code below), I just create an inventory item with the name on the request.

    I then return a response with a message and an HTTP Created 201 status.

    This code does assume that it was created fine so I might look at a better implementation of this down the line…

    …but not before I have a test telling me it needs to change.

    InventoryStoreRequest class

    This is a standard generated request class with the following rules method:

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array<string, mixed>
     */
    public function rules(): array
    {
        return [
            'name' => 'required',
        ];
    }

    Again, nothing much to it. It makes sure that a name is required to be passed.

    It’s not saying anything about what that value could be. We could pass a datetime or an absurdly long string.

    I’ll fix that in a future post.

    An extra test for the required name

    In order to be “belt and braces”, I have also added a test that proves that we require a name to be passed. Pest makes this laughably simple:

    test('inventory items require a name', function () {
        $this->postJson(route(name: 'inventory.store'))
            ->assertJsonValidationErrorFor('name');
    });

    This just performs a post request to the store endpoint, but passes no data. We then just chain the assertJsonValidationErrorFor method, giving it the parameter that should have caused the failed validation. In this case “name”.

    As the validation becomes more sophisticated I will look at adding more of these tests, and possibly even running all “required” fields through the same test method with Pest’s dataset functionality. That works essentially the same way as PHPUnit’s Data Providers.

    Useful Links

    Complete changes in git for when I added the store and the index endpoints to my Inventory app.