What follows is a comment I wrote on LinkedIn to a future software developer looking for advice:

Have fun first.

Don’t choose some technology because people / job specs tell you that you should use it.

Explore different languages. Build little projects in those different languages.

Build your own personal website and blog about your learning.

If you go into development using React because that's what people / bootcamps told you to do, then you will be a React developer for a while.

I'm not saying that's a bad thing, but if you don't like what you're using and only use it because you were told you should, you will soon be in the position of hating a job that you need to pay the bills.

Oh, and don't listen to people who tell you that you MUST use "AI" in order to be a professional.

Learn for yourself and chart your progress. Just try and be a little better at the job than you were yesterday.

Best of luck in your journey.

Installing and setting up the GitHub CLI

What is the GitHub CLI?

The GitHub CLI is the official GitHub terminal tool for interacting with your GitHub account, as well as with any open source projects hosted on GitHub.

I’ve only just begun looking into it but am already trying to make it part of my personal development flow.

Installation

You can see the installation instructions here, or if you’re running on Arch Linux, just run this:

sudo pacman -S github-cli

Once installed, you should be able to run the following command and see the version you have installed:

gh --version

Authenticating

Before interacting with your GitHub account, you will need to log in via the CLI tool.

Generate a Github Personal Access Token

Firstly, I generate a personal access token on the GitHub website. From my settings page I head to "Developer Settings" > "Personal Access Tokens" > "Tokens (classic)".

I then create a new "classic" token (just my preference), select all permissions, and give it an appropriate name.

Then I create it and keep the page that displays the new access token open, so I can paste the token into the terminal during the authentication flow next.

Go through the GitHub CLI authentication flow

Start the authentication flow by running the command:

gh auth login

The following highlights are the options I select when going through the login flow. Your needs may vary.

What account do you want to log into?
> GitHub.com
> GitHub Enterprise Server

What is your preferred protocol for Git operations?
> HTTPS
> SSH

Upload your SSH public key to your GitHub account?
> /path/to/.ssh/id_rsa.pub
> Skip

How would you like to authenticate GitHub CLI?
> Login with a web browser
> Paste an authentication token

I then paste in the access token from the still-open tokens page, and hit enter.

You should then see confirmation that you are authenticated, along with the account you are logged in as.

Check out the official documentation to see all of the available actions you can perform on your account.
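To give a taste of what is available, here are a few subcommands from the GitHub CLI manual that I find handy. The repository name below is just a placeholder, and most of these need to be run from inside a cloned repository:

```shell
# List open pull requests for the repository in the current directory
gh pr list

# List open issues for the repository in the current directory
gh issue list

# Clone one of your repositories (placeholder owner/repo)
gh repo clone your-username/your-repo

# View the current repository's details in the terminal
gh repo view
```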

How I organize my Neovim configuration

The entry point for my Neovim Configuration is the init.lua file.

Init.lua

My entrypoint file simply requires three other files:

require 'user.plugins'
require 'user.options'
require 'user.keymaps'

The user.plugins file is where I use Packer to require plugins for my configuration. I will be writing other posts about some of the plugins I use soon.

The user.options file is where I set all of the Neovim settings. Things such as mapping my leader key and setting the number of spaces per tab:

vim.g.mapleader = " "
vim.g.maplocalleader = " "

vim.opt.expandtab = true
vim.opt.shiftwidth = 4
vim.opt.tabstop = 4
vim.opt.softtabstop = 4

...etc...

Finally, the user.keymaps file is where I set any general keymaps that aren’t associated with any specific plugins. For example, here I am remapping the arrow keys to specific buffer-related actions:

-- Easier buffer navigation.
vim.keymap.set("n", "<Left>", ":bp<CR>", { noremap = true, silent = true })
vim.keymap.set("n", "<Right>", ":bn<CR>", { noremap = true, silent = true })
vim.keymap.set("n", "<Down>", ":bd<CR>", { noremap = true, silent = true })
vim.keymap.set("n", "<Up>", ":%bd<CR>", { noremap = true, silent = true })

In that example, the left and right keys navigate to previous and next buffers. The down key closes the current buffer and the up key is the nuclear button that closes all open buffers.

Plugin-specific setup and mappings

For any plugin-specific setup and mappings, I am using Neovim’s “after” directory.

Basically, for every plugin you install, you can add a Lua file within the ./after/plugin/ directory at the root of your Neovim configuration.

So for example, to add settings / mappings for the “vim-test” plugin, I have added a file at: ./after/plugin/vim-test.lua with the following contents:

vim.cmd([[
  let test#php#phpunit#executable = 'docker-compose exec -T laravel.test php artisan test'
  let test#php#phpunit#options = '--colors=always'
  let g:test#strategy = 'neovim'
  let test#neovim#term_position = "vert botright 85"
  let g:test#neovim#start_normal = 1
]])

vim.keymap.set('n', 'tn', ':TestNearest<CR>', { silent = false })
vim.keymap.set('n', 'tf', ':TestFile<CR>', { silent = false })
vim.keymap.set('n', 'ts', ':TestSuite<CR>', { silent = false })
vim.keymap.set('n', 'tl', ':TestLast<CR>', { silent = false })
vim.keymap.set('n', 'tv', ':TestVisit<CR>', { silent = false })

This means that these settings and bindings will only be registered after the vim-test plugin has been loaded.

I used to just require extra files in my main init.lua file, but this feels much cleaner in my opinion.

Update: 9th February 2023 — when setting up Neovim on a fresh system, I noticed that I get a bunch of errors from the after files, as they execute on boot before I’ve actually installed the plugins. I will add protected calls to the plugins soon to mitigate these errors.
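As a sketch of those protected calls, each after file could guard its require / setup behind Lua's pcall, so that a missing plugin just skips its configuration instead of erroring. The module name here is a made-up placeholder:

```lua
-- Safely require the plugin's module; bail out of this after file
-- if the plugin hasn't been installed yet.
-- 'nvim-example-plugin' is a hypothetical module name.
local ok, plugin = pcall(require, 'nvim-example-plugin')
if not ok then
  return
end

plugin.setup({})
```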

Started working on a side project I’m calling “Pitch”. It’s an end-to-end encrypted website starter inspired by SiteJS by the Small Technology Foundation.

Got a basic Vue app set up with the Vue CLI, but now I can’t work out why my private key generator sometimes returns what I expect — a Uint8Array — and more often returns a mess of random characters.

Am I misunderstanding something I wonder?

Setting up Elasticsearch and Kibana using Docker for local development

How to set up Kibana and Elasticsearch locally, within Docker containers.

Overview

Elasticsearch is a fast, scalable search engine. Kibana is a separate program used for interacting with and visualising data in Elasticsearch.

Here I am setting up Elasticsearch and Kibana, each in its own Docker container. I do this as a way to help keep my computer relatively free from being cluttered with programs. Not only that, but since the containers are separate self-contained boxes, it also makes it easy to upgrade the Elasticsearch version I am using at a later date.

Or even remove them entirely with minimal fuss.

Please note: I am using version 7.10.1 of both programs in the examples below. You can look at each program’s respective Docker Hub page to target the exact version you require:

Just replace any uses of “7.10.1” below with your own version.

Creating and running containers for the services needed

Run the following commands to download and run Elasticsearch locally:

# Create the network that the Elasticsearch and Kibana containers will share
docker network create elasticnetwork

# Download the Elasticsearch docker image to your computer
docker pull elasticsearch:7.10.1

# Create a local container with Elasticsearch running
# (docker run creates the container and starts it)
docker run -d --name my_elasticsearch --net elasticnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.ml.enabled=false" elasticsearch:7.10.1

# Start the container again if it has been stopped
docker container start my_elasticsearch

And then run the following commands to download and run Kibana locally:

# Download the Kibana docker image to your computer
docker pull kibana:7.10.1

# Create a local container with Kibana running, pointing it at the
# Elasticsearch container by its container name on the shared network
docker run -d --name my_kibana --net elasticnetwork -e ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 -p 5601:5601 kibana:7.10.1

# Start the container again if it has been stopped
docker container start my_kibana

Accessing Kibana

Since Kibana connects to our Elasticsearch container (configured with the ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 part of the Kibana create command), we really only need to use Kibana directly.

Kibana has its own Dev Tools for querying Elasticsearch, which so far has been enough for my own use cases.

Head to http://localhost:5601 to access your own Kibana installation.

Note: You can send curl requests directly to your Elasticsearch from the terminal by targeting the http://127.0.0.1:9200 endpoint.
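For example, assuming Elasticsearch is up and running, a couple of quick checks from the terminal (these are standard Elasticsearch REST endpoints):

```shell
# Check that the cluster is responding and see its version info
curl http://127.0.0.1:9200

# List all indices in a human-readable table
curl 'http://127.0.0.1:9200/_cat/indices?v'
```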

Deleting the containers

If you wish to remove Elasticsearch and/or Kibana from your computer, then enter the following commands into your terminal.

Using Docker for local development makes this a cinch.

# Stop the Elasticsearch container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_elasticsearch

# Delete the Elasticsearch container
docker container rm my_elasticsearch

# Stop the Kibana container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_kibana

# Delete the Kibana container
docker container rm my_kibana

If you need to set up the two programs again, you can just use the create commands shown above to create them as you did originally.

Install MongoDB with Docker for local development

Pull the docker image for mongo down to your computer.

docker pull mongo

Run the mongo container in the background, isolated from the rest of your computer.

# Command explained below
docker run -d -p 27017:27017 --name mongodb -v /data/db:/data/db mongo

What I love about this approach is that I don’t start muddying up my computer installing new programs — especially if it’s just for the purposes of experimenting with new technologies.

The main run command explained:

  • “docker run -d” tells Docker to run the container in detached mode, meaning it runs in the background. Otherwise, closing that terminal would stop the program Docker is running (mongo in this case).
  • “-p 27017:27017” maps port 27017 on your computer to port 27017 inside the container, so requests to your local port are forwarded into the container. (The format is host:container, so the first number is always your computer’s port.)
  • “--name mongodb” just gives the container that will be created a nice name. Otherwise, Docker will generate a random name.
  • “mongo” tells Docker which image to use when creating the container. Note that the “-v” flag must come before the image name, as anything after the image name is passed as arguments to mongo itself.
  • “-v /data/db:/data/db” tells Docker to map the /data/db directory on your computer to the /data/db directory in the container. This will ensure that if you recreate the container, you will retain the mongo db data.
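To sanity-check the running container, you can execute the mongo shell inside it. (This assumes the image ships with the classic mongo shell; newer mongo images replace it with mongosh.)

```shell
# Ping the database from inside the running container
docker exec -it mongodb mongo --eval 'db.runCommand({ ping: 1 })'
```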

Bulk converting large PS4 screenshot PNG images into 1080p JPGs

A niche example of how I bulk convert my screenshots to make them more website-friendly.

I tend to have my screenshots set to the highest resolution when saving on my PlayStation 4.

However, when I upload to the screenshots area of this website, I don’t want the images to be that big — either in dimensions or file size.

This snippet is how I bulk convert those images ready for uploading. I use an Ubuntu 20.04 operating system when running this.

# Make sure ImageMagick is installed
sudo apt install imagemagick

# Run the command
mogrify -resize 1920x1080 -format jpg folder/*.png

You can change the widthxheight dimensions after the -resize flag to your own required size, as well as changing the output image format after the -format flag.
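If you want more per-file control (for example, writing the converted copies into a separate folder), the same conversion can be sketched as a small loop. Here I use echo as a dry run so it works without ImageMagick installed; swapping the echo for ImageMagick's convert command, as shown in the comment, would perform the real conversion:

```shell
# Work in a throwaway directory with a couple of fake screenshots
cd "$(mktemp -d)"
mkdir -p folder converted
touch folder/shot1.png folder/shot2.png

for f in folder/*.png; do
  # Build the output path: folder/shot1.png -> converted/shot1.jpg
  out="converted/$(basename "${f%.png}").jpg"
  # Dry run; replace echo with:
  #   convert "$f" -resize 1920x1080 "$out"
  echo "$f -> $out"
done
```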

Updating PHP versions in Ubuntu 20.04

Installing an older PHP version and switching to it in Ubuntu.

For an older PHP project, I needed to install an older version of PHP. This is what I did to set that up.

Installing a different PHP version

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y php7.1

Rebinding php to required version

Some of these binds are probably not needed. I think the main ones, at least for my use case, were php and phar.

sudo update-alternatives --set php /usr/bin/php7.1
sudo update-alternatives --set phar /usr/bin/phar7.1
sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.1
sudo update-alternatives --set phpize /usr/bin/phpize7.1
sudo update-alternatives --set php-config /usr/bin/php-config7.1

For some reason the --set flag stopped working, so I had to use:

sudo update-alternatives --config php
sudo update-alternatives --config phar

etc., updating each one via the terminal prompt options.

p.s. If you’re using PHP-FPM, you could also set up different server conf files and point the FPM path at the version you need. My need was only for the command line in this older project.

Started to learn Rust

Today is the day when I start to learn the Rust programming language.

Today is the day when I finally started to learn a new programming language — one that I have never touched before. I had briefly looked at and considered Go; however, I have settled on Rust.

It is the first compiled language that I have started to learn and am looking forward to the challenges it will bring.

I will also be doing my best to blog about the things I learn — much of it probably in more of a brain-dump format — both to help others and to reinforce my own learning.

Docker braindump

A collection of my learnings, notes and musings on Docker.

These are currently random notes and are not much help to anybody yet. They will get tidied as I add to the page.

Docker Swarm

Docker swarm secrets

From inside a docker swarm manager node, there are two ways of creating a secret.

Using a string value:

printf <your_secret_value> | docker secret create your_secret_key -

Using a file path:

docker secret create your_secret_key ./your_secret_value.json

Docker swarm secrets are stored encrypted, and are made accessible to containers via a file path:

/run/secrets/your_secret_key
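So from the application's point of view, reading a secret is just reading a file. A minimal sketch of that, simulated here with a temporary directory standing in for /run/secrets (the secret name and value are made up):

```shell
# Simulate the /run/secrets mount a swarm service container sees
SECRETS_DIR="$(mktemp -d)"
printf 'super-secret-password' > "$SECRETS_DIR/db_password"

# An application reads the secret exactly like any other file
DB_PASSWORD="$(cat "$SECRETS_DIR/db_password")"
echo "$DB_PASSWORD"
```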

Posts to digest

https://www.bretfisher.com/docker-swarm-firewall-ports/

https://www.bretfisher.com/docker/

https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose