Installing and setting up the GitHub CLI

What is the GitHub CLI?

The GitHub CLI is GitHub's official terminal tool for interacting with your GitHub account, as well as any open source projects hosted on GitHub.

I’ve only just begun looking into it but am already trying to make it part of my personal development flow.

Installation

You can find the installation instructions in the official documentation, or if you're running Arch Linux, just run this:

sudo pacman -S github-cli

Once installed, you should be able to run the following command and see the version you have installed:

gh --version

Authenticating

Before interacting with your GitHub account, you will need to log in via the CLI tool.

Generate a GitHub Personal Access Token

Firstly, I generate a personal access token on the GitHub website. From my settings page I head to "Developer Settings" > "Personal Access Tokens" > "Tokens (classic)".

I then create a new "classic" token (just my preference), select all permissions, and give it an appropriate name.

Then I create it and keep the page displaying the access token open, ready to paste into the terminal during the authentication flow next.

Go through the GitHub CLI authentication flow

Start the authentication flow by running the command:

gh auth login

The following are the options presented during the login flow, from which I make my selections. Your needs may vary.

What account do you want to log into?
> GitHub.com
> GitHub Enterprise Server

What is your preferred protocol for Git operations?
> HTTPS
> SSH

Upload your SSH public key to your GitHub account?
> /path/to/.ssh/id_rsa.pub
> Skip

How would you like to authenticate GitHub CLI?
> Login with a web browser
> Paste an authentication token

I then paste in the access token from the still-open tokens page and hit enter.

You should see that it authenticates you and displays who you are logged in as.

Check out the official documentation to see all of the available actions you can perform on your account.
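
For a flavour of what's available, here are a few commands I find myself reaching for:

# List open pull requests and issues for the current repository
gh pr list
gh issue list

# View a repository's details without leaving the terminal
gh repo view cli/cli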

How I organize my Neovim configuration

The entry point for my Neovim configuration is the init.lua file.

init.lua

My entry point file simply requires three other files:

require 'user.plugins'
require 'user.options'
require 'user.keymaps'

The user.plugins file is where I’m using Packer to require plugins for my configuration. I will be writing other posts around some of the plugins I use soon.

The user.options file is where I set all of my Neovim settings, such as mapping my leader key and setting the number of spaces per tab:

vim.g.mapleader = " "
vim.g.maplocalleader = " "

vim.opt.expandtab = true
vim.opt.shiftwidth = 4
vim.opt.tabstop = 4
vim.opt.softtabstop = 4

...etc...

Finally, the user.keymaps file is where I set any general keymaps that aren't associated with a specific plugin. For example, here I remap the arrow keys to buffer-related actions:

-- Easier buffer navigation.
vim.keymap.set("n", "", ":bp", { noremap = true, silent = true })
vim.keymap.set("n", "", ":bn", { noremap = true, silent = true })
vim.keymap.set("n", "", ":bd", { noremap = true, silent = true })
vim.keymap.set("n", "", ":%bd", { noremap = true, silent = true })

In that example, the left and right keys navigate to previous and next buffers. The down key closes the current buffer and the up key is the nuclear button that closes all open buffers.

Plugin-specific setup and mappings

For any plugin-specific setup and mappings, I am using Neovim’s “after” directory.

Basically, for every plugin you install, you can add a Lua file within the ./after/plugin/ directory at the root of your Neovim configuration.

So for example, to add settings / mappings for the “vim-test” plugin, I have added a file at: ./after/plugin/vim-test.lua with the following contents:

vim.cmd([[
  let test#php#phpunit#executable = 'docker-compose exec -T laravel.test php artisan test'
  let test#php#phpunit#options = '--colors=always'
  let g:test#strategy = 'neovim'
  let test#neovim#term_position = "vert botright 85"
  let g:test#neovim#start_normal = 1
]])

vim.keymap.set('n', 'tn', ':TestNearest<CR>', { silent = false })
vim.keymap.set('n', 'tf', ':TestFile<CR>', { silent = false })
vim.keymap.set('n', 'ts', ':TestSuite<CR>', { silent = false })
vim.keymap.set('n', 'tl', ':TestLast<CR>', { silent = false })
vim.keymap.set('n', 'tv', ':TestVisit<CR>', { silent = false })

This means that these settings and bindings will only be registered after the vim-test plugin has been loaded.

I used to just require extra files in my main init.lua file, but this feels much cleaner in my opinion.

Update (9th February 2023): when setting up Neovim on a fresh system, I noticed that I get a bunch of errors from the after files, as they execute on boot before I have actually installed the plugins. I will add protected calls to the plugins soon to mitigate these errors.
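
As a rough sketch of what those protected calls will look like ("some-plugin" here is a hypothetical module name):

local ok, _ = pcall(require, "some-plugin")
if not ok then
  return -- The plugin isn't installed yet, so skip its configuration.
end

-- It is now safe to reference the plugin below this point.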

Started working on a side project I’m calling “Pitch”. It’s an end-to-end encrypted website starter inspired by SiteJS by the Small Technology Foundation.

Got a basic Vue app set up with the Vue CLI, but now can't work out why my private key generator sometimes returns what I expect (a Uint8Array) and more often returns a mess of random characters.

Am I misunderstanding something I wonder?

Setting up Elasticsearch and Kibana using Docker for local development

How to set up Kibana and Elasticsearch locally, within Docker containers.

Overview

Elasticsearch is a super-fast search engine. Kibana is a separate program that can be used for interacting with Elasticsearch.

Here I am setting up both Elasticsearch and Kibana, each in its own Docker container. I do this to help keep my computer relatively free from being cluttered with programs. Not only that, but since the containers are their own separate self-contained boxes, it also makes it easy to upgrade the Elasticsearch version I am using at a later date.

Or even remove them entirely with minimal fuss.

Please note: I am using version 7.10.1 of both programs in the examples below. You can look at each program's respective Docker Hub page to target the exact version you require.

Just replace any uses of “7.10.1” below with your own version.

Creating and running containers for the services needed
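
Both containers below are attached to a user-defined Docker network named elasticnetwork, which lets Kibana reach Elasticsearch by its container name. If the network doesn't already exist, create it first:

docker network create elasticnetwork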

Run the following commands to download and run Elasticsearch locally:

# Download the Elasticsearch docker image to your computer
docker pull elasticsearch:7.10.1

# Create a local container with Elasticsearch running
docker run -d --name my_elasticsearch --net elasticnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.ml.enabled=false" elasticsearch:7.10.1

# Start the container
docker container start my_elasticsearch

And then run the following commands to download and run Kibana locally:

# Download the Kibana docker image to your computer
docker pull kibana:7.10.1

# Create a local container with Kibana running
docker run -d --name my_kibana --net elasticnetwork -e ELASTICSEARCH_URL=http://my_elasticsearch:9200 -p 5601:5601 kibana:7.10.1

# Start the container
docker container start my_kibana

Accessing Kibana

Since Kibana connects to our Elasticsearch container (set via the ELASTICSEARCH_URL=http://my_elasticsearch:9200 part of the Kibana create command), we really only need to interact with Kibana.

Kibana has its own Dev Tools for querying Elasticsearch, which so far has been enough for my own use cases.

Head to http://localhost:5601 to access your own Kibana installation.

Note: You can send curl requests directly to your Elasticsearch from the terminal by targeting the http://127.0.0.1:9200 endpoint.
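
For example, this should return a small JSON document describing your cluster:

# Query Elasticsearch directly from the host
curl http://127.0.0.1:9200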

Deleting the containers

If you wish to remove Elasticsearch and/or Kibana from your computer, then enter the following commands into your terminal.

Using Docker for local development makes this a cinch.

# Stop the Elasticsearch container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_elasticsearch

# Delete the Elasticsearch container
docker container rm my_elasticsearch

# Stop the Kibana container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_kibana

# Delete the Kibana container
docker container rm my_kibana

If you need to set up the two programs again, just re-run the create commands shown above.

Install MongoDB with Docker for local development

Pull the Docker image for MongoDB down to your computer.

docker pull mongo

Run the mongo container in the background, isolated from the rest of your computer.

# Command explained below
docker run -d -p 27017:27017 --name mongodb -v /data/db:/data/db mongo

What I love about this approach is that I don’t start muddying up my computer installing new programs — especially if it’s just for the purposes of experimenting with new technologies.

The main run command explained:

  • “docker run -d” tells Docker to run in detached mode, meaning the container runs in the background. Otherwise, closing that terminal would stop the program Docker is running (Mongo in this case).
  • “-p 27017:27017” maps port 27017 on your computer to port 27017 inside the container (the host port comes first, the container port second).
  • “--name mongodb” just gives the container that will be created a nice name. Otherwise Docker will generate a random name for it.
  • “-v /data/db:/data/db” tells Docker to map the /data/db directory on your computer to the /data/db directory in the container. This ensures that if you restart the container, you retain the Mongo database data.
  • “mongo” tells Docker which image to run, and it must come after the flags that configure the container.
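
To double-check everything is working, you can open a shell inside the running container. Newer Mongo images ship the mongosh shell; older ones use the mongo binary instead:

# Open a Mongo shell inside the container
docker exec -it mongodb mongosh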

Bulk converting large PS4 screenshot PNG images into 1080p JPGs

A niche example of how I bulk convert my screenshots to make them more website-friendly.

I tend to have my screenshots set to the highest resolution when saving on my PlayStation 4.

However, when I upload to the screenshots area of this website, I don’t want the images to be that big — either in dimensions or file size.

This snippet is how I bulk convert those images ready for uploading. I use an Ubuntu 20.04 operating system when running this.

# Make sure ImageMagick is installed
sudo apt install imagemagick

# Run the command
mogrify -resize 1920x1080 -format jpg folder/*.png

You can change the WIDTHxHEIGHT dimensions after the -resize flag to your own required size, as well as changing the output image format after the -format flag.
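
For example, something like this should produce 720p WebP images instead (assuming your ImageMagick build includes WebP support):

mogrify -resize 1280x720 -format webp folder/*.png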

Updating PHP versions in Ubuntu 20.04

Installing an older PHP version and switching to it in Ubuntu.

For an older PHP project, I needed to install an older version of PHP. This is what I did to set that up.

Installing a different PHP version

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y php7.1

Rebinding PHP to the required version

Some of these rebinds are probably not needed. I think the main ones, at least for my use case, were php and phar.

sudo update-alternatives --set php /usr/bin/php7.1
sudo update-alternatives --set phar /usr/bin/phar7.1
sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.1
sudo update-alternatives --set phpize /usr/bin/phpize7.1
sudo update-alternatives --set php-config /usr/bin/php-config7.1

For some reason the --set flag stopped working, so I had to use:

sudo update-alternatives --config php
sudo update-alternatives --config phar

and so on, updating each one via the terminal prompt options.
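
Either way, you can confirm the switch worked by checking the version from the command line:

# Should now report PHP 7.1.x
php -v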

P.S. If using PHP-FPM, you could also set up different server conf files and point the FPM path to the version you need. My need here was only for the command line in the older project.

Started to learn Rust

Today is the day when I start to learn the Rust programming language.

Today is the day when I finally started to learn a new programming language — one that I have never touched before. I had briefly touched on and considered Go. However, I have settled on Rust.

It is the first compiled language that I have started to learn, and I am looking forward to the challenges it will bring.

I will also be doing my best to blog about the things I learn, probably in more of a brain-dump format, to both help others and reinforce my own learning.

Docker braindump

A collection of my learnings, notes and musings on Docker.

These are currently random notes and are not much help to anybody yet. They will get tidied as I add to the page.

Docker Swarm

Docker swarm secrets

From inside a docker swarm manager node, there are two ways of creating a secret.

Using a string value:

printf <your_secret_value> | docker secret create your_secret_key -

Using a file path:

docker secret create your_secret_key ./your_secret_value.json

Docker swarm secrets are stored encrypted, and are accessible to containers via a file path:

/run/secrets/your_secret_key.
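
To make a secret available, you attach it to a service explicitly. A minimal sketch (the service name and image here are just examples):

# The secret will be readable inside the service's containers
# at /run/secrets/your_secret_key
docker service create --name example_service --secret your_secret_key nginx:alpine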

Posts to digest

https://www.bretfisher.com/docker-swarm-firewall-ports/

https://www.bretfisher.com/docker/

https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose

Been learning to use Docker Swarm

Despite not yet managing to implement what I have learnt, I have nonetheless taken on board some good concepts around Docker and Docker Swarm.

After getting half-way through a Docker Mastery series on Udemy, I decided I would like to move my WordPress website (this one) to a 3-node swarm.

After a few days of editing and rearranging my docker-compose.yml file (the local dev configuration file, which can also be used for starting up a swarm since compose file version 3.3), I have decided to just keep my website hosted on its single regular server. (Although I had already moved the database to its own dedicated server.)

Despite the fact that I haven't actually managed to move over to using a swarm (and, to be honest, it isn't even needed for me), I have managed to dive into a bunch of concepts around Docker and its Swarm component, and feel that I have added a few new things to my dev toolkit.

I think I will definitely be putting together a little demo in a swarm across three separate servers. But for now I will keep my website settled as it is. 😀

What I have learned (or rather reminded myself of, whilst sat at home during this damn isolation) is that it is important to keep looking into complementary technologies around my everyday development skill set.

How I would set up Laravel with Docker

This is a quick brain dump for myself to remember how I set up Laravel with Docker. Hopefully it can help others out also.

I tried to avoid Docker for the longest time due to the ease of just running php artisan serve. However, when your site relies on extra dependencies, Docker can be helpful in getting the whole codebase up and running more easily, especially when there are multiple developers.

This post assumes you have set up a basic Laravel project on a Linux computer, and have both Docker and Docker Compose installed locally.

What will this project use?

This is only a basic example to get up and running with the following dependencies. You can add more services to your docker-compose.yml file as you need to.

Note: whatever you choose to name each extra service in your docker-compose.yml file, use its key as the reference point in your .env file.

  • The main site codebase
  • A MySQL database
  • An NGINX web server
  • PHP

docker-compose.yml

Have a file in the project root named docker-compose.yml:

version: "3.3"

services:
  mysql:
    image: mysql:8.0
    restart: on-failure
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
  nginx:
    image: nginx:1.15.3-alpine
    restart: on-failure
    volumes:
      - './public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - 80:80
    env_file:
      - .env
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: './docker/php/Dockerfile'
    restart: on-failure
    env_file:
      - .env
    user: ${LOCAL_USER}

Dockerfile

Have a Dockerfile located here: ./docker/php/Dockerfile. I keep it in a separate folder for tidiness.

# ./docker/php/Dockerfile
FROM php:7.2-fpm

RUN docker-php-ext-install pdo_mysql

RUN pecl install apcu-5.1.8
RUN docker-php-ext-enable apcu

RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php --filename=composer \
    && php -r "unlink('composer-setup.php');" \
    && mv composer /usr/local/bin/composer

WORKDIR /usr/src/app

COPY ./ /usr/src/app

# Make the project's vendor binaries available on the PATH
# (ENV persists across layers, unlike a RUN shell assignment)
ENV PATH="${PATH}:/usr/src/app/vendor/bin:bin"

default.conf

Have a default.conf file for the project’s nginx container saved here: ./docker/nginx/default.conf

# ./docker/nginx/default.conf
server {
 server_name ~.*;

 location / {
     root /usr/src/app;

     try_files $uri /index.php$is_args$args;
 }

 location ~ ^/index\.php(/|$) {
     client_max_body_size 50m;

     fastcgi_pass php:9000;
     fastcgi_buffers 16 16k;
     fastcgi_buffer_size 32k;
     include fastcgi_params;
     fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
 }

 error_log /dev/stderr debug;
 access_log /dev/stdout;
}

Add the necessary variables to your .env file

There are some variables used in the docker-compose.yml file that need to be added to the .env file. The values could be hard-coded into docker-compose.yml directly, but using the .env file makes it more straightforward for other developers to customise their own setup.

MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=example
LOCAL_USER=1000:1000

The MYSQL_ROOT_PASSWORD and MYSQL_DATABASE variables are self-explanatory, but the LOCAL_USER variable refers to the user id and group id of the currently logged-in person on the host machine. This normally defaults to 1000 for both user and group.

If your user and/or group ids happen to be different, just alter the variable value.

Note: find out your own ids by opening your terminal and typing id followed by enter. You should see something like the following:

uid=1000(david) gid=1000(david) groups=1000(david),4(adm),27(sudo),1001(rvm)

uid and gid are the numbers you need, for user and group respectively.
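
If you'd rather not work the value out by hand, something like this (run from the project root) should append the correct value to your .env file:

echo "LOCAL_USER=$(id -u):$(id -g)" >> .env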

Run it

Run the following two commands separately, then once they have finished, head to http://localhost to view the running code.

Note: This setup uses port 80 so you may need to disable any local nginx / apache that may be running currently.

docker-compose build
docker-compose up -d

Any mistakes or issues, just email me.

Thanks for reading.

How to easily set a custom redirect in Laravel form requests

In Laravel you can create custom request classes where you can house the validation for any given route. If that validation then fails, Laravel's default action is to redirect the visitor back to the previous page. This is commonly used when a form is submitted incorrectly: the visitor will be redirected back to said form to correct the errors. Sometimes, however, you may wish to redirect the visitor to a different location altogether.

TL;DR (Too long; didn’t read)

At the top of your custom request class, add one of the following protected properties and give it your required value. I have given example values to demonstrate:

protected $redirect = '/custom-page'; // Any URL or path
protected $redirectRoute = 'pages.custom-page'; // The named route of the page
protected $redirectAction = 'PagesController@customPage'; // The controller action to use.

This will then redirect your visitor to that location should they fail any of the validation checks within your custom form request class.
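
Put together, a custom form request using the first of those properties might look something like this (a minimal sketch; the class name, rules, and target path are examples of my own):

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class StorePostRequest extends FormRequest
{
    // Send the visitor here when validation fails.
    protected $redirect = '/custom-page';

    public function authorize()
    {
        return true;
    }

    public function rules()
    {
        return [
            'title' => 'required|max:255',
        ];
    }
}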

Explanation

When you create a request class through the Laravel artisan command, it will create one that extends the base Laravel class Illuminate\Foundation\Http\FormRequest. Within this class, the three protected properties listed above are declared (from line 33 at the time of writing), but not set to a value.

Then further down in that base class, on line 127 at the time of writing, there is a protected method called getRedirectUrl. This method checks whether any of the three redirect properties have been set. The first one it finds to be set by you, in the order given above, is the one that will be used as the custom redirect location.

Here is that getRedirectUrl method for your convenience:

/**
 * Get the URL to redirect to on a validation error.
 *
 * @return string
 */
protected function getRedirectUrl()
{
    $url = $this->redirector->getUrlGenerator();

    if ($this->redirect) {
        return $url->to($this->redirect);
    } elseif ($this->redirectRoute) {
        return $url->route($this->redirectRoute);
    } elseif ($this->redirectAction) {
        return $url->action($this->redirectAction);
    }

    return $url->previous();
}

Do you have any extra tips to add to this? Let me know in the comments below.

Thanks.

Stretching before running

One of the annoying things about web development is having to learn completely new paradigms every now and again. I'm all for improving my skills and being more efficient in my work, but when I have to halt everything to learn a whole new separate thing, it grinds my gears a bit.

The idea I'm talking about today is something called Elasticsearch. It's yet another data access methodology, and one that I've never had to mess with before. Don't get me wrong, I'm not hating on the technology; I would simply rather not have to learn a whole new way of searching data.

I’m easily confused.

On a more positive note, I have started taking steps towards actually building my first online product / service, which I think could be very handy for a lot of bloggers, if not all of them. It's something I built for myself and thought about how others could benefit from it too.

Stay tuned for more info as it develops.

Tonight I've been enrolled into a run up the canal by my lady. Although I felt a sudden drive for it this morning, that has since passed after tiring over one of the most boring tutorial videos I've ever watched, about the aforementioned Elasticsearch. I'm sure she won't let me shirk my running responsibilities and I'll be jogging round the block in no time.