Tag: Programming

  • πŸ“‚

    I have decided to get back into tinkering with my Raspberry Pi.

    I will be blogging my journey as I stumble through my initial playing, through to building out my first proper homelab.

    This first Raspberry Pi (Model 2B) will initially be used as both a WireGuard VPN server and a local DNS server.


  • πŸ“‚

    Installing and setting up the GitHub CLI

    What is the GitHub CLI?

    The GitHub CLI is the official GitHub terminal tool for interacting with your GitHub account, as well as any open source projects hosted on GitHub.

    I’ve only just begun looking into it but am already trying to make it part of my personal development flow.

    Installation

    You can see the installation instructions here, or if you’re running on Arch Linux, just run this:

    sudo pacman -S github-cli

    Once installed, you should be able to run the following command and see the version you have installed:

    gh --version

    Authenticating

    Before interacting with your GitHub account, you will need to log in via the CLI tool.

    Generate a GitHub Personal Access Token

    Firstly, I generate a personal access token on the GitHub website. From my settings page I head to “Developer Settings” > “Personal Access Tokens” > “Tokens (classic)”.

    I then create a new “classic” token (just my preference), select all permissions, and give it an appropriate name.

    Then I create it and keep the page open where it displays the access token, ready for pasting into the terminal during the authentication flow next.

    Go through the GitHub CLI authentication flow

    Start the authentication flow by running the command:

    gh auth login

    The following highlights are the options I select when going through the login flow. Your needs may vary.

    What account do you want to log into?
    > GitHub.com
    > GitHub Enterprise Server

    What is your preferred protocol for Git operations?
    > HTTPS
    > SSH

    Upload your SSH public key to your GitHub account?
    > /path/to/.ssh/id_rsa.pub
    > Skip

    How would you like to authenticate GitHub CLI?
    > Login with a web browser
    > Paste an authentication token

    I then paste in the access token from the still-open tokens page, and hit enter.

    You should see it correctly authenticates you and displays who you are logged in as.
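    You can also re-check the session at any later point with the built-in status command:

    gh auth status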

    Check out the official documentation to see all of the available actions you can perform on your account.
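    For a flavour of what the tool can do, here are a few example commands, run from inside a clone of one of your repositories:

    # Show a summary of the current repository
    gh repo view

    # List open pull requests
    gh pr list

    # List open issues
    gh issue list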


  • πŸ“‚

    How I organize my Neovim configuration

    The entry point for my Neovim Configuration is the init.lua file.

    Init.lua

    My entrypoint file simply requires three other files:

    Lua
    require 'user.plugins'
    require 'user.options'
    require 'user.keymaps'

    The user.plugins file is where I’m using Packer to require plugins for my configuration. I will be writing other posts around some of the plugins I use soon.
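    As a rough illustration rather than my exact file, a Packer-based plugins file generally looks something along these lines, with one use call per plugin:

    Lua
    -- Packer manages itself, plus any plugins listed with `use`
    return require('packer').startup(function(use)
      use 'wbthomason/packer.nvim'
      use 'vim-test/vim-test'
    end)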

    The user.options file is where I set all of the Neovim settings. Things such as mapping my leader key and setting the number of spaces per tab:

    Lua
    vim.g.mapleader = " "
    vim.g.maplocalleader = " "
    
    vim.opt.expandtab = true
    vim.opt.shiftwidth = 4
    vim.opt.tabstop = 4
    vim.opt.softtabstop = 4
    
    ...etc...

    Finally, the user.keymaps file is where I set any general keymaps that aren’t associated with any specific plugins. For example, here I am remapping the arrow keys to specific buffer-related actions:

    Lua
    -- Easier buffer navigation.
    vim.keymap.set("n", "<Left>", ":bp<cr>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Right>", ":bn<cr>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Down>", ":bd<cr>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Up>", ":%bd<cr>", { noremap = true, silent = true })

    In that example, the left and right keys navigate to previous and next buffers. The down key closes the current buffer and the up key is the nuclear button that closes all open buffers.

    Plugin-specific setup and mappings

    For any plugin-specific setup and mappings, I am using Neovim’s “after” directory.

    Basically, for every plugin you install, you can add a Lua file within a directory at ./after/plugin/ from the root of your Neovim configuration.

    So for example, to add settings / mappings for the “vim-test” plugin, I have added a file at: ./after/plugin/vim-test.lua with the following contents:

    Lua
    vim.cmd([[
      let test#php#phpunit#executable = 'docker-compose exec -T laravel.test php artisan test'
      let test#php#phpunit#options = '--colors=always'
      let g:test#strategy = 'neovim'
      let test#neovim#term_position = "vert botright 85"
      let g:test#neovim#start_normal = 1
    ]])
    
    vim.keymap.set('n', '<Leader>tn', ':TestNearest<CR>', { silent = false })
    vim.keymap.set('n', '<Leader>tf', ':TestFile<CR>', { silent = false })
    vim.keymap.set('n', '<Leader>ts', ':TestSuite<CR>', { silent = false })
    vim.keymap.set('n', '<Leader>tl', ':TestLast<CR>', { silent = false })
    vim.keymap.set('n', '<Leader>tv', ':TestVisit<CR>', { silent = false })

    This means that these settings and bindings will only be registered after the vim-test plugin has been loaded.

    I used to just have extra required files in my main init.lua file, but this feels so much cleaner in my opinion.

    Update: 9th February 2023 — when setting up Neovim on a fresh system, I noticed I get a bunch of errors from the after files, as they execute on boot before I’ve actually installed the plugins. I will add protected calls to the plugins soon to mitigate these errors.
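    As a sketch of what those protected calls could look like, each after/plugin file would bail out early if its plugin isn’t available yet. The module name below is just a placeholder:

    Lua
    -- Bail out quietly if the plugin hasn't been installed yet.
    -- 'example-plugin' is a placeholder module name.
    local ok, plugin = pcall(require, 'example-plugin')
    if not ok then
      return
    end

    plugin.setup({})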


  • πŸ“‚

    Started working on a side project I’m calling β€œPitch”. It’s an end-to-end encrypted website starter inspired by SiteJS by the Small Technology Foundation.

    Got a basic Vue app set up with the vue-cli but now can’t work out why my private key generator sometimes returns what I expect β€” a Uint8Array β€” and more often a mess of random characters.

    Am I misunderstanding something I wonder?


  • πŸ“‚

    Setting up Elasticsearch and Kibana using Docker for local development

    Overview

    Elasticsearch is a fast search and analytics engine. Kibana is a separate program that can be used for interacting with Elasticsearch.

    Here I am setting up Elasticsearch and Kibana, each in its own Docker container. I do this as a way to help keep my computer relatively free from being cluttered with installed programs. Not only that, but since the containers are their own self-contained boxes, it also makes it easy to upgrade the Elasticsearch version I am using at a later date.

    Or even remove them entirely with minimal fuss.

    Please note: I am using version 7.10.1 of both programs in the examples below. You can look at each program’s respective Docker Hub page to target the exact version you require.

    Just replace any uses of “7.10.1” below with your own version.

    Creating and running containers for the services needed

    Run the following commands to download and run Elasticsearch locally:

    # Create a shared Docker network so the two containers can talk to each other
    docker network create elasticnetwork

    # Download the Elasticsearch docker image to your computer
    docker pull elasticsearch:7.10.1

    # Create a local container with Elasticsearch running
    docker run -d --name my_elasticsearch --net elasticnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.ml.enabled=false" elasticsearch:7.10.1

    # Start the container (only needed again if it has been stopped)
    docker container start my_elasticsearch

    And then run the following commands to download and run Kibana locally:

    # Download the Kibana docker image to your computer
    docker pull kibana:7.10.1

    # Create a local container with Kibana running, pointed at the Elasticsearch container
    docker run -d --name my_kibana --net elasticnetwork -e ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 -p 5601:5601 kibana:7.10.1

    # Start the container (only needed again if it has been stopped)
    docker container start my_kibana

    Accessing Kibana

    Since Kibana will be connecting to our Elasticsearch container, which it was told to use with the ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 part of the Kibana create command, we really only need to interact with Kibana.

    Kibana has its own Dev Tools for querying Elasticsearch, which so far has been enough for my own use cases.

    Head to http://localhost:5601 to access your own Kibana installation.

    Note: You can also send curl requests directly to Elasticsearch from the terminal by targeting the http://127.0.0.1:9200 endpoint.
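    For example, a couple of quick sanity checks from the terminal:

    # Basic "is it up?" check
    curl http://127.0.0.1:9200

    # Cluster health summary
    curl "http://127.0.0.1:9200/_cluster/health?pretty"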

    Deleting the containers

    If you wish to remove Elasticsearch and/or Kibana from your computer, then enter the following commands into your terminal.

    Using Docker for local development makes this a cinch.

    # Stop the Elasticsearch container if it is running
    # (use the name you gave it in the "--name" argument as its handle)
    docker container stop my_elasticsearch

    # Delete the Elasticsearch container
    docker container rm my_elasticsearch

    # Stop the Kibana container if it is running
    # (use the name you gave it in the "--name" argument as its handle)
    docker container stop my_kibana

    # Delete the Kibana container
    docker container rm my_kibana
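    If you also created the shared elasticnetwork network earlier and no longer need it, you can remove that too:

    docker network rm elasticnetwork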

    If you need to set up the two programs again, you can just use the create commands shown above to create them as you did originally.


  • πŸ“‚

    Install MongoDB with Docker for local development

    Pull the Docker image for mongo down to your computer.

    docker pull mongo

    Run the mongo container in the background, isolated from the rest of your computer.

    # Command explained below
    docker run -d -p 27017:27017 --name mongodb -v /data/db:/data/db mongo

    What I love about this approach is that I don’t start muddying up my computer installing new programs — especially if it’s just for the purposes of experimenting with new technologies.

    The main run command explained:

    • “docker run -d” tells Docker to run in detached mode, which means it will run in the background. Otherwise, if we close that terminal it will stop execution of the program Docker is running (mongo in this case).
    • “-p 27017:27017” maps your computer’s port 27017 so it forwards its requests into the container on the same port. The left-hand side is your computer’s port and the right-hand side is the container’s port. (I always forget which is which.)
    • “--name mongodb” just gives the container that will be created a nice name. Otherwise Docker will generate a random name.
    • “-v /data/db:/data/db” tells Docker to map the /data/db directory on your computer to the /data/db directory in the container. This ensures that if you remove and recreate the container, you will retain the MongoDB data.
    • “mongo” is just telling Docker which image to use for the container.
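    To check everything is working, you can list the running containers and open a shell session against the database. Note that the client bundled in the image is mongosh on newer versions and mongo on older ones:

    # Confirm the container is running
    docker ps

    # Open the MongoDB shell inside the container
    # (use "mongo" instead of "mongosh" on older image versions)
    docker exec -it mongodb mongosh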


  • πŸ“‚

    Setting up my own labs

    I’m going to begin setting up my own “labs” area to play around with various web technologies.

    For the longest time now I have been holding myself back quite a bit by only really learning technologies around current roles at the time and for my own personal site. This has mainly revolved around Laravel and to a lesser extent WordPress.

    Whilst I will continue to love both of those projects, I do want to start pushing myself to learn things that are completely out of my comfort zone.

    I will also begin writing more about my learning, discoveries and new things that excite me in web development — something I haven’t done for a long while.


  • πŸ“‚

    Fixing my local development file / folder permissions

    # Give the group read, write and execute permissions on all directories
    # (execute is needed to be able to enter a directory)
    sudo find . -type d -exec chmod g+rwx {} +

    # Give the group read and write permissions on all files
    sudo find . -type f -exec chmod g+rw {} +

  • πŸ“‚

    Bulk converting large PS4 screenshot PNG images into 1080p JPGs

    I tend to have my screenshots set to the highest resolution when saving on my PlayStation 4.

    However, when I upload to the screenshots area of this website, I don’t want the images to be that big — either in dimensions or file size.

    This snippet is how I bulk convert those images ready for uploading. I use an Ubuntu 20.04 operating system when running this.

    # Make sure ImageMagick is installed
    sudo apt install imagemagick
    
    # Run the command
    mogrify -resize 1920x1080 -format jpg folder/*.png

    You can change the width x height dimensions after the -resize flag for your own required size, as well as changing the output image format after the -format flag. Note that -resize fits the image within those dimensions while keeping its aspect ratio.


  • πŸ“‚

    Updating PHP versions in Ubuntu 20.04

    For an older PHP project, I needed to install an older version of PHP. This is what I did to set that up.

    Installing a different PHP version

    sudo add-apt-repository ppa:ondrej/php
    sudo apt-get update
    sudo apt-get install -y php7.1

    Rebinding php to the required version

    Some of these binds are probably not needed. I think the main ones, at least for my use case, were php and phar.

    sudo update-alternatives --set php /usr/bin/php7.1
    sudo update-alternatives --set phar /usr/bin/phar7.1
    sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.1
    sudo update-alternatives --set phpize /usr/bin/phpize7.1
    sudo update-alternatives --set php-config /usr/bin/php-config7.1
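    You can then confirm that the command line is picking up the older version:

    php -v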

    For some reason the --set flag stopped working, so I had to use:

    sudo update-alternatives --config php
    sudo update-alternatives --config phar
    
    And so on for the rest, selecting the required version from the terminal prompt for each one.

    P.S. If using PHP-FPM, you could also set up different server conf files and point the FPM path to the version you need. I only needed this because I was using the command line in the older project.