Tag: Github

  • 📂

    How I deploy a Laravel project to a Kubernetes Cluster

    WIP: post not yet finalized.

This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.

This is a descriptive post and contains none of the technical detail for setting up this infrastructure.

    That will come in future posts.

    Services / Websites I use

    Digital Ocean

Within Digital Ocean, I use their managed Kubernetes, managed database, DNS, S3-compatible Spaces with CDN, and container registry.

    Github

Github is what I use as the origin repository for all IaC code and project code. I also use the Actions CI features for automated tests and deployments.

    Terraform

I use Terraform for creating my infrastructure, along with Terraform Cloud for hosting my Terraform state files.

    Setting up the infrastructure

I first set up my infrastructure in Digital Ocean and Github using Terraform.

This infrastructure includes these resources in Digital Ocean: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also includes two Actions secrets in Github: the Digital Ocean access token and the Digital Ocean registry endpoint.
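Once those resources are defined, the Terraform workflow itself is just the usual three commands (shown here as a rough sketch; I’m assuming the Terraform Cloud backend is already configured in the project):

# Initialise the providers and the Terraform Cloud backend
terraform init

# Preview the Digital Ocean and Github resources that will be created
terraform plan

# Create the infrastructure
terraform apply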

After the initial infrastructure is set up — the Kubernetes cluster specifically — I then use Helm to install the nginx-ingress-controller into the cluster.
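That installation boils down to a couple of Helm commands, something along these lines (the chart, release name and namespace here are my own assumptions rather than anything prescribed in this post):

# Add the chart repository for the nginx ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install it into its own namespace in the cluster
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx \
    --create-namespace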

    Setting up a Laravel project

    I use Laravel Sail for local development.

    For deployments I write a separate Dockerfile which builds off of a php-fpm container.

Any environment variables I need, I add as a Kubernetes secret via the kubectl command from my local machine.
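As an example, a secret can be created straight from an env file, or built up from individual values (the secret name, file name and keys below are placeholders of my own):

# Create a secret from a local env file
kubectl create secret generic laravel-env --from-env-file=.env.production

# Or build it up from individual values
kubectl create secret generic laravel-env \
    --from-literal=APP_ENV=production \
    --from-literal=DB_PASSWORD=changeme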

    Kubernetes deployment file

Everything that my Kubernetes cluster needs to know in order to deploy my Laravel project is in a deployment.yml file in the project itself.

    This file is used by the Github action responsible for deploying the project.

    Github action workflows

    I add two workflow files for the project inside the ./.github/workflows/ directory. These are:

    ci.yml

    This file runs the full test suite, along with pint and larastan.

    deploy.yml

    This file is triggered only on the main branch, after the Tests (ci) action has completed successfully.

    It will build the container image and tag it with the current git sha.

Following that, it will install doctl and authenticate with my Digital Ocean account using the Actions secret containing the access token I added during the initial Terraform stage.

    Then it pushes that image to my Digital Ocean container registry.
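Outside of the Actions workflow, the equivalent steps would look roughly like this (the registry and image names are placeholders, and the exact commands in my workflow may differ):

# Authenticate doctl, then log docker into the Digital Ocean registry
doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"
doctl registry login

# Build the image, tag it with the current git sha and push it
docker build -t registry.digitalocean.com/my-registry/app:$(git rev-parse HEAD) .
docker push registry.digitalocean.com/my-registry/app:$(git rev-parse HEAD)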

The next step does a find and replace on the project’s deployment.yml file. I’ve included a snippet of that file below:

      containers:
      - name: davidpeachcouk
        image: <IMAGE>
        ports:
        - containerPort: 9000

    It replaces that <IMAGE> placeholder with the full path to the newly-created image. It uses the other Github secret that was added in the Terraform stage: the Digital Ocean Registry Endpoint.
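A one-liner with sed is enough for that find and replace (a sketch; the registry path here is a placeholder built from the registry endpoint secret and the git sha):

sed -i "s|<IMAGE>|${REGISTRY_ENDPOINT}/app:${GITHUB_SHA}|" deployment.yml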

Finally it sets up access to the Kubernetes cluster using the authenticated doctl command, before applying the deployment.yml file with the kubectl command. After that, it does a check to see that the deployment was a success.
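Those final steps map onto commands roughly like these (the cluster and deployment names are placeholders):

# Fetch kubeconfig credentials for the cluster
doctl kubernetes cluster kubeconfig save my-cluster

# Apply the manifest and wait for the rollout to finish
kubectl apply -f deployment.yml
kubectl rollout status deployment/davidpeachcouk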


  • 📂

    Setting up a GPG Key with git to sign your commits

Signing your git commits with GPG is really easy to set up and I’m always surprised by how many developers I meet who don’t do this.

Of course it’s not required for pushing commits and has no bearing on the quality of your code. But that green verified message next to your commits does feel good.

    Essentially there are three parts to this:

    1. Create your GPG key
    2. Tell git to use your GPG key to sign your commits
    3. Upload the public part of your GPG key to Gitlab / Github / etc

    Creating the GPG key if needed

    gpg --full-generate-key
    

    In the interactive guide, I choose:

    1. (1) RSA and RSA (default)
    2. 4096 bits long
    3. Does not expire
    4. Fill in Name, Email, Comment and Confirm.
    5. Enter passphrase when prompted.

    Getting the Key ID

    This will list all of your keys:

    gpg --list-secret-keys --keyid-format=long
    

    Example of the output:

    sec   rsa4096/THIS0IS0YOUR0KEY0ID 2020-12-25 [SC]
          KGHJ64GHG6HJGH5J4G6H5465HJGHJGHJG56HJ5GY
uid                 [ultimate] Bob GPG Key <mail@your-domain.co.uk>
    

    In that example, the key id that you would need next is “THIS0IS0YOUR0KEY0ID” from the first line, after the forward slash.

    Tell your local git about the signing key

    To set the gpg key as the signing key for all of your git projects, run the following global git command:

    git config --global user.signingkey THIS0IS0YOUR0KEY0ID
    

    If you want to do it on a repository by repository basis, you can run it from within each project, and omit the --global flag:

    git config user.signingkey THIS0IS0YOUR0KEY0ID
    

    Signing your commits

You can either set commit signing to true for all projects as the default, or on a repo-by-repo basis.

    # global
    git config --global commit.gpgsign true
    
    # local
    git config commit.gpgsign true
    

If you wanted to, you could even sign on a per-commit basis, skipping the config setting entirely and passing a flag on every commit:

    git commit -S -m "My signed commit message"
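
To confirm that signing is working, you can inspect your most recent commit and look for the signature details in the output:

git log --show-signature -1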
    

    Adding your public key to gitlab / github / wherever

    Firstly export the public part of your key using your key id. Again, using the example key id from above:

    # Show your public key in terminal
    gpg --armor --export THIS0IS0YOUR0KEY0ID
    
    # Copy straight to your system clipboard using "xclip"
    gpg --armor --export THIS0IS0YOUR0KEY0ID | xclip -sel clipboard
    

This will spit out a large key text block beginning and ending with comments. Copy all of the text that it gives you and paste it into the gpg textbox in your git forge of choice – gitlab / github / gitea / etc.


  • 📂

    Installing and setting up github cli

    What is the github cli

The Github CLI tool is the official Github terminal tool for interacting with your Github account, as well as any open source projects hosted on Github.

    I’ve only just begun looking into it but am already trying to make it part of my personal development flow.

    Installation

    You can see the installation instructions here, or if you’re running on Arch Linux, just run this:

    sudo pacman -S github-cli

    Once installed, you should be able to run the following command and see the version you have installed:

    gh --version

    Authenticating

Before interacting with your Github account, you will need to log in via the cli tool.

    Generate a Github Personal Access Token

    Firstly, I generate a personal access token on the Github website. In my settings page I head to “Developer Settings” > “Personal Access Tokens” > “Tokens (classic)”.

I then create a new “classic” token (just my preference), select all permissions and give it an appropriate name.

Then I create it and keep open the page displaying the access token, ready to paste into the terminal during the authentication flow next.

    Go through the Github CLI authentication flow

    Start the authentication flow by running the command:

    gh auth login

    The following highlights are the options I select when going through the login flow. Your needs may vary.

    What account do you want to log into?
    > Github.com
    > Github Enterprise Server
    
    What is your preferred protocol for Git operations?
    > HTTPS
    > SSH
    
    Upload your SSH public key to your Github account?
    > /path/to/.ssh/id_rsa.pub
    > Skip
    
    How would you like to authenticate Github CLI?
    > Login with a web browser
    > Paste an authentication token

    I then paste in the access token from the still-open tokens page, and hit enter.

    You should see it correctly authenticates you and displays who you are logged in as.

    Check out the official documentation to see all of the available actions you can perform on your account.
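A few commands worth trying once you’re logged in, just to get a feel for the tool (these are my own suggestions, not part of the login flow above):

# Check which account you are authenticated as
gh auth status

# List your repositories
gh repo list

# List open pull requests for the current repository
gh pr list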


  • 📂

    Starting a new Laravel 9 project

    Whenever I start a new Laravel project, whether that’s a little side-project idea or just having a play, I try to follow the same process.

    I recently read Steve’s post here on starting your first Laravel 9 Application, so thought I would write down my own setup.

Whereas Steve’s guide walks you through the beginnings of building a new app, I’m only going to show what I do to get a new project into a ready state that I’m happy with, before beginning a build.

    This includes initial setup, static analysis, xdebug setup and CI pipeline setup (with Github Actions).


    Pre-requisites

    Before starting, I already have docker and docker-compose installed for my system (Arch Linux BTW).

    Oh and curl is installed, which is used for pulling the project down in the initial setup.

    Other than that, everything that is needed is contained within the Docker containers.

    I then use Laravel’s quick setup from their documentation.


    Initial setup

Using Laravel’s magic endpoint here, we can get a new Laravel project set up with docker-compose support right out of the box. This could take a little time — especially the first time you run it, as it downloads all of the docker images needed for the local setup.

    curl -s https://laravel.build/my-new-site | bash

At the end of the installation, it will ask you for your password in order to finalise the last steps.

    Once finished, you should be able to start up your new local project with the following command:

    cd my-new-site
    
    ./vendor/bin/sail up -d

If you now direct your browser to http://localhost, you should see the default Laravel landing page.


    Code style fixing with Laravel Pint

Keeping a consistent coding style across a project is one of the most important aspects of development — especially within teams.

Pint is Laravel’s in-house library for fixing any deviations from a given style guide, and it is actually included as a dev dependency in new Laravel projects.

Whether you accept its opinionated defaults or define your own rules in a “pint.json” file in the root of your project is up to you.

    In order to run it, you simply run the following command:

    ./vendor/bin/sail bin pint

    A fresh installation of Laravel should give you no issues whatsoever.

I advise you to run this command often — especially before making new commits to your version control.
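If you want to enforce that habit, one option is a small git pre-commit hook; this is my own suggestion rather than anything Laravel ships with:

#!/bin/sh
# Save as .git/hooks/pre-commit and make it executable:
#   chmod +x .git/hooks/pre-commit
# Pint's --test flag reports style violations without fixing them,
# failing the commit if any are found.
./vendor/bin/sail bin pint --test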


    Static Analysis with Larastan

Static analysis is a great method for catching things in your code that would perhaps end up as runtime errors later down the line.

    It analyses your code without executing it, and warns of any bugs and breakages it finds. It’s clever stuff.

    Install Larastan with the following command:

    ./vendor/bin/sail composer require nunomaduro/larastan:^2.0 --dev

    Create a file called “phpstan.neon” in the root of your project with the following contents:

    includes:
        - ./vendor/nunomaduro/larastan/extension.neon
    
    parameters:
    
        paths:
            - app/
    
        # Level 9 is the highest level
        level: 5
    

    Then run the analyser with the following command:

    ./vendor/bin/sail bin phpstan analyse

    You can actually set the level in your phpstan.neon file to 9 and it will pass in a fresh Laravel application.

    The challenge is to keep it passing at level 9.
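If you want to experiment with stricter levels before committing to one in the config file, phpstan also accepts the level as a command-line option, which overrides the value in phpstan.neon:

./vendor/bin/sail bin phpstan analyse --level=9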


    Line by Line debugging with Xdebug

At the time of writing, xdebug does come installed with the Laravel Sail dockerfiles. However, the setup does need an extra step to make it work fully (at least in my experience).

    Aside:

    There are two parts to xdebug to think about and set up.

The first part is the server configuration — this is the installation of xdebug on the php server and setting the correct configuration in the xdebug.ini file.

    The second part is setting up your IDE / PDE to accept the messages that xdebug is sending from the server in order to display the debugging information in a meaningful way.

I will show here what is needed to get the server correctly set up. However, you will need to look into how your chosen editor works to receive xdebug messages. VS Code has a plugin that is apparently easy to set up for this.

I use Neovim, and will be sharing a guide soon for how to get debugging with xdebug working there.

    Enable Xdebug in Laravel Sail

    In order to “turn on” xdebug in Laravel Sail, we just need to enable it by way of an environment variable in the .env file.

    Inside your project’s .env file, put the following:

    SAIL_XDEBUG_MODE=develop,debug

Unfortunately, in my own experience this hasn’t been enough to have xdebug working in my editor (Neovim). And looking around Stack Overflow et al., I’m not the only one.

    However, what follows is how I get the xdebug server correctly configured for me to debug in Neovim. You will need to take an extra step or two for your editor of choice in order to receive those xdebug messages and have them displayed for you.

    Publish the Sail runtime files

One thing Laravel does really well is providing sensible defaults that are easy to override — and Sail is no different.

    Firstly, publish the Laravel sail files to your project root with the following command:

    ./vendor/bin/sail artisan sail:publish

    Create an xdebug ini file

    After publishing the sail stuff above, you will have a folder in the root of your project called “docker”. Within that folder you will have different folders for each of the supported PHP versions.

    I like to use the latest version, so I would create my xdebug ini file in the ./docker/8.2/ directory, at the time of writing.

I name my file ext-xdebug.ini, and add the following contents to it. You may need extra lines depending on your IDE’s setup requirements too.

    [xdebug]
    xdebug.start_with_request=yes
    xdebug.discover_client_host=true
    xdebug.max_nesting_level=256
    xdebug.client_port=9003
    xdebug.mode=debug
    xdebug.client_host=host.docker.internal

    Add a Dockerfile step to use the new xdebug ini file

    Within the Dockerfile located at ./docker/8.2/Dockerfile, find the lines near the bottom of the file that are copying files from the project into the container, and add another copy line below them as follows:

    COPY start-container /usr/local/bin/start-container
    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    COPY php.ini /etc/php/8.2/cli/conf.d/99-sail.ini
    COPY ext-xdebug.ini /etc/php/8.2/cli/conf.d/ext-xdebug.ini

    Optionally rename the docker image

It is recommended that you rename the image within your project’s ./docker-compose.yml file, towards the top:

laravel.test:
    build:
        context: ./docker/8.2
        dockerfile: Dockerfile
        args:
            WWWGROUP: '${WWWGROUP}'
    image: renamed-sail-8.2/app # previously: sail-8.2/app

This is only needed if you have multiple Laravel projects using Sail, as the default image name will clash between projects.

Rebuild the image

    Now we need to rebuild the image in order to get our new xdebug configuration file into our container.

    From the root of your project, run the following command to rebuild the container without using the existing cache.

    ./vendor/bin/sail build --no-cache

    Then bring the containers up again:

    ./vendor/bin/sail up -d

    Continuous Integration with Github Actions

    I use Github for storing a backup of my projects.

I have recently started using Github Actions to run a workflow for testing my code when I push it to the repository.

In that workflow it first installs the code and its dependencies. It then creates an artifact tar file of that working codebase and uses it for the three subsequent jobs, which run in parallel: Pint code fixing, Larastan static analysis, and the Feature & Unit tests.

    The full ci workflow file I use is stored as a Github Gist. Copy the contents of that file into a file located in a ./.github/workflows/ directory. You can name the file itself whatever you’d like. A convention is to name it “ci.yml”.

    The Github Action yaml explained

    When to run the action

    Firstly I only want the workflow to run when pushing to any branch and when creating pull requests into the “main” branch.

    on:
      push:
        branches: [ "*" ]
      pull_request:
        branches: [ "main" ]

Setting up the code to be used in multiple CI checks

    I like to get the codebase into a testable state and reuse that state for all of my tests / checks.

This not only keeps each CI step separated from the others, but also means I can run them in parallel.

    setup:
        name: Setting up CI environment
        runs-on: ubuntu-latest
        steps:
        - uses: shivammathur/setup-php@15c43e89cdef867065b0213be354c2841860869e
          with:
            php-version: '8.1'
        - uses: actions/checkout@v3
        - name: Copy .env
          run: php -r "file_exists('.env') || copy('.env.example', '.env');"
        - name: Install Dependencies
          run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
        - name: Generate key
          run: php artisan key:generate
        - name: Directory Permissions
          run: chmod -R 777 storage bootstrap/cache
        - name: Tar it up 
          run: tar -cvf setup.tar ./
        - name: Upload setup artifact
          uses: actions/upload-artifact@v3
          with:
            name: setup-artifact
            path: setup.tar
    

This step creates an artifact tar file of the project once it has been set up and had its dependencies installed.

    That tar file will then be called upon in the three following CI steps, extracted and used for each test / check.

    Running the CI steps in parallel

Each of the CI steps I have defined — “pint”, “larastan” and “test-suite” — requires the “setup” step to have completed before running.

    pint:
        name: Pint Check
        runs-on: ubuntu-latest
        needs: setup
        steps:
        - name: Download Setup Artifact
          uses: actions/download-artifact@v3
          with:
            name: setup-artifact
        - name: Extraction
          run: tar -xvf setup.tar
        - name: Running Pint
          run: ./vendor/bin/pint

This is because they all use the artifact that is created in that setup step: the codebase with all dependencies installed, in a testable state, ready to be extracted in each of the CI steps.


Those three steps will run in parallel by default; there’s nothing we need to do there.

Using the example gist file as-is should result in a fully passing suite.


    Further Steps

That is the end of how I start a new Laravel project from fresh, but there are other steps that will inevitably come later on — not least the continuous delivery (deployment) of the application when the time arises.

    You could leverage the excellent Laravel Forge for your deployments — and I would actually recommend this approach.

    However, I do have a weird interest in Kubernetes at the moment and so will be putting together a tutorial for deploying your Laravel Application to Kubernetes in Digital Ocean. Keep an eye out for that guide — I will advertise that post on my Twitter page when it goes live.