Category: Programming

Linux, Laravel, PHP. My notes and mini-guides regarding development-related things.

  • 📂

    Connecting to a VPN in Arch Linux with nmcli

    nmcli is the command line tool for interacting with NetworkManager.

    For work I sometimes need to connect to a VPN using an .ovpn (OpenVPN) file.

    This method should work for other VPN types (I’ve only used OpenVPN).

    Installing the tools

    All three of the required programs are available via the official Arch repositories.
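    For reference, here is how I would install them in one go (the three package names are my assumption based on the Arch repositories; adjust if your setup differs):

    # NetworkManager itself, OpenVPN, and the plugin that connects the two.
    sudo pacman -S networkmanager openvpn networkmanager-openvpn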

    Importing the ovpn file into your Network Manager

    Once you’ve got the openvpn file on your computer, you can import it into your Network Manager configuration with the following command:

    # Replace the file path with your own correct one.
    nmcli connection import type openvpn file /path/to/your-file.ovpn

    You should see a message saying that the connection was successfully added.

    Activate the connection

    Activating the connection will connect you to the VPN specified with that .ovpn file.

    nmcli connection up your-file
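    If you’re unsure of the connection name (in my experience it defaults to the imported file’s name, minus the extension), you can list the connections NetworkManager knows about:

    # List all known connections with their names, UUIDs and types.
    nmcli connection show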

    If you need to provide a password for your VPN connection, you can add the --ask flag, which will make the connection up command ask you for a password:

    nmcli connection up your-file --ask

    Disconnect

    To disconnect from the VPN, just run the down command as follows:

    nmcli connection down your-file

    Other Links:

    Network Manager on the Arch Wiki.


  • 📂

    Installing and setting up the GitHub CLI

    What is the GitHub CLI?

    The GitHub CLI is GitHub’s official terminal tool for interacting with your GitHub account, as well as with any open source projects hosted on GitHub.

    I’ve only just begun looking into it but am already trying to make it part of my personal development flow.

    Installation

    You can see the installation instructions here, or if you’re running on Arch Linux, just run this:

    sudo pacman -S github-cli

    Once installed, you should be able to run the following command and see the version you have installed:

    gh --version

    Authenticating

    Before interacting with your GitHub account, you will need to log in via the CLI tool.

    Generate a GitHub Personal Access Token

    Firstly, I generate a personal access token on the GitHub website. From my settings page I head to “Developer Settings” > “Personal Access Tokens” > “Tokens (classic)”.

    I then create a new “classic” token (just my preference), select all permissions, and give it an appropriate name.

    Then I create it and keep the page open where it displays the access token, ready for pasting into the terminal during the authentication flow next.

    Go through the GitHub CLI authentication flow

    Start the authentication flow by running the command:

    gh auth login

    The following highlights are the options I select when going through the login flow. Your needs may vary.

    What account do you want to log into?
    > GitHub.com
    > GitHub Enterprise Server
    
    What is your preferred protocol for Git operations?
    > HTTPS
    > SSH
    
    Upload your SSH public key to your GitHub account?
    > /path/to/.ssh/id_rsa.pub
    > Skip
    
    How would you like to authenticate GitHub CLI?
    > Login with a web browser
    > Paste an authentication token

    I then paste in the access token from the still-open tokens page, and hit enter.

    You should see that it correctly authenticates you and displays who you are logged in as.

    Check out the official documentation to see all of the available actions you can perform on your account.
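    To give a flavour of what’s available, here are a few everyday commands; the subcommands are standard, though the repository name is a placeholder:

    # Clone one of your repositories.
    gh repo clone your-username/your-repo

    # List open pull requests in the current repository.
    gh pr list

    # Create a new issue interactively.
    gh issue create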


  • 📂

    Adding Laravel Jetstream to a fresh Laravel project

    I only have this post here as there were a couple of extra steps I took after the regular installation, which I wanted to keep a note of.

    Here are the changes made to my Inventory Manager.

    Follow the Jetstream Installation guide

    Firstly I just follow the official installation guide.

    When it came to running the Jetstream install command in the docs, this was the specific flavour I ran:

    php artisan jetstream:install livewire --pest

    This sets it up to use Livewire, as I wanted to learn that along the way, as well as setting up the Jetstream tests as Pest ones.

    Again, I’m not too familiar with Pest (still loving PHPUnit) but thought it was worth learning.

    Enable API functionality

    I want to build my Inventory Manager as a separate API and front end, so I enabled the API functionality after install.

    Enabling the built-in API functionality, which is Laravel Sanctum by the way, is as easy as uncommenting a line in your ./config/jetstream.php file:

    'features' => [
        // Features::termsAndPrivacyPolicy(),
        // Features::profilePhotos(),
        Features::api(),
        // Features::teams(['invitations' => true]),
        Features::accountDeletion(),
    ],

    The Features::api() line is commented out by default; just uncomment it and you’re good to go.

    Setup Pest testing

    The only thing that tripped me up was that I hadn’t previously set up Pest, which was causing the Jetstream tests to fail.

    So I ran the following command from the Pest documentation, modified for my Laravel Sail setup:

    ./vendor/bin/sail artisan pest:install

    I then also added the RefreshDatabase trait to my ./tests/TestCase.php file.
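    For reference, that change is just a one-line trait addition. A minimal sketch, assuming the default Laravel TestCase:

    <?php

    namespace Tests;

    use Illuminate\Foundation\Testing\RefreshDatabase;
    use Illuminate\Foundation\Testing\TestCase as BaseTestCase;

    abstract class TestCase extends BaseTestCase
    {
        use CreatesApplication;

        // Reset the database between tests so the Jetstream tests start clean.
        use RefreshDatabase;
    }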

    After that, all of my tests passed.

    That’s Jetstream set up and ready for me to continue.


  • 📂

    How I organize my Neovim configuration

    The entry point for my Neovim Configuration is the init.lua file.

    Init.lua

    My entrypoint file simply requires three other files:

    require 'user.plugins'
    require 'user.options'
    require 'user.keymaps'

    The user.plugins file is where I’m using Packer to require plugins for my configuration. I will be writing other posts around some of the plugins I use soon.

    The user.options file is where I set all of my Neovim settings — things such as mapping my leader key and setting the number of spaces per tab:

    vim.g.mapleader = " "
    vim.g.maplocalleader = " "
    
    vim.opt.expandtab = true
    vim.opt.shiftwidth = 4
    vim.opt.tabstop = 4
    vim.opt.softtabstop = 4
    
    ...etc...

    Finally, the user.keymaps file is where I set any general keymaps that aren’t associated with any specific plugins. For example, here I am remapping the arrow keys to specific buffer-related actions:

    -- Easier buffer navigation.
    vim.keymap.set("n", "<Left>", ":bp<CR>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Right>", ":bn<CR>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Down>", ":bd<CR>", { noremap = true, silent = true })
    vim.keymap.set("n", "<Up>", ":%bd<CR>", { noremap = true, silent = true })

    In that example, the left and right keys navigate to previous and next buffers. The down key closes the current buffer and the up key is the nuclear button that closes all open buffers.

    Plugin-specific setup and mappings

    For any plugin-specific setup and mappings, I am using Neovim’s “after” directory.

    Basically, for every plugin you install, you can add a lua file within a directory at ./after/plugin/ from the root of your Neovim configuration.

    So for example, to add settings / mappings for the “vim-test” plugin, I have added a file at: ./after/plugin/vim-test.lua with the following contents:

    vim.cmd([[
      let test#php#phpunit#executable = 'docker-compose exec -T laravel.test php artisan test'
      let test#php#phpunit#options = '--colors=always'
      let g:test#strategy = 'neovim'
      let test#neovim#term_position = "vert botright 85"
      let g:test#neovim#start_normal = 1
    ]])
    
    vim.keymap.set('n', '<leader>tn', ':TestNearest<CR>', { silent = false })
    vim.keymap.set('n', '<leader>tf', ':TestFile<CR>', { silent = false })
    vim.keymap.set('n', '<leader>ts', ':TestSuite<CR>', { silent = false })
    vim.keymap.set('n', '<leader>tl', ':TestLast<CR>', { silent = false })
    vim.keymap.set('n', '<leader>tv', ':TestVisit<CR>', { silent = false })

    This means that these settings and bindings will only be registered after the vim-test plugin has been loaded.

    I used to just have extra required files in my main init.lua file, but this feels so much cleaner in my opinion.

    Update: 9th February 2023 — when setting up Neovim on a fresh system, I noticed that I get a bunch of errors from the after files, as they execute on boot before I’ve actually installed the plugins. I will add protected calls to the plugins soon to mitigate these errors.
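    As a sketch of the protected-call pattern I mean, each after file can guard its require so a missing plugin is skipped quietly instead of erroring on boot (the module name here is just an example):

    -- Bail out early if the plugin isn't installed yet.
    local ok, plugin = pcall(require, 'example-plugin')
    if not ok then
      return
    end

    plugin.setup({})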


  • 📂

    Starting a new Laravel 9 project

    Whenever I start a new Laravel project, whether that’s a little side-project idea or just having a play, I try to follow the same process.

    I recently read Steve’s post here on starting your first Laravel 9 Application, so thought I would write down my own setup.

    Whereas Steve’s guide walks you through the beginnings of building a new app, I’m only going to show what I do to get a new project in a ready state I’m happy with before beginning a build.

    This includes initial setup, static analysis, Xdebug setup and CI pipeline setup (with GitHub Actions).


    Pre-requisites

    Before starting, I already have docker and docker-compose installed on my system (Arch Linux BTW).

    Oh, and curl is installed, which is used for pulling the project down in the initial setup.

    Other than that, everything that is needed is contained within the Docker containers.

    I then use Laravel’s quick setup from their documentation.


    Initial setup

    Using Laravel’s magic endpoint here, we can get a new Laravel project set up with docker-compose support right out of the box. This could take a little time — especially the first time you run it, as it downloads all of the Docker images needed for the local setup.

    curl -s https://laravel.build/my-new-site | bash

    At the end of the installation, it will ask you for your password in order to finalise the last steps.

    Once finished, you should be able to start up your new local project with the following command:

    cd my-new-site
    
    ./vendor/bin/sail up -d

    If you now direct your browser to http://localhost, you should see the default Laravel landing page.


    Code style fixing with Laravel Pint

    Keeping a consistent coding style across a project is one of the most important aspects of development — especially within teams.

    Pint is Laravel’s in-house tool for fixing any deviations from a given style guide, and is actually included as a dev dependency in new Laravel projects.

    Whether you accept its opinionated defaults or define your own rules in a “pint.json” file in the root of your project is up to you.
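    If you do define your own rules, a minimal pint.json could look something like this (the preset and the single rule shown are only illustrative):

    {
        "preset": "laravel",
        "rules": {
            "concat_space": {
                "spacing": "one"
            }
        }
    }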

    To run it, simply use the following command:

    ./vendor/bin/sail bin pint

    A fresh installation of Laravel should give you no issues whatsoever.

    I advise running this command often — especially before making new commits to your version control.


    Static Analysis with Larastan

    Static analysis is a great method for testing your code for things that would otherwise end up as runtime errors later down the line.

    It analyses your code without executing it, and warns of any bugs and breakages it finds. It’s clever stuff.

    Install Larastan with the following command:

    ./vendor/bin/sail composer require nunomaduro/larastan:^2.0 --dev

    Create a file called “phpstan.neon” in the root of your project with the following contents:

    includes:
        - ./vendor/nunomaduro/larastan/extension.neon
    
    parameters:
    
        paths:
            - app/
    
        # Level 9 is the highest level
        level: 5
    

    Then run the analyser with the following command:

    ./vendor/bin/sail bin phpstan analyse

    You can actually set the level in your phpstan.neon file to 9 and it will pass in a fresh Laravel application.

    The challenge is to keep it passing at level 9.


    Line by Line debugging with Xdebug

    At the time of writing, Xdebug does come installed with the Laravel Sail Dockerfiles. However, the setup does need an extra step to make it work fully (at least in my experience).

    Aside:

    There are two parts to xdebug to think about and set up.

    First is the server configuration — the installation of Xdebug on the PHP server and setting the correct configuration in the xdebug.ini file.

    The second part is setting up your IDE / PDE to accept the messages that xdebug is sending from the server in order to display the debugging information in a meaningful way.

    I will show here what is needed to get the server correctly set up. However, you will need to look into how your chosen editor receives Xdebug messages. VS Code has a plugin that is apparently easy to set up for this.

    I use Neovim, and will be sharing a guide soon on getting Xdebug debugging working there.

    Enable Xdebug in Laravel Sail

    In order to “turn on” xdebug in Laravel Sail, we just need to enable it by way of an environment variable in the .env file.

    Inside your project’s .env file, put the following:

    SAIL_XDEBUG_MODE=develop,debug

    Unfortunately, in my own experience this hasn’t been enough to get Xdebug working in my editor (Neovim). And looking around Stack Overflow et al., I’m not the only one.

    However, what follows is how I get the xdebug server correctly configured for me to debug in Neovim. You will need to take an extra step or two for your editor of choice in order to receive those xdebug messages and have them displayed for you.

    Publish the Sail runtime files

    One thing Laravel does really well is providing sensible defaults that are easy to override — and Sail is no different.

    Firstly, publish the Laravel Sail files to your project root with the following command:

    ./vendor/bin/sail artisan sail:publish

    Create an xdebug ini file

    After publishing the Sail files above, you will have a folder in the root of your project called “docker”. Within that folder are separate folders for each of the supported PHP versions.

    I like to use the latest version, so I would create my xdebug ini file in the ./docker/8.2/ directory, at the time of writing.

    I name my file ext-xdebug.ini and add the following contents to it. You may need extra lines depending on your IDE’s setup requirements too.

    [xdebug]
    xdebug.start_with_request=yes
    xdebug.discover_client_host=true
    xdebug.max_nesting_level=256
    xdebug.client_port=9003
    xdebug.mode=debug
    xdebug.client_host=host.docker.internal

    Add a Dockerfile step to use the new xdebug ini file

    Within the Dockerfile located at ./docker/8.2/Dockerfile, find the lines near the bottom of the file that are copying files from the project into the container, and add another copy line below them as follows:

    COPY start-container /usr/local/bin/start-container
    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    COPY php.ini /etc/php/8.2/cli/conf.d/99-sail.ini
    COPY ext-xdebug.ini /etc/php/8.2/cli/conf.d/ext-xdebug.ini

    Optionally rename the docker image

    It is recommended that you rename the image within your project’s ./docker-compose.yml file, towards the top:

    laravel.test:
        build:
            context: ./docker/8.2
            dockerfile: Dockerfile
            args:
                WWWGROUP: '${WWWGROUP}'
        image: renamed-sail-8.2/app # previously: sail-8.2/app

    This is only necessary if you have multiple Laravel projects using Sail, as the default image name will clash between projects.

    Rebuild the image

    Now we need to rebuild the image in order to get our new xdebug configuration file into our container.

    From the root of your project, run the following command to rebuild the container without using the existing cache.

    ./vendor/bin/sail build --no-cache

    Then bring the containers up again:

    ./vendor/bin/sail up -d

    Continuous Integration with Github Actions

    I use GitHub for storing a backup of my projects.

    I have recently started using GitHub Actions to run a workflow that tests my code when I push it to the repository.

    In that workflow, it first installs the code and its dependencies. It then creates an artifact tar file of that working codebase and uses it for the three subsequent jobs, which run in parallel: Pint code fixing, Larastan static analysis, and the feature & unit tests.

    The full CI workflow file I use is stored as a GitHub Gist. Copy the contents of that file into a file located in a ./.github/workflows/ directory. You can name the file itself whatever you’d like; a convention is to name it “ci.yml”.

    The GitHub Actions YAML explained

    When to run the action

    Firstly I only want the workflow to run when pushing to any branch and when creating pull requests into the “main” branch.

    on:
      push:
        branches: [ "*" ]
      pull_request:
        branches: [ "main" ]

    Setting up the code to be used in multiple CI checks

    I like to get the codebase into a testable state and reuse that state for all of my tests / checks.

    This not only keeps each CI step separated from the others, but also means I can run them in parallel.

    setup:
        name: Setting up CI environment
        runs-on: ubuntu-latest
        steps:
        - uses: shivammathur/setup-php@15c43e89cdef867065b0213be354c2841860869e
          with:
            php-version: '8.1'
        - uses: actions/checkout@v3
        - name: Copy .env
          run: php -r "file_exists('.env') || copy('.env.example', '.env');"
        - name: Install Dependencies
          run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
        - name: Generate key
          run: php artisan key:generate
        - name: Directory Permissions
          run: chmod -R 777 storage bootstrap/cache
        - name: Tar it up 
          run: tar -cvf setup.tar ./
        - name: Upload setup artifact
          uses: actions/upload-artifact@v3
          with:
            name: setup-artifact
            path: setup.tar
    

    This step creates an artifact tar file from the project once it has been set up and had its dependencies installed.

    That tar file will then be called upon in the three following CI steps, extracted and used for each test / check.

    Running the CI steps in parallel

    Each of the CI steps I have defined — “pint”, “larastan” and “test-suite” — all require the “setup” step to have completed before running.

    pint:
        name: Pint Check
        runs-on: ubuntu-latest
        needs: setup
        steps:
        - name: Download Setup Artifact
          uses: actions/download-artifact@v3
          with:
            name: setup-artifact
        - name: Extraction
          run: tar -xvf setup.tar
        - name: Running Pint
          run: ./vendor/bin/pint

    This is because they all use the artifact that is created in that setup step — the codebase with all dependencies installed, in a testable state, ready to be extracted in each of the CI steps.

    Those three jobs run in parallel by default; there’s nothing more we need to do there.
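    For the other two jobs, only the name and the final run step change. Here is a sketch of the larastan job, assuming the same artifact and the phpstan command from earlier (the gist has the exact definition):

    larastan:
        name: Larastan Static Analysis
        runs-on: ubuntu-latest
        needs: setup
        steps:
        - name: Download Setup Artifact
          uses: actions/download-artifact@v3
          with:
            name: setup-artifact
        - name: Extraction
          run: tar -xvf setup.tar
        - name: Running Larastan
          run: ./vendor/bin/phpstan analyse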

    Using the example gist file as-is should result in a fully passing suite.


    Further Steps

    That is the end of my process for starting a fresh Laravel project, but there are other steps that will inevitably come later on — not least the continuous delivery (deployment) of the application when the time arises.

    You could leverage the excellent Laravel Forge for your deployments — and I would actually recommend this approach.

    However, I do have a weird interest in Kubernetes at the moment, and so will be putting together a tutorial for deploying your Laravel application to Kubernetes on DigitalOcean. Keep an eye out for that guide — I will advertise the post on my Twitter page when it goes live.


  • 📂

    Given, When, Then — how I approach Test-driven development in Laravel

    Laravel is an incredible PHP framework and the best starting point for pretty much any web-based application (if writing it in PHP, that is).

    Along with its many amazing features comes a beautiful framework from which to test what you are building.

    For the longest time I cowered at the idea of writing automated tests for what I built. It was a way of working that was brought in by a previous workplace of mine and my brain fought against it for ages.

    Since that time a few years ago, I have slowly come to like the idea of testing. Then over the past year or so I have grown to love it.

    I have met some people who are incredibly talented developers, but who made the prospect of automated testing feel both confusing and intimidating to me.

    That was until I came across Adam Wathan’s excellent Test Driven Laravel course. He made testing immediately approachable and broke it down into three distinct phases (per test): “Given”, “When” and “Then” — also known as “Arrange”, “Act” and “Assert”. I forget which set of terms he used, but either way the idea goes like this.

    “Given” this environment

    The first step is to set up the “world” in which the test should happen.

    One example would be if you were building an API that would return PlayStation game data to you. In order to return games, there must be games there to return.

    In Laravel we have factories for quickly creating test entries for our models. Here is an example of a Game model using its factory to create a game for us:

    $game = Game::factory()->create([
        'title' => 'The Last of Us part 2',
        'developer' => 'Naughty Dog',
    ]);

    “When” this thing I want to test happens

    Here you would do the thing that you are testing.

    Maybe that is sending some data to an API endpoint in your application. Or perhaps you are testing a single utility class can do a specific action, so you call the method on that class.

    Here, let’s continue the idea of returning games from an API call. We’ll use the $game variable from the previous example and access its ID to build our GET endpoint:

    $response = $this->json('get', '/api/games/' . $game->id);

    Here the $response variable holds the response from the JSON GET call, allowing you to make assertions against it later.

    “Then” I should see this particular outcome

    In this last step you would make assertions against what has happened. This could be checking if a record exists in a database with specific values, or asserting that an email got sent.

    Basically anything you need to make sure happened, or didn’t happen, for you to be sure you are getting your desired functionality.

    Let’s finish our game example by asserting that we got JSON back with the expected data. We do this by calling the appropriate method on the $response variable from the previous example.

    $response->assertJson([
        'title' => 'The Last of Us part 2',
        'developer' => 'Naughty Dog',
    ]);

    The full example test code

    $game = Game::factory()->create([
        'title' => 'The Last of Us part 2',
        'developer' => 'Naughty Dog',
    ]);
    $response = $this->json('get', '/api/games/' . $game->id);
    $response->assertJson([
        'title' => 'The Last of Us part 2',
        'developer' => 'Naughty Dog',
    ]);

    Much more to explore

    There is so much to automated testing and I’m still relatively new to it all myself.

    You can “fake” other things in your application so that live side effects don’t run in tests. For example, when testing that emails are sent, you don’t really want to actually send emails when running your tests. Therefore you would “fake” the mail-sending functionality.
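    As a small sketch of that, Laravel’s Mail facade can be faked and then asserted against; the OrderShipped mailable here is hypothetical:

    use App\Mail\OrderShipped;
    use Illuminate\Support\Facades\Mail;

    Mail::fake(); // No real mail is sent from this point on.

    // ... perform the action that should send the email ...

    // Assert the mailable was dispatched.
    Mail::assertSent(OrderShipped::class);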

    I hope that this post has been an easy-to-follow intro to how I myself approach testing.

    I have found that even as my tests have gotten more complex in certain situations, I still always stick to the same structural idea:

    1. Given this is the world my code lives in.
    2. When I perform this particular action.
    3. Then I should see this specific outcome.


  • 📂

    PHP Psalm warning for RouteServiceProvider configureRateLimiting method

    When running Psalm in a Laravel project, I get the following error by default:

    PossiblyNullArgument - app/Providers/RouteServiceProvider.php:45:46 - 
    Argument 1 of Illuminate\Cache\RateLimiting\Limit::by cannot be null, 
    possibly null value provided

    This is the default implementation for configureRateLimiting in the RouteServiceProvider class in Laravel:

    protected function configureRateLimiting()
    {
        RateLimiter::for('api', function (Request $request) {
            return Limit::perMinute(60)->by($request->user()?->id ?: $request->ip());
        });
    }

    I change it to the following to get psalm to pass (I’ve added named parameters and the static keyword before the callback function):

    protected function configureRateLimiting()
    {
        RateLimiter::for(name: 'api', callback: static function (Request $request) {
            $limitIdentifier = $request->user()?->id ?: $request->ip();
            if (!is_null($limitIdentifier)) {
                return Limit::perMinute(maxAttempts: 60)->by(key: $limitIdentifier);
            }
        });
    }

  • 📂

    Sprinklings of Docker for local development

    When I search for Docker-related topics online, it seems to me that there are, for the most part, two trains of thought:

    • Those who use a full docker / docker-compose setup for local development.
    • Those who hate and/or fear docker and would rather just install and do everything locally.

    I believe either of these is a valid approach — whatever feels right to you. Of course, it also depends on how your company / team works.

    But I’d like to introduce you to a third way of working on a project — sprinklings of Docker, I call it 😀.

    The idea is essentially to use Docker for only certain things in a project as you develop it locally.

    This is how I tend to work, but is by no means what I would call “the right way”; it’s just what works best for me.

    How I work with Docker

    I am primarily a Laravel developer, and work as such at the excellent company — and Laravel Partner — Jump 24.

    As I am a PHP developer, it stands to reason that I have PHP installed on my system. I also have nginx installed, so I can run a PHP application locally and serve it at a local domain without needing Docker.

    Historically, when I needed a MySQL database (which is often the case), I would have installed MySQL on my system.

    Which is fine.

    But I’m becoming a bit of a neat freak in my older age and so want to keep my computer as clean as possible within reason.

    So what I do now is start a new docker container for MySQL and connect to that instead:

    # Bash command to start up a docker container with MySQL in it
    # And use port 33061 on my local machine to connect to it.
    docker run \
    --name=mysql \
    --publish 33061:3306 \
    --env MYSQL_DATABASE=my_disposable_db \
    --env MYSQL_ROOT_PASSWORD=password \
    --detach mysql

    Then in my Laravel .env configuration I would add this:

    DB_HOST=127.0.0.1
    DB_PORT=33061
    DB_DATABASE=my_disposable_db
    DB_USERNAME=root
    DB_PASSWORD=password
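    To sanity-check that the container is up and accepting connections, without installing a MySQL client on the host, you can use the client bundled inside the container (assuming the container name and credentials from above):

    # Open a MySQL shell inside the running container.
    docker exec -it mysql mysql -uroot -ppassword my_disposable_db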

    The benefit of working this way is that if anything happens to my MySQL container — any corruption, or just ending up with a whole mess of old and new databases in there — I can just destroy the container and start a new one afresh.

    Not to mention when I want to upgrade the MySQL version I’m working with… or even test with a lower version.

    docker container stop mysql
    docker container rm mysql
    # And then re-run the "docker run" command above.
    # Or even run it with different variables / ports.

    The same goes for other database engines and services too: Postgres, Redis, MariaDB. Any of them can just be started up on your system as a standalone Docker container and connected to easily from your website / app in development.

    # Start a Postgres container
    docker run \
    --name postgres \
    --publish 5480:5432 \
    --env POSTGRES_PASSWORD=password \
    --detach postgres:11-alpine
    
    # Start a redis container
    docker run \
    --name redis \
    --publish 6379:6379 \
    --detach redis
    
    # Start a Mariadb container
    docker run \
    --name some-mariadb \
    --publish 33062:3306 \
    --env MARIADB_USER=example-user \
    --env MARIADB_PASSWORD=my_cool_secret \
    --env MARIADB_ROOT_PASSWORD=my-secret-pw  \
    --detach mariadb

    And with them all being self-contained and able to be exposed on any port of the host machine, you could have as many running at the same time as you wanted… if you were so inclined.

    I love how this approach keeps my computer clean of extra programs. And how it makes it super easy to have multiple versions of the same thing installed at the same time.

    Docker doesn’t have to be scary when taken in small doses. 😊


  • 📂

    PHP’s __call magic method and named arguments

    Whilst working on a little library recently, I discovered some interesting behavior with PHP’s __call magic method, specifically around using named arguments in methods that are caught by __call.

    Given the following class:

    <?php
    class EmptyClass
    {
        public function __call(string $name, array $args)
        {
            var_dump($args); die;
        }
    }

    Calling a non-existent method without named parameters results in the arguments being given to __call as an indexed array:

    $myClass = new EmptyClass;
    
    $myClass->method(
        'Argument A',
        'Argument B',
    );
    
    // This var dumps: [0 => 'Argument A', 1 => 'Argument B']

    However, passing those values with named parameters will cause them to be given to __call as an associative array:

    $myClass = new EmptyClass;
    
    $myClass->method(
        firstArg: 'Argument A',
        secondArg: 'Argument B',
    );
    
    // This var dumps: ['firstArg' => 'Argument A', 'secondArg' => 'Argument B']
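    A related detail: since PHP 8.1 you can spread a string-keyed array back into a call, so a __call implementation can forward named arguments on to a real object. A minimal sketch (the wrapped object is hypothetical):

    <?php
    class ForwardingProxy
    {
        public function __construct(private object $inner)
        {
        }

        public function __call(string $name, array $args)
        {
            // String keys in $args become named arguments again (PHP 8.1+).
            return $this->inner->$name(...$args);
        }
    }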

    I’m not sure if this is helpful to anyone, but I found it quite interesting, so thought I’d share. 🙂


  • 📂

    What is the PHP __call magic method?

    Consider this PHP class:

    <?php
    class FooClass
    {
        public function bar(): string
        {
            return 'Bar';
        }
    }

    We could call the bar method as follows:

    <?php
    $fooClass = new FooClass;
    
    $fooClass->bar();
    
    // returns the string 'Bar'

    However, in PHP, we have the ability to call methods that don’t actually exist on a class. They can instead be caught by a “magic method” named __call, which you can define on your class.

    <?php
    class BazClass
    {
        public function __call(string $name, array $args)
        {
            // $name will be given the value of the method
            // that you are trying to call
    
            // $args will be given all of the values that
            // you have passed into the method you are
            // trying to call
        }
    }

    So if you instantiated the BazClass above and called a non-existing method on it with some arguments, you would see the following behavior:

    <?php
    $bazClass = new BazClass;
    $bazClass->lolcats('are', 'awesome');

    In this example, BazClass’s __call method would catch this method call, as there is no method on it named lolcats.

    The $name value in __call would then be set to the string “lolcats”, and the $args value would be set to the array [0 => 'are', 1 => 'awesome'].

    You may not end up using the __call method much in your day-to-day work, but it is used by frameworks that you will possibly be using, such as Laravel.
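    To make that concrete, here is a small invented example of the kind of thing __call enables: dynamic getters resolved at call time (the Person class and its attributes are purely illustrative):

    <?php
    class Person
    {
        private array $attributes = ['name' => 'Ada', 'job' => 'Engineer'];

        public function __call(string $name, array $args)
        {
            // Turn getName() / getJob() style calls into attribute lookups.
            if (str_starts_with($name, 'get')) {
                return $this->attributes[strtolower(substr($name, 3))] ?? null;
            }
        }
    }

    $person = new Person;
    echo $person->getName(); // "Ada"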