How I deploy a Laravel project to a Kubernetes Cluster

An overview of how I set up Kubernetes, and my projects to deploy to it.

WIP: post not yet finalized.

This is an overview of how I would set up a Kubernetes cluster, along with how I would set up my projects to deploy to that cluster.

This is a descriptive post; it doesn’t cover the technical details of setting up this infrastructure.

That will come in future posts.

Services / Websites I use

Digital Ocean

Within Digital Ocean, I use their managed Kubernetes, managed database, DNS, S3-compatible Spaces with CDN, and container registry.

Github

Github is where I keep the origin repositories for all of my IaC and project code. I also use its Actions CI features for automated tests and deployments.

Terraform

I use Terraform for creating my infrastructure, along with Terraform cloud for hosting my Terraform state files.

Setting up the infrastructure

I first set up my infrastructure in Digital Ocean and Github using Terraform.

This infrastructure includes these resources in Digital Ocean: a Kubernetes cluster, a Spaces bucket and a managed MySQL database. It also creates two Action secrets in Github: the Digital Ocean access token and the Digital Ocean registry endpoint.
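
From the IaC repository, creating those resources is then just the usual Terraform workflow (a quick sketch; Terraform Cloud holds the state):

terraform init    # connects to the Terraform Cloud backend for state
terraform plan    # preview the Digital Ocean and Github resources to be created
terraform apply   # create them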

After the initial infrastructure is set up (the Kubernetes cluster specifically), I then use Helm to install the nginx-ingress-controller into the cluster.
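
As a rough sketch, that Helm step looks something like the following, assuming the official ingress-nginx chart; the release and namespace names are just illustrative:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace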

Setting up a Laravel project

I use Laravel Sail for local development.

For deployments I write a separate Dockerfile, which builds from a php-fpm base image.

Any environment variables I need, I add as a Kubernetes secret via the kubectl command from my local machine.
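
Something along these lines, where the secret name and keys are placeholders for whatever the project actually needs:

kubectl create secret generic laravel-env \
  --from-literal=APP_KEY=base64:xxxxxxxx \
  --from-literal=DB_PASSWORD=xxxxxxxx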

Kubernetes deployment file

Everything my Kubernetes cluster needs to know in order to deploy my Laravel project is in a deployment.yml file in the project itself.

This file is used by the Github action responsible for deploying the project.

Github action workflows

I add two workflow files for the project inside the ./.github/workflows/ directory. These are:

ci.yml

This file runs the full test suite, along with Pint and Larastan.

deploy.yml

This file is triggered only on the main branch, after the Tests (ci) action has completed successfully.

It will build the container image and tag it with the current Git SHA.

Following that, it will install doctl and authenticate with my Digital Ocean account using the action secret holding the access token I added during the initial Terraform stage.

Then it pushes that image to my Digital Ocean container registry.
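
Roughly speaking, those workflow steps are equivalent to the following commands (the registry and image names are placeholders; GITHUB_SHA is provided by the Actions runner):

docker build -t registry.digitalocean.com/my-registry/davidpeachcouk:$GITHUB_SHA .
doctl registry login
docker push registry.digitalocean.com/my-registry/davidpeachcouk:$GITHUB_SHA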

The next step does a find-and-replace on the project’s deployment.yml file. I’ve included a snippet of that file below:

      containers:
      - name: davidpeachcouk
        image: <IMAGE>
        ports:
        - containerPort: 9000

It replaces that <IMAGE> placeholder with the full path to the newly-created image. It uses the other Github secret that was added in the Terraform stage: the Digital Ocean Registry Endpoint.
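
In practice that replacement can be a single sed call, something like this sketch, assuming the registry endpoint secret is exposed to the step as an environment variable:

sed -i "s|<IMAGE>|$REGISTRY_ENDPOINT/davidpeachcouk:$GITHUB_SHA|g" deployment.yml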

Finally, it sets up access to the Kubernetes cluster using the authenticated doctl command, before applying the deployment.yml file with the kubectl command. After that, it checks that the deployment was a success.
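
Those final steps boil down to something like the following, where the cluster and deployment names are illustrative:

doctl kubernetes cluster kubeconfig save my-cluster
kubectl apply -f deployment.yml
kubectl rollout status deployment/davidpeachcouk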

Don’t stop building

I really enjoy building scripts for my own workflow.

I wish I had the skills to build things in the real world, but until then I’ll keep building stuff in the digital space only.

Although I love working with PHP and Laravel, it is Bash that has re-ignited a passion in me to just build stuff without thinking it’s got to work towards being some kind of “profitable” side project.

Don’t. Stop. Building.

Adding Laravel Jetstream to a fresh Laravel project

I only have this post here as there were a couple of extra steps I took after the regular installation, which I wanted to keep a note of.

Here are the changes made to my Inventory Manager.

Follow the Jetstream Installation guide

Firstly I just follow the official installation guide.

When it came to running the Jetstream install command in the docs, this was the specific flavour I ran:

php artisan jetstream:install livewire --pest

This sets it up to use Livewire, as I wanted to learn that along the way, as well as setting up the Jetstream tests as Pest ones.

Again, I’m not too familiar with Pest (still loving PHPUnit) but thought it was worth learning.

Enable API functionality

I want to build my Inventory Manager as a separate API and front end, so I enabled the API functionality after install.

Enabling the built-in API functionality, which is Laravel Sanctum by the way, is as easy as uncommenting a line in your ./config/jetstream.php file:

'features' => [
    // Features::termsAndPrivacyPolicy(),
    // Features::profilePhotos(),
    Features::api(),
    // Features::teams(['invitations' => true]),
    Features::accountDeletion(),
],

The Features::api(), line should be commented out by default; just uncomment it and you’re good to go.

Setup Pest testing

The only thing that tripped me up was that I hadn’t previously set up Pest, which was causing the Jetstream tests to fail.

So I ran the following command from the Pest documentation, modified for my use of Laravel Sail:

./vendor/bin/sail artisan pest:install

I then also added the RefreshDatabase trait to my ./tests/TestCase.php file.

After that, all of my tests passed.

That is Jetstream set up and ready for me to continue.

Starting a new Laravel 9 project

Whenever I start a new Laravel project, whether that’s a little side-project idea or just having a play, I try to follow the same process.

I recently read Steve’s post here on starting your first Laravel 9 Application, so thought I would write down my own setup.

Whereas Steve’s guide walks you through the beginnings of building a new app, I’m only going to show what I do to get a new project in a ready state I’m happy with before beginning a build.

This includes initial setup, static analysis, xdebug setup and CI pipeline setup (with Github Actions).


Pre-requisites

Before starting, I already have docker and docker-compose installed for my system (Arch Linux BTW).

Oh and curl is installed, which is used for pulling the project down in the initial setup.

Other than that, everything that is needed is contained within the Docker containers.

I then use Laravel’s quick setup from their documentation.


Initial setup

Using Laravel’s magic endpoint here, we can get a new Laravel project set up with docker-compose support right out of the box. This could take a little time, especially the first time you run it, as it downloads all of the Docker images needed for the local setup.

curl -s https://laravel.build/my-new-site | bash

At the end of the installation, it will ask you for your password in order to finalise the last steps.

Once finished, you should be able to start up your new local project with the following command:

cd my-new-site

./vendor/bin/sail up -d

If you now direct your browser to http://localhost, you should see the default Laravel landing page.


Code style fixing with Laravel Pint

Keeping a consistent coding style across a project is one of the most important aspects of development, especially within teams.

Pint is Laravel’s in-house code style fixer; it corrects any deviations from a given style guide and is actually included as a dev dependency in new Laravel projects.

Whether you accept its opinionated defaults or define your own rules in a “pint.json” file in the root of your project is up to you.

In order to run it, you simply run the following command:

./vendor/bin/sail bin pint

A fresh installation of Laravel should give you no issues whatsoever.

I advise you to run this command often, especially before making new commits to your version control.


Static Analysis with Larastan

Static analysis is a great method of checking your code for things that would otherwise end up as runtime errors later down the line.

It analyses your code without executing it, and warns of any bugs and breakages it finds. It’s clever stuff.

Install Larastan with the following command:

./vendor/bin/sail composer require nunomaduro/larastan:^2.0 --dev

Create a file called “phpstan.neon” in the root of your project with the following contents:

includes:
    - ./vendor/nunomaduro/larastan/extension.neon

parameters:

    paths:
        - app/

    # Level 9 is the highest level
    level: 5

Then run the analyser with the following command:

./vendor/bin/sail bin phpstan analyse

You can actually set the level in your phpstan.neon file to 9 and it will pass in a fresh Laravel application.

The challenge is to keep it passing at level 9.


Line by Line debugging with Xdebug

At the time of writing, xdebug does come installed with the Laravel Sail Dockerfiles. However, the setup needs an extra step to make it work fully (at least in my experience).

Aside:

There are two parts to xdebug to think about and set up.

The first is the server configuration: installing xdebug on the php server and setting the correct configuration in the xdebug.ini file.

The second part is setting up your IDE / PDE to accept the messages that xdebug is sending from the server in order to display the debugging information in a meaningful way.

I will show here what is needed to get the server correctly set up. However, you will need to look into how your chosen editor works to receive xdebug messages. VS Code has a plugin that is apparently easy to set up for this.

I use Neovim, and will be sharing a guide soon on how to get debugging with xdebug working in Neovim.

Enable Xdebug in Laravel Sail

In order to “turn on” xdebug in Laravel Sail, we just need to enable it by way of an environment variable in the .env file.

Inside your project’s .env file, put the following:

SAIL_XDEBUG_MODE=develop,debug

Unfortunately, in my own experience this hasn’t been enough to have xdebug working in my editor (Neovim). And looking around Stack Overflow et al., I’m not the only one.

However, what follows is how I get the xdebug server correctly configured for me to debug in Neovim. You will need to take an extra step or two for your editor of choice in order to receive those xdebug messages and have them displayed for you.

Publish the Sail runtime files

One thing Laravel does really well is provide sensible defaults that are easy to override, and Sail is no different.

Firstly, publish the Laravel sail files to your project root with the following command:

./vendor/bin/sail artisan sail:publish

Create an xdebug ini file

After publishing the sail stuff above, you will have a folder in the root of your project called “docker”. Within that folder you will have different folders for each of the supported PHP versions.

I like to use the latest version, so I would create my xdebug ini file in the ./docker/8.2/ directory, at the time of writing.

I name my file ext-xdebug.ini, and add the following contents to it. You may need extra lines added depending on your IDE’s setup requirements too.

[xdebug]
xdebug.start_with_request=yes
xdebug.discover_client_host=true
xdebug.max_nesting_level=256
xdebug.client_port=9003
xdebug.mode=debug
xdebug.client_host=host.docker.internal

Add a Dockerfile step to use the new xdebug ini file

Within the Dockerfile located at ./docker/8.2/Dockerfile, find the lines near the bottom of the file that are copying files from the project into the container, and add another copy line below them as follows:

COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.2/cli/conf.d/99-sail.ini
COPY ext-xdebug.ini /etc/php/8.2/cli/conf.d/ext-xdebug.ini

Optionally rename the docker image

It is recommended that you rename the image name within your project’s ./docker-compose.yml file, towards the top:

laravel.test:
    build:
        context: ./docker/8.2
        dockerfile: Dockerfile
        args:
            WWWGROUP: '${WWWGROUP}'
    # image: sail-8.2/app
    image: renamed-sail-8.2/app

This is only if you have multiple Laravel projects using sail, as the default name will clash between projects.

Rebuild the image

Now we need to rebuild the image in order to get our new xdebug configuration file into our container.

From the root of your project, run the following command to rebuild the container without using the existing cache.

./vendor/bin/sail build --no-cache

Then bring the containers up again:

./vendor/bin/sail up -d

Continuous Integration with Github Actions

I use Github for storing a backup of my projects.

I have recently started using Github’s actions to run a workflow for testing my code when I push it to the repository.

In that workflow it first installs the code and its dependencies. It then creates an artifact tar file of that working codebase and uses it for the three subsequent jobs, which run in parallel: Pint code fixing, Larastan static analysis, and the feature and unit tests.

The full ci workflow file I use is stored as a Github Gist. Copy the contents of that file into a file located in a ./.github/workflows/ directory. You can name the file itself whatever you’d like. A convention is to name it “ci.yml”.

The Github Action yaml explained

When to run the action

Firstly I only want the workflow to run when pushing to any branch and when creating pull requests into the “main” branch.

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "main" ]

Setting up the code to be used in multiple CI checks

I like to get the codebase into a testable state and reuse that state for all of my tests / checks.

This not only keeps each CI step separated from the others, but also means I can run them in parallel.

setup:
    name: Setting up CI environment
    runs-on: ubuntu-latest
    steps:
    - uses: shivammathur/setup-php@15c43e89cdef867065b0213be354c2841860869e
      with:
        php-version: '8.1'
    - uses: actions/checkout@v3
    - name: Copy .env
      run: php -r "file_exists('.env') || copy('.env.example', '.env');"
    - name: Install Dependencies
      run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress --prefer-dist
    - name: Generate key
      run: php artisan key:generate
    - name: Directory Permissions
      run: chmod -R 777 storage bootstrap/cache
    - name: Tar it up 
      run: tar -cvf setup.tar ./
    - name: Upload setup artifact
      uses: actions/upload-artifact@v3
      with:
        name: setup-artifact
        path: setup.tar

This step creates an artifact tar file from the project once it has been set up and had its dependencies installed.

That tar file will then be called upon in the three following CI steps, extracted and used for each test / check.

Running the CI steps in parallel

Each of the CI steps I have defined (“pint”, “larastan” and “test-suite”) requires the “setup” step to have completed before running.

pint:
    name: Pint Check
    runs-on: ubuntu-latest
    needs: setup
    steps:
    - name: Download Setup Artifact
      uses: actions/download-artifact@v3
      with:
        name: setup-artifact
    - name: Extraction
      run: tar -xvf setup.tar
    - name: Running Pint
      run: ./vendor/bin/pint

This is because they all use the artifact that is created in that setup step: the codebase with all dependencies in a testable state, ready to be extracted in each of the CI steps.

Those three steps will run in parallel by default; there’s nothing we need to do there.

Using the example gist file as is should result in a fully passing suite.


Further Steps

That is the end of my process for starting a new Laravel project from fresh, but there are other steps that will inevitably come later on, not least the continuous delivery (deployment) of the application when the time arises.

You could leverage the excellent Laravel Forge for your deployments — and I would actually recommend this approach.

However, I do have a weird interest in Kubernetes at the moment and so will be putting together a tutorial for deploying your Laravel Application to Kubernetes in Digital Ocean. Keep an eye out for that guide — I will advertise that post on my Twitter page when it goes live.

Given, When, Then — how I approach Test-driven development in Laravel

Laravel is an incredible PHP framework and the best starting point for pretty much any web-based application (if writing it in PHP, that is).

Along with its many amazing features comes a beautiful framework for testing what you are building.

For the longest time I cowered at the idea of writing automated tests for what I built. It was a way of working that was brought in by a previous workplace of mine and my brain fought against it for ages.

Since that time a few years ago I slowly came to like the idea of testing. Then over the past year or so I have grown to love it.

I have met some people who are incredibly talented developers, but who, for me, made the prospect of automated testing both confusing and intimidating.

That was until I came across Adam Wathan’s excellent Test Driven Laravel course. He made testing immediately approachable and broke it down into three distinct phases (per test): “Given”, “When” and “Then”, also known as “Arrange”, “Act” and “Assert”. I forget which phrasing he used, but either way the idea is as follows.

“Given” this environment

The first step is to set up the “world” in which the test should happen.

One example would be if you were building an API that would return PlayStation game data to you. In order to return games, there must be games there to return.

In Laravel we have factories that we can use for quickly creating test entries for our models. Here is an example of a Game model that uses its factory to create a game for us:

$game = Game::factory()->create([
    'title' => 'The Last of Us part 2',
    'developer' => 'Naughty Dog',
]);

“When” this thing I want to test happens

Here you would do the thing that you are testing.

Maybe that is sending some data to an API endpoint in your application. Or perhaps you are testing a single utility class can do a specific action, so you call the method on that class.

Here, let’s continue the idea of returning games from an API call. We’ll use the $game variable from the previous example and access its ID to build our GET endpoint:

$response = $this->json('get', '/api/games/' . $game->id);

Here the $response variable gets the response from the json get call, allowing you to later make assertions against it.

“Then” I should see this particular outcome

In this last step you would make assertions against what has happened. This could be checking if a record exists in a database with specific values, or asserting that an email got sent.

Basically anything you need to make sure happened, or didn’t happen, for you to be sure you are getting your desired functionality.

Let’s finish our game example by asserting that we got json back with the expected data. We do this by calling the appropriate method off of the $response variable from the previous example.

$response->assertJson([
    'title' => 'The Last of Us part 2',
    'developer' => 'Naughty Dog',
]);

The full example test code

$game = Game::factory()->create([
    'title' => 'The Last of Us part 2',
    'developer' => 'Naughty Dog',
]);
$response = $this->json('get', '/api/games/' . $game->id);
$response->assertJson([
    'title' => 'The Last of Us part 2',
    'developer' => 'Naughty Dog',
]);

Much more to explore

There is so much to automated testing and I’m still relatively new to it all myself.

You can “fake” other things in your application in order to not run live things in tests. For example, when testing that emails are sent, you don’t really want to actually send emails when you run your tests. Therefore you would “fake” the functionality of sending the mail.

I hope that this post has been an easy-to-follow intro to how I myself approach testing.

I have found that even as my tests have gotten more complex in certain situations, I still always stick to the same structural idea:

  1. Given this is the world my code lives in.
  2. When I perform this particular action.
  3. Then I should see this specific outcome.

Preview Laravel’s migrations with the pretend flag

Here is the command to preview your Laravel migrations without running them:

cd /your/project/root
php artisan migrate --pretend

Laravel’s migrations give us the power to easily version control our database schema creations and updates.

In a recent task at work, I needed to find out why a particular migration was failing.

This is when I discovered the simple but super-useful flag --pretend, which will show you the queries that Laravel will run against your database without actually running those migrations.

Migrating my website to Statamic

I love Laravel.

I also really like WordPress, for what it is. So when it came to originally putting my personal site together, I just wanted a simple WordPress site.

I have attempted to build my own website and blog in Laravel from scratch multiple times over the years. I even stuck with a build for a while but ultimately went back to WordPress.

My issue was only down to the fact that I wanted to write more in my own time and found I spent most of my time tinkering.

But I really love Laravel.

So imagine my joy when I came across Statamic. Statamic is a CMS package that can be installed into a Laravel site and just works seamlessly alongside your Laravel code.

I am in the process of rebuilding my personal site and will be getting it live as soon as I can.

I think I will migrate my current site to a new domain, something like “davidpeach.me”, and then use the 4042302 technique to ensure my old posts are still found as I migrate the posts over.

I’m really looking forward to getting creative with Statamic and then layering on all of the excellent Laravel features as a way to learn as much, and refresh my mind, about my favourite framework.

If I start rebuilding my website in Laravel yet again, I really need to see it through and commit to it. If the last couple of weeks have taught me anything it’s that I love working with Laravel.

Let’s see how this goes.

How I would set up Laravel with Docker

This is a quick brain dump for myself to remember how I set up Laravel with Docker. Hopefully it can help others out also.

I tried to avoid Docker for the longest time due to the ease of just running php artisan serve. However, when your site relies on some extra dependencies, Docker can be helpful, especially with multiple developers, in getting up and running with the whole codebase more easily.

This post assumes you have set up a basic Laravel project on a Linux computer, and have both Docker and Docker Compose installed locally.

What will this project use?

This is only a basic example to get up and running with the following dependencies. You can add more items to your docker-compose.yml file as you need to.

Note: whatever you choose to name each extra service in your docker-compose.yml file, use its key as the reference point in your .env file.

  • The main site codebase
  • A MySQL database
  • an NGINX webserver
  • PHP

docker-compose.yml

Have a file in the project root named docker-compose.yml:

version: "3.3"

services:
  mysql:
    image: mysql:8.0
    restart: on-failure
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
  nginx:
    image: nginx:1.15.3-alpine
    restart: on-failure
    volumes:
      - './public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - 80:80
    env_file:
      - .env
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: './docker/php/Dockerfile'
    restart: on-failure
    env_file:
      - .env
    user: ${LOCAL_USER}
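
Because the database service above is keyed as mysql, Laravel’s own database settings in the .env file can point at it by that service name. An illustrative snippet, matching the example values used later in this post:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=example
DB_USERNAME=root
DB_PASSWORD=root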

Dockerfile

Have a Dockerfile located here: ./docker/php/Dockerfile. I keep it in a separate folder for tidiness.

# ./docker/php/Dockerfile
FROM php:7.2-fpm

RUN docker-php-ext-install pdo_mysql

RUN pecl install apcu-5.1.8
RUN docker-php-ext-enable apcu

RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php --filename=composer \
    && php -r "unlink('composer-setup.php');" \
    && mv composer /usr/local/bin/composer

WORKDIR /usr/src/app

COPY ./ /usr/src/app

# use ENV (not RUN) so the PATH change persists in the built image
ENV PATH=$PATH:/usr/src/app/vendor/bin:bin

default.conf

Have a default.conf file for the project’s nginx container saved here: ./docker/nginx/default.conf

# ./docker/nginx/default.conf
server {
 server_name ~.*;

 location / {
     root /usr/src/app;

     try_files $uri /index.php$is_args$args;
 }

 location ~ ^/index\.php(/|$) {
     client_max_body_size 50m;

     fastcgi_pass php:9000;
     fastcgi_buffers 16 16k;
     fastcgi_buffer_size 32k;
     include fastcgi_params;
     fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
 }

 error_log /dev/stderr debug;
 access_log /dev/stdout;
}

Add the necessary variables to your .env file

There are some variables used in the docker-compose.yml file that need to be added to the .env file. These values could be hard-coded into docker-compose.yml directly, but using the .env file makes it more straightforward for other developers to customise their own setup.

MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=example
LOCAL_USER=1000:1000

The MYSQL_ROOT_PASSWORD and MYSQL_DATABASE are self-explanatory, but the LOCAL_USER variable refers to the user id and group id of the currently logged in person on the host machine. This normally defaults to 1000 for both user and group.

If your user and/or group ids happen to be different, just alter the variable value.

Note: find out your own ids by opening your terminal and typing id followed by enter. You should see something like the following:

uid=1000(david) gid=1000(david) groups=1000(david),4(adm),27(sudo),1001(rvm)

uid and gid are the numbers you need, for user and group respectively.

Run it

Run the following two commands separately, then once they have finished head to http://localhost to view the running code.

Note: This setup uses port 80 so you may need to disable any local nginx / apache that may be running currently.

docker-compose build
docker-compose up -d

Any mistakes or issues, just email me.

Thanks for reading.

How to easily set a custom redirect in Laravel form requests

In Laravel you can create custom request classes where you can house the validation for any given route. If that validation then fails, Laravel’s default action is to redirect the visitor back to the previous page. This is commonly used when a form is submitted incorrectly: the visitor will be redirected back to said form to correct the errors. Sometimes, however, you may wish to redirect the visitor to a different location altogether.

TL;DR (Too long; didn’t read)

At the top of your custom request class, add one of the following protected properties and give it your required value. I have given example values to demonstrate:

protected $redirect = '/custom-page'; // Any URL or path
protected $redirectRoute = 'pages.custom-page'; // The named route of the page
protected $redirectAction = 'PagesController@customPage'; // The controller action to use.

This will then redirect your visitor to that location should they fail any of the validation checks within your custom form request class.

Explanation

When you create a request class through the Laravel artisan command, it will create one that extends the base Laravel class Illuminate\Foundation\Http\FormRequest. Within this class the three protected properties listed above are initialised from line 33, but not set to a value.

Then further down the page of the base class, on line 127 at the time of writing, there is a protected method called getRedirectUrl. This method performs a series of checks for whether or not any of the three redirect properties have actually been set. The first one it finds to be set by you, in the order given above, is the one that will be used as the custom redirect location.

Here is that getRedirectUrl method for your convenience:

/**
* Get the URL to redirect to on a validation error.
*
* @return string
*/
protected function getRedirectUrl()
{
    $url = $this->redirector->getUrlGenerator();

    if ($this->redirect) {
        return $url->to($this->redirect);
    } elseif ($this->redirectRoute) {
        return $url->route($this->redirectRoute);
    } elseif ($this->redirectAction) {
        return $url->action($this->redirectAction);
    }

    return $url->previous();
}

Do you have any extra tips to add to this? Let me know in the comments below.

Thanks.

Laravel Blade push and stack

Laravel’s Blade view compiler is second to none. I’ve used a couple of different templating engines and Blade is by far my favourite.

Including Partials

The way in which we include partials of views within our main views is as follows:

@include('partials.my-first-partial')

It will inject that partial’s content in the specified place.

Defining Sections

Within our views, we define “sections” with the following syntax:

@section('section_name')

    The section's content within here

@stop

And we can define as many sections as we need for our project.

When the same section is used in multiple places within one compilation

Imagine we have a master template file as such:

// layouts.main.blade.php
<!doctype html>

...
@yield('partials.form')
...

@yield('custom_scripts')

Let’s suppose we have the following layout template that extends our main layout and includes three partials. This example is a form template including its various inputs from separate partials. For my own website I have a different form for each of my post types and so I have the inputs in separate partials for easy reuse.

// partials.form.blade.php
@extends('layouts.main')

<form>
    @include('partials.form-title')
    @include('partials.form-content')
    @include('partials.form-tags')
</form>

Let’s next suppose that in a couple of those partial input views you need to inject some custom scripting. This is a slightly contrived example, but it will illustrate the point.

// partials.form-content.blade.php
<textarea class="content" name="content"></textarea>

@section('custom_scripts')
// dummy javascript as example
$('.content').doSomething();
@stop

// partials.form-tags.blade.php
<select class="tags" name="tags">
<option value="tagone">Tag One</option>
<option value="tagtwo">Tag Two</option>
<option value="tagthree">Tag Three</option>
</select>

@section('custom_scripts')
$('.tags').doSomethingElse()
@stop

Now, when the form page gets compiled, only the first occurrence of the ‘custom_scripts’ section will be included.

So what if you needed to be able to define this section in chunks across partials?

Introducing Blade’s Push & Stack directives

To give this functionality, Laravel does in fact have two little-known directives called ‘push’ and ‘stack’.

They allow you to ‘stack up’ items across partials with the ‘push’ directive, which can then be echoed out together with the ‘stack’ directive.

Here’s the above form example but with ‘push’ and ‘stack’ used in place of ‘section’ and ‘yield’.

// layouts.main.blade.php
<!doctype html>

...
@yield('partials.form')
...

@stack('custom_scripts')

// partials.form-content.blade.php
<textarea class="content" name="content"></textarea>

@push('custom_scripts')
// dummy javascript as example
$('.content').doSomething();
@endpush

// partials.form-tags.blade.php
<select class="tags" name="tags">
<option value="tagone">Tag One</option>
<option value="tagtwo">Tag Two</option>
<option value="tagthree">Tag Three</option>
</select>

@push('custom_scripts')
$('.tags').doSomethingElse()
@endpush

This will now compile all uses of @push('custom_scripts') and echo them out as one wherever you call @stack('custom_scripts').

When I was shown this technique by a mate at work, it blew my mind.

Have fun.