Setting up Elasticsearch and Kibana using Docker for local development

How to set up Kibana and Elasticsearch locally, within Docker containers.

Overview

Elasticsearch is a fast search and analytics engine. Kibana is a separate program used for interacting with Elasticsearch through a web interface.

Here I am setting up both Elasticsearch and Kibana in their own separate Docker containers. I do this to help keep my computer relatively free from being cluttered with programs. Not only that, but since each container is its own self-contained box, it also makes it easy to upgrade the Elasticsearch version I am using at a later date.

Or even remove them entirely with minimal fuss.

Please note: I am using version 7.10.1 of both programs in the examples below. You can look at each program's respective Docker Hub page to target the exact version you require.

Just replace any uses of “7.10.1” below with your own version.

Creating and running containers for the services needed

Run the following commands to download and run Elasticsearch locally:

# Download the Elasticsearch docker image to your computer
docker pull elasticsearch:7.10.1

# Create the Docker network that both containers will share
docker network create elasticnetwork

# Create a local container with Elasticsearch running
docker run -d --name my_elasticsearch --net elasticnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.ml.enabled=false" elasticsearch:7.10.1

# Start the container (if it is not already running)
docker container start my_elasticsearch

And then run the following commands to download and run Kibana locally:

# Download the Kibana docker image to your computer
docker pull kibana:7.10.1

# Create a local container with Kibana running
docker run -d --name my_kibana --net elasticnetwork -e ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 -p 5601:5601 kibana:7.10.1

# Start the container
docker container start my_kibana

Accessing Kibana

Since Kibana connects to our Elasticsearch container (configured with the ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 part of the Kibana create command), we really only need to interact with Kibana directly.

Kibana has its own Dev Tools console for querying Elasticsearch, which so far has been enough for my own use cases.

Head to http://localhost:5601 to access your own Kibana installation.

Note: You can send curl requests directly to your Elasticsearch from the terminal by targeting the http://127.0.0.1:9200 endpoint.
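
For example, here is a quick way to check that the container is up and responding, using standard Elasticsearch endpoints:

# Basic cluster information
curl http://127.0.0.1:9200

# Cluster health, pretty-printed
curl "http://127.0.0.1:9200/_cluster/health?pretty"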

Deleting the containers

If you wish to remove Elasticsearch and/or Kibana from your computer, then enter the following commands into your terminal.

Using Docker for local development makes this a cinch.

# Stop the Elasticsearch container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_elasticsearch

# Delete the Elasticsearch container
docker container rm my_elasticsearch

# Stop the Kibana container if it is running
# (Use the name you gave it in the "--name" argument as its handle)
docker container stop my_kibana

# Delete the Kibana container
docker container rm my_kibana
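
The commands above only remove the containers. If you also want to reclaim the disk space used by the downloaded images, you can remove those too (you would need to pull them again before recreating the containers):

# Delete the downloaded images
docker image rm elasticsearch:7.10.1
docker image rm kibana:7.10.1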

If you need to set up the two programs again, you can just use the create commands shown above to create them as you did originally.

Install MongoDB with Docker for local development

Pull the docker image for mongo down to your computer.

docker pull mongo

Run the mongo container in the background, isolated from the rest of your computer.

# Command explained below
docker run -d -p 27017:27017 --name mongodb -v /data/db:/data/db mongo

What I love about this approach is that I don’t start muddying up my computer installing new programs — especially if it’s just for the purposes of experimenting with new technologies.

The main run command explained:

  • “docker run -d” tells Docker to run in detached mode, which means it will run in the background. Otherwise, if we close that terminal, it will stop execution of the program Docker is running (mongo in this case).
  • “-p 27017:27017” maps port 27017 on your computer so it forwards its requests into the container using the same port. (The left-hand port is the computer's and the right-hand one is the container's; I always forget which is which.)
  • “--name mongodb” just gives the container that will be created a nice name. Otherwise Docker will generate a random name.
  • “mongo” tells Docker which image to use for the container.
  • “-v /data/db:/data/db” tells Docker to map the /data/db directory on your computer to the /data/db directory in the container. This ensures that if you remove and recreate the container, you will retain the mongo db data.
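
To check that everything is working, you can open a MongoDB shell inside the running container. Depending on the image version, the client binary is either mongo or the newer mongosh:

# Open an interactive MongoDB shell inside the container
docker exec -it mongodb mongo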

Bulk converting large PS4 screenshot png images into 1080p jpg’s

A niche example of how I bulk convert my screenshots to make them more website-friendly.

I tend to have my screenshots set to the highest resolution when saving on my PlayStation 4.

However, when I upload to the screenshots area of this website, I don’t want the images to be that big — either in dimensions or file size.

This snippet is how I bulk convert those images ready for uploading. I use an Ubuntu 20.04 operating system when running this.

# Make sure ImageMagick is installed
sudo apt install imagemagick

# Run the command
mogrify -resize 1920x1080 -format jpg folder/*.png

You can change the width x height dimensions after the -resize flag to your own required size, as well as changing the output image format after the -format flag.
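
If the resulting jpg files are still larger than you would like, ImageMagick's quality flag can shrink them further. A value of around 80 is usually a reasonable trade-off between size and quality:

# Resize, convert to jpg and compress more aggressively
mogrify -resize 1920x1080 -format jpg -quality 80 folder/*.png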

Updating PHP versions in Ubuntu 20.04

Installing an older PHP version and switching to it in Ubuntu.

For an older PHP project, I needed to install an older version of PHP. This is what I did to set that up.

Installing a different PHP version

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y php7.1
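
Depending on the project, you may also need the matching extensions for that PHP version. For example (adjust the list to whichever extensions your project actually uses):

sudo apt-get install -y php7.1-mbstring php7.1-xml php7.1-curl php7.1-mysql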

Rebinding php to required version

Some of these binds are probably not needed. I think the main ones, at least for my use case, were php and phar.

sudo update-alternatives --set php /usr/bin/php7.1
sudo update-alternatives --set phar /usr/bin/phar7.1
sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.1
sudo update-alternatives --set phpize /usr/bin/phpize7.1
sudo update-alternatives --set php-config /usr/bin/php-config7.1

For some reason the --set flag stopped working, so I had to use:

sudo update-alternatives --config php
sudo update-alternatives --config phar

and so on, updating each one using the options shown in the terminal prompt.
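
You can confirm which version the command line is now using with:

php -v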

p.s. If using PHP-FPM, you could also set up different server conf files and point the FPM path to the version you need. My need was only for the command line in this older project.

How I would set up Laravel with Docker

This is a quick brain dump for myself to remember how I set up Laravel with Docker. Hopefully it can help others out also.

I tried to avoid Docker for the longest time because of the ease of just running php artisan serve. However, when your site relies on other dependencies, Docker can be helpful in getting the whole codebase up and running, especially when there are multiple developers.

This post assumes you have set up a basic Laravel project on a Linux computer, and have both Docker and Docker Compose installed locally.

What will this project use?

This is only a basic example to get up and running with the following dependencies. You can add more services to your docker-compose.yml file as you need to.

Note: whatever you choose to name each extra service in your docker-compose.yml file, use its key as the reference point in your .env file (see the example after this list).

  • The main site codebase
  • A MySQL database
  • An NGINX web server
  • PHP
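
For example, with the mysql service defined in the docker-compose.yml below, the database connection settings in the Laravel .env file would point at that service key. The values here are just illustrative:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=example
DB_USERNAME=root
DB_PASSWORD=root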

docker-compose.yml

Have a file in the project root named docker-compose.yml:

version: "3.3"

services:
  mysql:
    image: mysql:8.0
    restart: on-failure
    env_file:
      - .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
  nginx:
    image: nginx:1.15.3-alpine
    restart: on-failure
    volumes:
      - './public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - 80:80
    env_file:
      - .env
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: './docker/php/Dockerfile'
    restart: on-failure
    env_file:
      - .env
    user: ${LOCAL_USER}

Dockerfile

Have a Dockerfile located here: ./docker/php/Dockerfile. I keep it in a separate folder for tidiness.

# ./docker/php/Dockerfile
FROM php:7.2-fpm

RUN docker-php-ext-install pdo_mysql

RUN pecl install apcu-5.1.8
RUN docker-php-ext-enable apcu

RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php --filename=composer \
    && php -r "unlink('composer-setup.php');" \
    && mv composer /usr/local/bin/composer

WORKDIR /usr/src/app

COPY ./ /usr/src/app

# Make the project's vendor binaries available on the PATH
ENV PATH="${PATH}:/usr/src/app/vendor/bin:bin"

default.conf

Have a default.conf file for the project’s nginx container saved here: ./docker/nginx/default.conf

# ./docker/nginx/default.conf
server {
 server_name ~.*;

 location / {
     root /usr/src/app;

     try_files $uri /index.php$is_args$args;
 }

 location ~ ^/index\.php(/|$) {
     client_max_body_size 50m;

     fastcgi_pass php:9000;
     fastcgi_buffers 16 16k;
     fastcgi_buffer_size 32k;
     include fastcgi_params;
     fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
 }

 error_log /dev/stderr debug;
 access_log /dev/stdout;
}

Add the necessary variables to your .env file

There are some variables used in the docker-compose.yml file that need to be added to the .env file. The values could be hard-coded into docker-compose.yml instead, but keeping them in .env makes it more straightforward for other developers to customise their own setup.

MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=example
LOCAL_USER=1000:1000

The MYSQL_ROOT_PASSWORD and MYSQL_DATABASE variables are self-explanatory, but the LOCAL_USER variable refers to the user id and group id of the person currently logged in on the host machine. These normally default to 1000 for both user and group.

If your user and/or group ids happen to be different, just alter the variable value.

Note: find out your own ids by opening your terminal and typing id followed by enter. You should see something like the following:

uid=1000(david) gid=1000(david) groups=1000(david),4(adm),27(sudo),1001(rvm)

uid and gid are the numbers you need, for user and group respectively.

Run it

Run the following two commands separately, then once they have finished head to http://localhost to view the running code.

Note: This setup uses port 80, so you may need to stop any local nginx / apache that is currently running.

docker-compose build
docker-compose up -d
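
Once the containers are up, you can run Composer and Artisan commands inside the php service (the service name matches the docker-compose.yml above):

# Install PHP dependencies and run the database migrations inside the php container
docker-compose exec php composer install
docker-compose exec php php artisan migrate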

Any mistakes or issues, just email me.

Thanks for reading.

Setting up my own Nextcloud (Version 16)

Setting up your very own Nextcloud server from scratch. This has been tested with version 15 and 16 of the software. Any questions, please do contact me.

Updated on: 24th June 2019

Set up a new server (with Digital Ocean)

If you don't have an account already, head to Digital Ocean and create a new account. Of course, you can use any provider that you want – I just happen to use them and so can only speak from that experience.

Login to your account.

Setup your SSH key

In the next step we will be creating your new droplet (server), and you will need an SSH key to add to it. This allows for easy and secure access to your new droplet from your local computer, via your terminal.

If you are going to use the Digital Ocean console terminal instead, skip down to 'Create the new "Droplet"' below, as you won't need an SSH key.

Creating the key (if you haven’t already)

If you haven’t generated an SSH key pair before, open a fresh terminal window and enter the following:

ssh-keygen -t rsa

Press enter through all of the defaults to complete the creation.

Getting the contents of the public key

Type this to display your new public key:

cat ~/.ssh/id_rsa.pub

This will give you a long string of text starting with ssh-rsa and ending with something like yourname@your-computer.

Highlight the whole selection, including the start and end points mentioned, and right click and copy.

When you are creating your droplet below, you can select the New SSH Key button and paste your public key into the box it gives you. You will also need to give the key a name when you add it in Digital Ocean, but you can name it anything.

Then click the Add SSH Key and you’re done.

Create the new “Droplet”

Digital Ocean refers to each server as a droplet, going with the whole digital “ocean” theme.

Head to Create > Droplets and click the “One-click apps” tab. Then choose the following options in the selection (Or your own custom selection – just take into account the monthly cost of each option):

  • LAMP on 18.04
  • $15/Month (2GB / 60GB / 3TB Transfer)
  • Enable backups (not necessary but recommended)
  • London (Choose your closest / preferred location)
  • Add your SSH key (see above)
  • Optionally rename the hostname to something more readable

Once you have selected the above (or your own custom options) click create. After a few moments, your droplet will be ready to use.

Set your DNS

Go to your domain name provider (Hover in my case) and set up the subdomain for your Nextcloud installation, using the I.P. address of your new droplet.

I’m assuming that you already have your own domain name, perhaps for your personal website / blog. In which case we are adding a subdomain to that (so https://nextcloud.yourdomain.co.uk, for example).

But there is nothing stopping you from buying a fresh domain and using it exclusively for your new Nextcloud (https://my-awesome-nextcloud.co.uk).

I will be continuing this guide, assuming that you are using a subdomain.

You will add it in the form of an A record. This is how I would add it in Hover:

  1. Select your own domain
  2. Choose edit > edit DNS
  3. Click Add A record on the DNS edit page
  4. Fill in the hostname as your desired subdomain for your Nextcloud. For example if you were having nextcloud.mydomain.co.uk, you would just enter nextcloud.
  5. Fill in the I.P. address as the I.P. address of your new Droplet in Digital Ocean.
  6. Click Add Record

Configuring the server

Install all the required programs for Nextcloud

First ssh into your new server:

ssh root@YOUR.IP.ADDRESS.HERE

Choosing the LAMP option when setting up the droplet installed Linux, Apache2, MySQL and PHP for us. However, there are still some extra dependencies that Nextcloud needs to run. Let's install those next:

apt-get update

apt-get install libapache2-mod-php7.2 php7.2-gd php7.2-json &&
apt-get install php7.2-mysql php7.2-curl php7.2-mbstring &&
apt-get install php7.2-common php7.2-intl php-imagick php7.2-xml &&
apt-get install php7.2-zip php7.2-ldap php7.2-imap  php7.2-gmp &&
apt-get install php7.2-apcu php7.2-redis php7.2-imagick ffmpeg unzip

Download and install the Nextcloud codebase

Please note that I am using version 15.0.0 in this example. However, when you read this you may have a new version available to you. I will try and keep this guide as up to date as possible.

# Download the codebase and the "checksum" file.
wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip
wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip.sha256

# Make sure that the codebase is genuine and hasn't been altered.
sha256sum  -c nextcloud-15.0.0.zip.sha256 < nextcloud-15.0.0.zip

# Move the unzipped codebase into the webserver directory.
unzip nextcloud-15.0.0.zip
cp -r nextcloud /var/www
chown -R www-data:www-data /var/www/nextcloud

Apache config example

nano /etc/apache2/sites-available/000-default.conf

An example apache config:

<VirtualHost *:80>
        ServerName nextcloud.yourdomain.co.uk
        ServerAdmin mail@yourdomain.co.uk
        DocumentRoot /var/www/nextcloud

        <Directory /var/www/nextcloud/>
            Options Indexes FollowSymLinks
            AllowOverride All
            Require all granted
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        <IfModule mod_dir.c>
            DirectoryIndex index.php index.pl index.cgi index.html index.xhtml index.htm
        </IfModule>

RewriteEngine on
RewriteCond %{SERVER_NAME} =nextcloud.yourdomain.co.uk
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

Then enable the required Apache modules and restart Apache:

a2enmod rewrite && a2enmod headers && a2enmod env &&
a2enmod dir && a2enmod mime && systemctl restart apache2

A quick mysql fix

In recent versions of MySQL, the root user authenticates using the auth_socket plugin by default, which means password authentication won't work out of the box. So first we need to alter that user to use password authentication.

apt install mysql-server
mysql

# In the mysql mode
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_secret_password';
FLUSH PRIVILEGES;
quit

SSL with Let’s Encrypt

apt install certbot
certbot --apache -d nextcloud.yourdomain.co.uk

You will then be asked some questions about your installation:

  • Email address (your… umm… email address :D)
  • Whether you agree to the Let's Encrypt Terms of Service (Agree)
  • Whether to redirect HTTP traffic to HTTPS (choose Yes)

Let's Encrypt will handle registering the Apache settings needed for your new SSL certificate to work. It uses the server name you entered in the 000-default.conf file earlier.

It will also create a new file that is used by Apache for the SSL. For me, this file was at /etc/apache2/sites-available/000-default-le-ssl.conf.
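
The certbot package normally sets up automatic renewal for you, but you can check that renewal will work with a dry run:

certbot renew --dry-run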

First Login!

Now go to https://nextcloud.yourdomain.co.uk and you should see your nice new shiny Nextcloud installation.

Creating the admin account

Fill in the fields with your desired name and password for the admin account. You can just use the admin account as your main account if you will be the only one using this Nextcloud. Alternatively, you can give others access to the site with their own login details, but without the admin-level privileges.

For the database fields, enter root as the username. Then for the password, use the one that you set in the previous mysql command above. For the database name choose whatever name you wish, as the installation will create it for you.

Click finish.

After a few moments, your Nextcloud instance should present you with the landing screen along with the welcome popup. Go ahead and read it, and you could even install the app for your devices as it suggests.

Finishing touches

If you click the cog icon in the top right of your screen, followed by settings in its dropdown, you will come to the main settings area. In the left-hand column, beneath the heading “Administration”, you should see the link for “Overview”. Click it.

Now you should see a handful of security and setup warnings at the top of the page. This is nothing to worry about; it is simply telling you about some actions that are highly recommended to set up.

We will do that now. 🙂

The “Strict-Transport-Security” HTTP header is not set to at least “15552000” seconds. For enhanced security, it is recommended to enable HSTS as described in the security tips.

All that is needed to fix this first one is a quick edit to the Apache config file that Let's Encrypt created for the installation.

nano /etc/apache2/sites-available/000-default-le-ssl.conf

Then add the following three lines within the <VirtualHost *:443> block.

<IfModule mod_headers.c>
    Header always add Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
</IfModule>

And then reload apache:

systemctl reload apache2

Refreshing the settings page should see that warning disappear.

No memory cache has been configured. To enhance performance, please configure a memcache, if available.

Open up your Nextcloud config file:

nano /var/www/nextcloud/config/config.php

At the bottom of the config array, add the following line:

'memcache.local' => '\OC\Memcache\APCu',

Refresh your browser and that next warning should now vanish.

For future reference, you can always take a look in the sample Nextcloud config file at /var/www/nextcloud/config/config.sample.php. It will show you all available config options.

The PHP OPcache is not properly configured.

With this warning, Nextcloud should display some sample opcache settings to paste in. This one caught me out, as I couldn't work out which ini file the example code should go in.

After some trial and error, I discovered that for me, it was located in an opcache.ini file:

nano /etc/php/7.2/mods-available/opcache.ini

Then at the bottom of the file, I pasted the following:

opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1

Reload apache:

systemctl reload apache2

Some columns in the database are missing a conversion to big int.

I only actually came across this warning when I was creating a dummy Nextcloud installation to help with writing this guide. You may not get it at all. But if you do, here's the fix:

sudo -u www-data php /var/www/nextcloud/occ db:convert-filecache-bigint

This will warn you that it could take hours to do its thing, depending on the number of files. However, because we are running it right after the installation, it will not even take a second.

The PHP memory limit is below the recommended value of 512MB

To fix this, I just had to edit the following file:

nano /etc/php/7.2/apache2/php.ini

Then find the memory_limit line and alter it to look like this:

memory_limit = 512M

Then restart apache:

service apache2 restart

All Done

Once you refresh the settings page once more, you should see a beautiful green tick with the message “All checks passed”.

Good feeling, isn’t it?

If for any reason you are still getting warnings, please don't hesitate to contact me. I'll do my best to help. Email: mail@davidpeach.me. Alternatively, you can head to the Nextcloud Documentation.

How to easily set a custom redirect in Laravel form requests

In Laravel you can create custom request classes where you can house the validation for any given route. If that validation then fails, Laravel’s default action is to redirect the visitor back to the previous page. This is commonly used for when a form is submitted incorrectly – The visitor will be redirected back to said form to correct the errors. Sometimes, however, you may wish to redirect the visitor to a different location altogether.

TL;DR (Too long; didn’t read)

At the top of your custom request class, add one of the following protected properties and give it your required value. I have given example values to demonstrate:

protected $redirect = '/custom-page'; // Any URL or path
protected $redirectRoute = 'pages.custom-page'; // The named route of the page
protected $redirectAction = 'PagesController@customPage'; // The controller action to use.

This will then redirect your visitor to that location should they fail any of the validation checks within your custom form request class.
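
As a minimal sketch, a custom request class using one of these properties might look like the following (the class name, route name and rules are just for illustration):

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class StorePageRequest extends FormRequest
{
    // Redirect here when validation fails, instead of back to the previous page.
    protected $redirectRoute = 'pages.custom-page';

    public function authorize()
    {
        return true;
    }

    public function rules()
    {
        return [
            'title' => 'required|max:255',
        ];
    }
}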

Explanation

When you create a request class through the Laravel artisan command, it will create one that extends the base Laravel class Illuminate\Foundation\Http\FormRequest. Within that base class, the three protected properties listed above are declared (from line 33 at the time of writing) but not set to a value.

Then further down in the base class, on line 127 at the time of writing, there is a protected method called getRedirectUrl. This method checks whether any of the three redirect properties has been set. The first one it finds to be set by you, in the order given above, is the one that will be used as the custom redirect location.

Here is that getRedirectUrl method for your convenience:

/**
 * Get the URL to redirect to on a validation error.
 *
 * @return string
 */
protected function getRedirectUrl()
{
    $url = $this->redirector->getUrlGenerator();

    if ($this->redirect) {
        return $url->to($this->redirect);
    } elseif ($this->redirectRoute) {
        return $url->route($this->redirectRoute);
    } elseif ($this->redirectAction) {
        return $url->action($this->redirectAction);
    }

    return $url->previous();
}

Do you have any extra tips to add to this? Let me know in the comments below.

Thanks.

Laravel Blade push and stack

Laravel’s blade view compiler is second to none. I’ve used a couple of different templating engines and blade is by far my favourite.

Including Partials

The way in which we include partials of views within our main views is as follows:
@include('partials.my-first-partial')
It will inject that partial’s content in the specified place.

Defining Sections

Within our views, we define “sections” with the following syntax:

@section('section_name')

    The section’s content within here

@stop

And we can define as many sections as we need for our project.

When the same section is used in multiple places within one compilation

Imagine we have master template file as such:

// layouts.main.blade.php
<!doctype html>

...
@yield('partials.form')
...

@yield('custom_scripts')

Let’s suppose we have the following layout template that extends our main layout one and is including three partials. This example is a form template including its various inputs from separate partials. For my own website I have a different form for each of my post types and so I have the inputs in separate partials for easy reuse.

// partials.form.blade.php
@extends('layouts.main')

<form>@include('partials.form-title')
@include('partials.form-content')
@include('partials.form-tags')</form>

Let’s next suppose that in a couple of those partial input views you need to inject some custom scripting. This is a slightly contrived example, but it will illustrate the point.

// partials.form-content.blade.php
<textarea class="content" name="content"></textarea>

@section('custom_scripts')
// dummy javascript as example
$('.content').doSomething();
@stop

// partials.form-tags.blade.php
<select class="tags" name="tags">
<option value="tagone">Tag One</option>
<option value="tagtwo">Tag Two</option>
<option value="tagthree">Tag Three</option>
</select>

@section('custom_scripts')
$('.tags').doSomethingElse()
@stop

Now, when the form page gets compiled, only the first occurrence of the ‘custom_scripts’ section will be included.

So what if you needed to be able to define this section in chunks across partials?

Introducing Blade’s Push & Stack directives

To give this functionality, Laravel does in fact have two little-known directives called ‘push’ and ‘stack’.

They allow you to ‘stack up’ items across partials with the ‘push’ directive, which can then be echoed out together with the ‘stack’ directive.

Here’s the above form example but with ‘push’ and ‘stack’ used in place of ‘section’ and ‘yield’.

// layouts.main.blade.php
<!doctype html>

...
@yield('partials.form')
...

@stack('custom_scripts')

// partials.form-content.blade.php
<textarea class="content" name="content"></textarea>

@push('custom_scripts')
// dummy javascript as example
$('.content').doSomething();
@endpush

// partials.form-tags.blade.php
<select class="tags" name="tags">
<option value="tagone">Tag One</option>
<option value="tagtwo">Tag Two</option>
<option value="tagthree">Tag Three</option>
</select>

@push('custom_scripts')
$('.tags').doSomethingElse()
@endpush

This will now compile all uses of @push('custom_scripts') and echo them out together wherever you call @stack('custom_scripts').

When I was shown this technique by a mate at work, it blew my mind.

Have fun.

Bypassing Laravel’s CSRF Middleware on selected routes (from 5.1)

A way to have some of your routes skip the middleware for CSRF protection. Handy in some situations.

Laravel does a great job of protecting us from cross-site request forgeries – or CSRF for short. But sometimes you may not wish to have that layer present on certain routes. With Laravel 5.1 you can very easily bypass this middleware, simply by populating an array in the following file:

app/Http/Middleware/VerifyCsrfToken.php

Within this class you can add a protected property — an array — called $except, which tells Laravel to apply this middleware to every route except the URLs you list there.

A complete example could be:

protected $except = [
    'ignore/this/url',
    'this/one/too',
    'and/this',
];

So for those three URLs, the CSRF middleware would be skipped.
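
The entries can also contain wildcards, which is handy for things like third-party webhook endpoints where every sub-path should skip the check. A small illustrative example:

protected $except = [
    'webhooks/*',
];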

My IndieWeb Setup

Please note: This is now out of date somewhat

This post is an outline of my IndieWeb setup. I describe some of the key plugins I have installed, as well as third party services I use. All of which play nicely together for the IndieWeb.

Built with WordPress

My personal site is built using WordPress and is a completely custom-built theme. WordPress – with its plethora of plugins and diverse, active community – provides one of the best platforms from which to build an IndieWeb site today, outside of rolling your own.

When I first heard about the IndieWeb and found myself wanting to implement its principles on my own site, I went ahead and installed a plugin I was recommended – the IndieWeb Plugin – which links to a collection of other plugins that implement certain functionality to tie into those IndieWeb principles.

Plugins I have installed

You don't have to limit yourself to only the plugins I list here – I have just listed the ones that I myself have in use in regards to the IndieWeb.

IndieWeb Plugin

This plugin, once installed, provides links to the recommended IndieWeb plugins – some of which I have listed below – that will help bring you into the IndieWeb. I have also provided direct links to the plugins themselves, or you can search in your admin area's new plugin interface. I haven't described them all yet as I am not 100% sure on what some of them do. As I gain an understanding of the others I use, I will include them below. The ones that I have described below are the ones I actively use and understand what they do for me.

Download the IndieWeb Plugin

NextScripts: Social Networks Auto-Poster

This is a great plugin for easily syndicating content out to many third-party silos. I use a custom fork of it with edits. The plugin creators have very kindly added in my code edits to allow Twitter replies that create those tweets as replies within the Twitter timeline.

Social Networks Auto-Poster – or SNAP for short – has a flexible way in which to template how and what to share to each silo when posting your content. For example, in a small note you may send the full text content only, whereas for an article you have written you may want to send the post title and permalink.

Download the SNAP Plugin

Rel-Syndication for WordPress

This plugin works beautifully with the Social Networks Auto-Poster plugin. It will grab the SNAP plugin's syndication data for a post and append a list of links to that post's content. It will also include the necessary microformats2 meta data in the links of those syndicated copies.

Download Rel-Syndication for WordPress

WordPress Webmention Plugin

This sets up webmention support on the site. It allows me to receive webmentions and automagically creates them as comments on those posts. It will also parse the content of posts I write for any links contained within, and send webmentions to those links.

Download the WordPress Webmention plugin

IndieWeb Custom Taxonomy

This plugin adds an extra semantic layer to posts. It allows you to categorize your posts as being one of the following: "Bookmark", "Favorite", "Like", "Reply", "Repost" and "RSVP" – all of which are actions found on social networks. It also allows you to save a URL that you may be replying to. The plugin will then pull in that URL to give a preview of the link with the post content. This shows the context to which your post content may be replying.

Download the IndieWeb Custom Taxonomy Plugin

WordPress Semantic Linkbacks

When you receive webmentions from a site – or from Bridgy if you use that (see below) – those comments are added to the site. What the Semantic Linkbacks plugin does is generate semantic comments using microformats2.

Please note: you will need to have the WordPress Webmention plugin installed for this plugin to recognize webmentions. See above.

Download the Semantic Linkbacks plugin

Other Tools I use

Bridgy

Bridgy is absolutely bloody fantastic. It is a web app built by Ryan Barrett that automatically polls a number of third-party silos – whichever ones you choose to connect – and pulls in any links to your website and any new responses to posts you have sent. Once it pulls in links and responses, it will send them over to your website as webmentions. The WordPress Webmention plugin – mentioned above – knows exactly what to do with them.

There are a couple of configurations that you may need to do also – at least these are what I needed to do. Firstly, when Bridgy finds replies to a tweet that originated from your site, it will need to know where that original link is on your website. For example, if you just sent Twitter a note with no links back, Bridgy will need to look on your site for that original message. But how does it know where the post is? It looks for the post that has, within its h-entry markup, the syndication link to that Twitter copy. If you are using the Rel-Syndication plugin for WordPress, then your content should already have the syndication links appended to it.

One thing I also had to do was to add a "feed" meta link into my site's header, in order to tell Bridgy where those links could be found on my site (as they were not on my homepage at the time).

And that's my IndieWeb setup. Like I said before, there are some other plugins that were recommended by the IndieWeb plugin. However, I have not included those here as I am not yet confident in describing them. Any questions about this setup or any other IndieWeb queries, please ask. I'll do my best to help.

WordPress “Must Use” Plugins Directory

The Issue

I wanted to build a site with some custom post types, taxonomies and the like. But what if somebody who has used my theme decided to try another theme later on? What guarantee do they have that those custom post types will be recognized by a future theme?

None — that’s what.

WordPress obviously has its plugin directory, which is designed for adding in pieces of functionality that should span across themes. But what if these are essential to the working of the site due to custom post types, taxonomies or shortcodes?

Either it can be a plugin — which could be accidentally deactivated, or added to the theme’s functions.php — which would need copying over to any future theme.

A Hidden Gem of a Solution

Well it turns out that there is a third option — a way for WordPress to auto load essential plugins regardless of the theme being used. Simply create a directory in the ‘wp-content’ folder called ‘mu-plugins’ — which stands for “must use plugins” — and add plugins as you would into the regular plugins folder; they will be auto loaded with any theme and just work.

These ‘must use’ plugins will then be viewable in the plugin admin page, in a new option called ‘must use’ — grouped with the other plugin page navigation items, ‘All’, ‘Active’ and ‘Inactive’.

‘Must use’ plugins can’t be deactivated in the admin — creating a nice separation of essential plugins for a particular user.
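
As a minimal illustration, an essential custom post type could live in a single file such as wp-content/mu-plugins/site-post-types.php (a hypothetical file name, registering an example 'review' post type):

<?php
/*
Plugin Name: Site Post Types
Description: Custom post types that must stay registered whichever theme is active.
*/

// Register a 'review' custom post type on every request.
add_action( 'init', function () {
    register_post_type( 'review', array(
        'label'    => 'Reviews',
        'public'   => true,
        'supports' => array( 'title', 'editor', 'custom-fields' ),
    ) );
} );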

Thanks to Justin Tadlock for this piece of info.

WYSIWYGs will kill us all

So okay, maybe they won't literally kill us all, but you've got to admit — they can be pretty damn annoying. At least the ones I have used.

I’m not dissing wysiwygs — just what they get used for.

Firstly let me just say that I use a wysiwyg editor on my own blog and it does exactly what I need it to. I type text and it saves that text. I click to add a link and it adds a link. Brill!

For a blog, where writing long posts with the odd link and / or bolding of text, a wysiwyg really is great. But what I don’t believe in is using a wysiwyg for anything more than that.

Let me explain.

With a standard WordPress install, when you write a post or page, the content from the wysiwyg is spat out by the function the_content(). This displays all that content in one block within the page. Of course WordPress is, at its roots, a blogging platform (and a damn good one), so the ease of writing and publishing has always been at the forefront.

But what if our content is more than just a blog post? What if it's an album or film review? Or specifications for a car? If we bundle all of the content into one block then we are stepping away from what we need most nowadays, which is structured content.

The need for Structured Data

The need for structured data these days is more important than ever. With the whole multi-device landscape, a site's content needs to be able to display beautifully wherever it is viewed. We need to be able to control as much of the content in as modular a way as possible. Also, as a result of making content more modular, it becomes a lot easier to group and sort that content.

For example, if we have a database of 500 albums with all of the content for each album in its own single block of content, it makes it extremely difficult to categorize them by artist or genre. Whereas if it was set up in a way that has separate fields for each content chunk — an artist field; a genre field; etc. — then those albums could be easily sorted and categorized.

Karen McGrane gave a fantastic talk at An Event Apart, Boston in 2012, where she talked about TV Guide, who made a wise decision about their content structure back in the 1980s. She talked about how TV Guide wrote out three different descriptions of the same programmes — one small, one medium and one long. This was before TV on demand or Tivo, where now that sort of structured content is invaluable for use across different devices and contexts.

I recommend you watch the video, but I wanted to include a great quote from that talk here:

…a clean base of presentation–independent, well–structured content that you have designed, from the start, with the intent that you may want it to go out and live on a wide variety of different platforms. In fact you know from the beginning this content is going to have to live in a variety of different places…

Karen McGrane An Event Apart, Boston, 2012

Example – content for a music album

All the content for an album in one chunk:

<!-- Output HTML of the_content() in a WordPress site -->
<p>This Train features 12 songs from the angelic, sometimes haunting, voice of Chrysta Bell. It was produced by David Lynch and features songs written by them both, including the track 'Polish Poem' from the film 'Inland Empire'.</p> 
<p>Artist: Chrysta Bell</p> 
<p>Produced by : David Lynch</p> 
<p>Release Date: September 29, 2011</p> 
<p>Tracklisting:</p>
<ol>
<li>This Train</li>
<li>Right Down To You</li>
<li>I Die</li>
<li>Swing With Me</li>
<li>Angel Star</li>
<li>Friday Night Fly</li>
<li>Down By Babylon</li>
<li>Real Love</li>
<li>Bird of Flames</li>
<li>Polish Poem</li>
<li>The Truth Is</li>
<li>All the Things</li>
</ol>

So while the above html code is okay, it doesn’t allow for much flexibility.

What happens if — in a few years — the website gets a complete redesign? Imagine there are 500 albums in the site’s database. Well then we’ve got a problem. We can have a lovely new shiny design but the bulk of the content is already set in stone somewhat.

In the example above we have the track listing below the review. What if we wanted to flip that? Or better yet, what if the new design wanted the track listing removed? What do we do then? Use regular expressions on the html to remove the track listing from the content?

No.

Also, as mentioned in Karen McGrane’s talk, the content should be able to live in a variety of different places. What happens if the content is viewed through a smart watch, or internet fridge? Will we want all of that content handed to us in one chunk?

Probably not.

Maybe we want to display just the Album title and artist with the option of loading in the track listing if needed.

In the music album example above, we can see content that could easily be separated into sensible chunks:

  • Track Listing
  • Review / Description (perhaps differing versions for use in different contexts?)
  • Producer
  • Artist
  • Title
  • Release Date

Using separate input fields for each of these chunks, means that each part of the content is individually available and much more flexible in how and where this content can be displayed.

WordPress has custom fields that can be entered on a per page/post basis, as well as the ability to add custom post types. No doubt other CMSs have similar options available — I’m just not very clued up on those ones.
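
For instance, in WordPress the album fields above could be stored as custom fields (post meta) and pulled out individually in a theme template. A rough sketch, assuming meta keys such as artist, producer and release_date have been filled in on the post:

<?php
// Fetch individual content chunks for the current album post.
$artist       = get_post_meta( get_the_ID(), 'artist', true );
$producer     = get_post_meta( get_the_ID(), 'producer', true );
$release_date = get_post_meta( get_the_ID(), 'release_date', true );
?>

<h1><?php the_title(); ?></h1>
<p>Artist: <?php echo esc_html( $artist ); ?></p>
<p>Produced by: <?php echo esc_html( $producer ); ?></p>
<p>Release Date: <?php echo esc_html( $release_date ); ?></p>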

The use of these custom fields is definitely a step in the right direction, but what we need is for CMSs to be more content-modular at their core. Like I said before, I’m not clued up on CMSs — other than WordPress and OpenCart — so please don’t hunt me down and shoot me in the face if there are indeed CMSs out there already like this.

And if there are please let me know of some good ones.

Tar!

Inspiring this post

This post was originally going to be me snivel–bitching about wysiwygs. But I heard this podcast on The Web Ahead, which talks about content structure in great depth, and it got me thinking about it in more detail than my initial post. The podcast also mentioned that talk by Karen McGrane.

On the evening of me writing this post, before publishing the following day, I travelled down to Milton Keynes Geek Night and saw five fantastic talks.

One of those talks was by Relly Annett-Baker called ‘Future Perfect Tense: creating good content for an imperfect web’. A lot of what Relly was talking about was stuff I had been thinking about that day — so that was nice.

I will link the recording of the talk when it goes online as it is well worth a listen.

HTML 5: Omitting unnecessary speech marks

In my last post I described how it is possible to cut down on your HTML filesize and save some time in your coding by omitting optional closing tags.

As a follow up to that post I thought I’d also describe another process to save even more on your filesize and a little bit more time still.

Please note, however, that these examples I'm describing are really micro-optimizations. For every project, to be the best that it can be, I would strongly recommend looking into combining your scripts into one single file and minifying it. Minifying your CSS can also go a long way to improving speed. Those techniques, as well as others, are a separate issue but are definitely worth your time in learning.

Back to the speech marks

A lot of us developers, me included, have a habit of wrapping up our attributes in speech marks, whether single or double, as in the following example:

<head>
  <meta charset="utf-8">
  <title>Example Title</title>
  <link rel="stylesheet" href="css/style.css">
</head>

The truth is, however, that in most cases you can leave the speech marks out completely, as in the following example:

<head>
  <meta charset=utf-8>
  <title>Example Title</title>
  <link rel=stylesheet href=css/style.css>
</head>

The browser will still render that correctly, and if you view the page source with the Chrome dev tools ‘inspect element’, you’ll see that the speech marks have in fact been put in for you!

Most Cases you say?

There is one situation where you will still need to use speech marks: when an attribute has more than one value, or includes any white space. For example:

<section>
  <span class="main-class secondary-class"></span>
  <img class=section-image src="images/image name with spaces.jpg">
</section>

So as a rule, when declaring attributes on html elements, you can omit all speech marks where there’s no white space contained. This is because the attribute ends when it hits the white space. I hope this helps you all to add an extra little bit of optimization into both your workflow and the size of your code.

If you have any of your own tips for code optimization, please share it in the comments section below. Thanks!

HTML5: On removing optional closing tags

Firstly, an example of some html…

<article>


  <p>Once upon a time there was a web programmer.</p>
  <p>He found that he wanted to cut down on his html filesize.</p>
  <p>He came up with a list of ideas. They were as follows:</p>


  <ul>
    <li>Minifying his code</li>
    <li>Cutting out content, (unacceptable!!!).</li>
    <li>Remove optional tags, speech marks, etc...</li>
  </ul>


  <p>On writing the last item in the list, he exclaimed, <em>That's amazing</em>, now to tell others...</p>


</article>

Lovely and semantic. Now, there’s nothing at all wrong with what I’ve written above. This article is all about showing you how there are certain things in your HTML that can be left out, without breaking any part of it.

You can actually omit many closing tags and still have it render exactly how you planned.

The example above, after removing optional closing tags:

<article>


  <p>Once upon a time there was a web programmer.
  <p>He found that he wanted to cut down on his html filesize.
  <p>He came up with a list of ideas. They were as follows:


  <ul>
    <li>Minifying his code
    <li>Cutting out content, (unacceptable!!!).
    <li>Remove optional tags, speech marks, etc...
  </ul>


  <p>On writing the last item in the list, he exclaimed, <em>That's amazing</em>, now to tell others...


</article>

The elements removed:

Paragraph closing tags can be omitted, because when the next one begins the browser knows that the last one has finished. The same goes for the list items: each new item begins with its <li>, which implicitly closes the previous one. But you'll notice other closing tags have been kept.

The elements remaining:

The article still needs to be closed, as the browser will otherwise think the article is ongoing. The </ul> has also been kept, as it wraps the list and stops the last list item from continuing onwards. And lastly, in the example above, the </em> has remained, simply because, if left out, the emphasis would continue for the rest of the paragraph.

Common Errors to be aware of:

There may be times when you will omit a closing tag, but find it may mess up your code. For Example:

<article>
  <h1>The heading is as expected. Yay!
  <p>But the paragraph inherits styles from the <h1>, boo!
</article>

The above happens, quite simply, because the browser doesn't know that the <h1> should close. So as a result the paragraph is treated as though it is 'wrapped within' the <h1></h1> tags. The <h1> in the example above is closed by the browser when it reaches the end of its parent container, the <article>. And so it ends up rendering as the following:

<article>
  <h1>The heading is as expected. Yay!
    <p>But the paragraph inherits styles from the <h1>, boo!</p>
  </h1>
</article>

Combating this, however, is easy. See the following:

<article>
  <header>
    <h1>The heading is now contained within both header tags that we close in the code...
  </header>
  <p>...and so now the heading styles wont bleed into this paragraph as they are contained in the <header>
</article>

By way of summary

At first it can feel odd omitting these optional closing tags, but it will, in the long run, save you both time and those precious bytes. You just need to make sure you pick your moments carefully. Generally, the rule I try to stick to is: if it's a container with more than one type of child element (i.e. article, section, div, nav etc.) close them off in your code. But if they are single entities, such as paragraphs, headings (of the same level), or list items, that are immediately preceded by the same tags and have no differing siblings, omit the closing tag.

It will take a bit of getting used to, but if you nest your code well you will quickly begin to notice where you can omit closing tags.

Developer inspect tools can help

I am a huge fan and regular user of the Chrome developer tools. You can use the 'inspect element' feature to test out your code and see where tags are being closed by the browser. Just write a paragraph without the closing tag and view it in the inspector. When you see it in the flesh and get used to how the browser renders your code, it will give you more confidence to omit those optional tags.

In Google Chrome, right click any part of the web page, and you should see the option to ‘inspect element’ towards the bottom.

CSS… how to use sass and compass: Part Two

Please note: for the sake of brevity, when using sass and compass, I refer to them both as compass.

Following on from our previous tutorial, I would like to introduce you to @import. @import is a css feature that will allow you to import stylesheets into one another. For example:

@import "normalize.css"; 
body{
    background:lightblue;
    width:1000px;
    margin:auto;
}

This will bring in ‘normalize.css’ from the same folder as the stylesheet itself, and then apply the body properties.

With plain CSS, however, the browser will make a separate request for each @import when the page loads, adding unnecessary time to your page download. In this part of my compass tutorial, I will show you how to incorporate @import and use compass to compile everything into one stylesheet.

How styles get compiled

Remember from the last part of the tutorial when we changed the style.scss file, and it updated our style.css one? Well we could in fact have any number of files, with different names, and they would each update in the same way. So if you had a base.scss, it would compile to base.css, typography.scss would update typography.css, and so on. You need not create the .css version either, as compass will do this for you automatically if it can’t find one when it compiles.

The way in which it compiles is defined in the config.rb file. As mentioned in the previous post, I highly recommend you set it to ‘:compressed’ before uploading to your server, as this will improve your page loading speed. The reason for this is that it removes any and all comments and white space.
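
For reference, the relevant line in config.rb looks something like this (a minimal example; any other settings in your generated config.rb can stay as they are):

# config.rb
# Use :expanded while developing, then switch to :compressed before uploading
output_style = :compressed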

Example Stylesheet setup

Below I have listed the default structure of my stylesheets. Create them as .scss files in your sass folder and please note the underscores, as these are needed. You won't be needing any .css versions of them; the only css file you will need is style.css.

I recommend you save a copy of the whole project directory when you have these stylesheets setup. That way you will have a nice starting off point for each of your projects.

  • _normalize.scss
  • _helpers.scss
  • _print.scss

Remember that stylesheet I advised you to temporarily save to your desktop? Well we are now going to bring that back in. If you open it in your code editor, you will see it is split into three main areas (not including the media queries part). The main code up top is the normalize. Then you have useful helper classes, and finally a set of print styles.

Copy and paste each part of the stylesheet into their respective .scss files.

Now re-open your style.scss and delete any previous code out, and write the following:

@import "normalize";
@import "helpers";
@import "print";

Save it. Notice how I have not included the underscore or the ‘.scss’ extension, as these are not required when importing them.

If it isn’t already running, start compass off watching your project as shown in the previous part. Compass should detect the changes to style.scss straight away and compile to its css equivalent. So now on re-opening style.css, you should see how normalize, helpers and print have all been pulled in from their imports on style.scss

What this now means is that you could theoretically split your styles into as many separate sheets as you wanted. As long as you remember to @import them into style.scss, compass will see any changes to any of them and compile them all together into the one style.css. See, I told you they were awesome.

I encourage you to go now and experiment with splitting up your own stylesheets into manageable chunks. You will soon find a balance that suits your own tastes.

Thank you for reading the tutorials this far. I hope these first two parts have given you a good start with using sass and compass, although I did say I’d only refer to them as compass… Darn it! Oh well.

I realize this isn't the most exciting of the features available to them. After all, I haven't even mentioned mixins or nesting (coming up in the next part), but for me at least this has been the most indispensable of the features I've learnt so far.

I hope this helps you get up and running with Sass and Compass with ease. Let me know how you get on. Thanks.