Category: Programming

Linux, Laravel, PHP. My notes and mini-guides on development-related topics.

  • 📂

    Preview Laravel’s migrations with the pretend flag

    Here is the command to preview your Laravel migrations without running them:

    cd /your/project/root
    php artisan migrate --pretend

    Laravel’s migrations give us the power to easily version control our database schema changes.

    In a recent task at work, I needed to find out why a particular migration was failing.

    This is when I discovered the simple but super-useful flag --pretend, which will show you the queries that Laravel will run against your database without actually running those migrations.
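
    As a rough illustration, take a hypothetical pending migration like the one below (the table and column names are invented for this example). Running php artisan migrate --pretend would print the create table SQL it translates to, rather than executing it:

    <?php

    // database/migrations/xxxx_xx_xx_create_posts_table.php (hypothetical)

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    class CreatePostsTable extends Migration
    {
        // With --pretend, Laravel prints the SQL this would generate,
        // without actually running it against the database
        public function up()
        {
            Schema::create('posts', function (Blueprint $table) {
                $table->id();
                $table->string('title');
                $table->timestamps();
            });
        }

        public function down()
        {
            Schema::dropIfExists('posts');
        }
    }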


  • 📂

    Giving a flatpak program access to home directory on Linux

    List out all of your installed Flatpaks and copy the “Application ID” for the Flatpak you want to give home directory access to.

    $ flatpak list

    Let’s assume we want to give the program “Insomnia” access to our home directory when it is used.

    The second column is the Application ID.

    The application ID for Insomnia is rest.insomnia.Insomnia.

    To give Insomnia access to your home directory, run the following:

    flatpak override --user --filesystem=home rest.insomnia.Insomnia

    Notes

    My knowledge of Flatpaks is limited so apologies if I end up being incorrect here.

    Flatpak’ed programs are self-contained installations that are sheltered from the system they are installed on. (Linux / security geeks may need to correct me here).

    By default, they don’t have access to the filesystem of your computer.

    I needed to give my own installation of Insomnia access to my system (just the home directory in my case) so that I could upload a file to it. The command above gives me that result.

    Other online tutorials

    There are some tutorials I’ve seen online that mention similar solutions, except using sudo and not including the --user flag. This didn’t give me the result I needed.

    You see, without the --user flag, the command will try to update the Flatpak’s global configuration — which is why it needs sudo privileges.

    But by using the --user flag, we are only affecting the configuration for the current user, and so the sudo is not needed.
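
    If you later want to review or undo the override, the flatpak override command also has --show and --reset flags:

    # Show the overrides currently in place for Insomnia
    flatpak override --user --show rest.insomnia.Insomnia

    # Remove all of your user-level overrides for Insomnia
    flatpak override --user --reset rest.insomnia.Insomnia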


  • 📂

    Setting up Elasticsearch and Kibana using Docker for local development

    Overview

    Elasticsearch is a super-fast search and analytics engine. Kibana is a separate program that is used for interacting with Elasticsearch.

    Here I am setting up Elasticsearch and Kibana, each in its own Docker container. I do this as a way to help keep my computer free from being cluttered with installed programs. Not only that, but since the containers are their own separate self-contained boxes, it also makes it easy to upgrade the Elasticsearch version I am using at a later date.

    Or even remove them entirely with minimal fuss.

    Please note: I am using version 7.10.1 of both programs in the examples below. You can look at each program’s respective Docker Hub page to target the exact version you require:

    • Elasticsearch: https://hub.docker.com/_/elasticsearch
    • Kibana: https://hub.docker.com/_/kibana

    Just replace any uses of “7.10.1” below with your own version.

    Creating and running containers for the services needed

    Run the following commands to download and run Elasticsearch locally:

    # Create a shared Docker network for the two containers to talk over
    docker network create elasticnetwork

    # Download the Elasticsearch docker image to your computer
    docker pull elasticsearch:7.10.1

    # Create a local container with Elasticsearch running
    docker run -d --name my_elasticsearch --net elasticnetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.ml.enabled=false" elasticsearch:7.10.1

    # Start the container again later, if it has been stopped
    docker container start my_elasticsearch

    And then run the following commands to download and run Kibana locally:

    # Download the Kibana docker image to your computer
    docker pull kibana:7.10.1

    # Create a local container with Kibana running
    # (ELASTICSEARCH_HOSTS points at the Elasticsearch container by its name on the
    # shared network; Kibana 7.x uses ELASTICSEARCH_HOSTS rather than the older ELASTICSEARCH_URL)
    docker run -d --name my_kibana --net elasticnetwork -e ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 -p 5601:5601 kibana:7.10.1

    # Start the container again later, if it has been stopped
    docker container start my_kibana

    Accessing Kibana

    Since Kibana connects to our Elasticsearch container, which it was told to use via the ELASTICSEARCH_HOSTS=http://my_elasticsearch:9200 section of the Kibana create command, we really only need to use Kibana directly.

    Kibana has its own Dev Tools for querying Elasticsearch, which so far has been enough for my own use cases.

    Head to http://localhost:5601 to access your own Kibana installation.

    Note: You can send curl requests directly to your Elasticsearch from the terminal by targeting the http://127.0.0.1:9200 endpoint.
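
    For example, a quick sanity check that the container is up and answering:

    # Should return a small JSON document describing your single-node cluster
    curl http://127.0.0.1:9200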

    Deleting the containers

    If you wish to remove Elasticsearch and/or Kibana from your computer, then enter the following commands into your terminal.

    Using Docker for local development makes this a cinch.

    # Stop the Elasticsearch container if it is running
    # (Use the name you gave it in the "--name" argument as its handle)
    docker container stop my_elasticsearch
    
    # Delete the Elasticsearch container
    docker container rm my_elasticsearch
    
    # Stop the Kibana container if it is running
    # (Use the name you gave it in the "--name" argument as its handle)
    docker container stop my_kibana
    
    # Delete the Kibana container
    docker container rm my_kibana

    If you need to set up the two programs again, you can just use the create commands shown above to create them as you did originally.
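
    Note that the above removes the containers but keeps the downloaded images on disk. To remove those as well:

    # Delete the downloaded images themselves
    docker image rm elasticsearch:7.10.1
    docker image rm kibana:7.10.1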


  • 📂

    Install MongoDB with Docker for local development

    Pull the docker image for mongo down to your computer.

    docker pull mongo

    Run the mongo container in the background, isolated from the rest of your computer.

    # Command explained below
    docker run -d -p 27017:27017 --name mongodb -v /data/db:/data/db mongo

    What I love about this approach is that I don’t start muddying up my computer installing new programs — especially if it’s just for the purposes of experimenting with new technologies.

    The main run command explained:

    • “docker run -d” tells Docker to run in detached mode, meaning it runs in the background. Otherwise, closing that terminal would stop the program that Docker is running (mongo in this case).
    • “-p 27017:27017” maps your computer’s port 27017 to port 27017 inside the container. (The first number is your computer’s port and the second is the container’s.)
    • “--name mongodb” just gives the container that will be created a nice name. Otherwise Docker will generate a random name.
    • “mongo” tells Docker which image to create the container from. Note that all flags must come before the image name, as anything after it is passed to the container as arguments.
    • “-v /data/db:/data/db” tells Docker to map the /data/db directory on your computer to the /data/db directory in the container. This ensures that if you restart the container, you will retain the mongo database’s data.
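
    Once the container is running, you can open a MongoDB shell inside it to poke around. Depending on the age of the mongo image you pulled, the bundled shell binary is either mongosh (newer) or mongo (older):

    # Open an interactive MongoDB shell inside the running container
    docker exec -it mongodb mongosh

    # On older images, the shell binary is "mongo" instead
    docker exec -it mongodb mongo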


  • 📂

    Fixing my local development file / folder permissions

    # Give the group read, write and execute permissions on all directories
    sudo find . -type d -exec chmod g+rwx {} +

    # Give the group read and write permissions on all files
    sudo find . -type f -exec chmod g+rw {} +

  • 📂

    Bulk converting large PS4 screenshot PNG images into 1080p JPGs

    I tend to have my screenshots set to the highest resolution when saving on my PlayStation 4.

    However, when I upload to the screenshots area of this website, I don’t want the images to be that big — either in dimensions or file size.

    This snippet is how I bulk convert those images ready for uploading. I use an Ubuntu 20.04 operating system when running this.

    # Make sure ImageMagick is installed
    sudo apt install imagemagick
    
    # Run the command
    mogrify -resize 1920x1080 -format jpg folder/*.png

    You can change the widthxheight dimensions after the -resize flag to your own required size, as well as the output image format after the -format flag. Note that ImageMagick treats the dimensions as a maximum bounding box and preserves the aspect ratio by default.
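
    If you ever need to force the exact dimensions regardless of aspect ratio, ImageMagick accepts an exclamation mark after the geometry (escaped here so the shell doesn’t interpret it):

    # Force exactly 1920x1080 output, ignoring the original aspect ratio
    mogrify -resize 1920x1080\! -format jpg folder/*.png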


  • 📂

    Updating PHP versions in Ubuntu 20.04

    For an older PHP project, I needed to install an older version of PHP. This is what I did to set that up.

    Installing a different PHP version

    sudo add-apt-repository ppa:ondrej/php
    sudo apt-get update
    sudo apt-get install -y php7.1

    Rebinding php to required version

    Some of these binds are probably not needed. I think the main ones, at least for my use case, were php and phar.

    sudo update-alternatives --set php /usr/bin/php7.1
    sudo update-alternatives --set phar /usr/bin/phar7.1
    sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.1
    sudo update-alternatives --set phpize /usr/bin/phpize7.1
    sudo update-alternatives --set php-config /usr/bin/php-config7.1

    For some reason the --set flag stopped working, so I had to use:

    sudo update-alternatives --config php
    sudo update-alternatives --config phar
    
    And so on, updating each one via the interactive prompt in the terminal.
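
    Either way, you can confirm which version the php binary now points to:

    # Should now report PHP 7.1.x
    php -v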

    P.S. If using PHP-FPM, you could also set up different server conf files and point the FPM path to the version you need. In my case I only needed the older version on the command line, so the above was enough.


  • 📂

    How I would set up Laravel with Docker

    This is a quick brain dump for myself to remember how I set up Laravel with Docker. Hopefully it can help others out also.

    I tried to avoid Docker for the longest time due to the ease of just running php artisan serve. However, when your site relies on extra dependencies, Docker can be helpful, especially with multiple developers, in getting the whole codebase up and running more easily.

    This post assumes you have set up a basic Laravel project on a Linux computer, and have both Docker and Docker Compose installed locally.

    What will this project use?

    This is only a basic example to get up and running with the following dependencies. You can add more items to your docker-compose.yml file as you need to.

    Note: whatever you choose to name each extra service in your docker-compose.yml file, use its key as the reference point in your .env file (see the example values just after the docker-compose.yml file below).

    • The main site codebase
    • A MySQL database
    • An NGINX webserver
    • PHP

    docker-compose.yml

    Have a file in the project root named docker-compose.yml:

    version: "3.3"
    
    services:
      mysql:
        image: mysql:8.0
        restart: on-failure
        env_file:
          - .env
        environment:
          MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
          MYSQL_DATABASE: ${MYSQL_DATABASE}
      nginx:
        image: nginx:1.15.3-alpine
        restart: on-failure
        volumes:
          - './public/:/usr/src/app'
          - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
        ports:
          - 80:80
        env_file:
          - .env
        depends_on:
          - php
      php:
        build:
          context: .
          dockerfile: './docker/php/Dockerfile'
        restart: on-failure
        env_file:
          - .env
        user: ${LOCAL_USER}
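
    As a hypothetical illustration of the note above, with the database service keyed as mysql, Laravel’s own database settings in .env would use that service key as the host name (the values shown match the example variables added later in this post):

    DB_CONNECTION=mysql
    DB_HOST=mysql
    DB_PORT=3306
    DB_DATABASE=example
    DB_USERNAME=root
    DB_PASSWORD=root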

    Dockerfile

    Have a Dockerfile located here: ./docker/php/Dockerfile. I keep it in a separate folder for tidiness.

    # ./docker/php/Dockerfile
    FROM php:7.2-fpm

    # Install the PHP extension needed for MySQL access
    RUN docker-php-ext-install pdo_mysql

    # Install and enable APCu for caching
    RUN pecl install apcu-5.1.8
    RUN docker-php-ext-enable apcu

    # Install Composer
    # (Note: the installer's SHA-384 hash changes with each installer release.
    # Get the current one from https://getcomposer.org/download/ if verification fails.)
    RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
        && php -r "if (hash_file('SHA384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
        && php composer-setup.php --filename=composer \
        && php -r "unlink('composer-setup.php');" \
        && mv composer /usr/local/bin/composer

    WORKDIR /usr/src/app

    COPY ./ /usr/src/app

    # Make the project's vendor binaries available on the PATH
    # (a plain "RUN PATH=..." would not persist between image layers)
    ENV PATH="${PATH}:/usr/src/app/vendor/bin:bin"
    

    default.conf

    Have a default.conf file for the project’s nginx container saved here: ./docker/nginx/default.conf

    # ./docker/nginx/default.conf
    server {
     server_name ~.*;
    
     location / {
         root /usr/src/app;
    
         try_files $uri /index.php$is_args$args;
     }
    
     location ~ ^/index\.php(/|$) {
         client_max_body_size 50m;
    
         fastcgi_pass php:9000;
         fastcgi_buffers 16 16k;
         fastcgi_buffer_size 32k;
         include fastcgi_params;
         fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
     }
    
     error_log /dev/stderr debug;
     access_log /dev/stdout;
    }

    Add the necessary variables to your .env file

    There are some variables used in the docker-compose.yml file that need to be added to the .env file. The values could be hard-coded into docker-compose.yml directly, but using the .env file makes it more straightforward for other developers to customise their own setup.

    MYSQL_ROOT_PASSWORD=root
    MYSQL_DATABASE=example
    LOCAL_USER=1000:1000
    

    The MYSQL_ROOT_PASSWORD and MYSQL_DATABASE variables are self-explanatory, but the LOCAL_USER variable refers to the user id and group id of the currently logged-in user on the host machine. These normally default to 1000 for both user and group.

    If your user and/or group ids happen to be different, just alter the variable value.

    Note: find out your own ids by opening your terminal and typing id followed by enter. You should see something like the following:

    uid=1000(david) gid=1000(david) groups=1000(david),4(adm),27(sudo),1001(rvm)

    uid and gid are the numbers you need, for user and group respectively.

    Run it

    Run the following two commands separately. Once they are finished, head to http://localhost to view the running code.

    Note: This setup uses port 80 so you may need to disable any local nginx / apache that may be running currently.

    docker-compose build
    docker-compose up -d
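
    Once the containers are up, artisan and composer commands should be run inside the php service’s container. For example:

    # Run Laravel's migrations inside the php container
    docker-compose exec php php artisan migrate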

    Any mistakes or issues, just email me.

    Thanks for reading.


  • 📂

    Setting up my own Nextcloud (Version 16)

    Setting up your very own Nextcloud server from scratch. This has been tested with versions 15 and 16 of the software. Any questions, please do contact me.

    Updated on: 24th June 2019

    Set up a new server (with Digital Ocean)

    If you don’t have an account already, head to Digital Ocean and create a new account. Of course, you can use any provider that you want to – I just happen to use Digital Ocean, so can only speak from that experience.

    Login to your account.

    Setup your SSH key

    In the next step we will be creating your new droplet (server), and you will need an SSH key to add to it. This allows for easy and secure access to your new droplet from your local computer, via your terminal.

    If you are going to use the Digital Ocean console terminal, skip down to ‘Create the new “Droplet”’ below, as you won’t need an SSH key.

    Creating the key (if you haven’t already)

    If you haven’t generated an SSH key pair before, open a fresh terminal window and enter the following:

    ssh-keygen -t rsa

    Press enter through all of the defaults to complete the creation.

    Getting the contents of the public key

    Type this to display your new public key:

    cat ~/.ssh/id_rsa.pub

    This will give you a long string of text starting with ssh-rsa and ending with something like yourname@your-computer.

    Highlight the whole string, including the start and end points mentioned, then right-click and copy.

    When you are creating your droplet below, you can select the New SSH Key button and paste your public key into the box it gives you. You will also need to give the key a name when you add it in Digital Ocean, but you can name it anything.

    Then click the Add SSH Key button and you’re done.

    Create the new “Droplet”

    Digital Ocean refers to each server as a droplet, going with the whole digital “ocean” theme.

    Head to Create > Droplets and click the “One-click apps” tab. Then choose the following options in the selection (Or your own custom selection – just take into account the monthly cost of each option):

    • LAMP on 18.04
    • $15/Month (2GB / 60GB / 3TB Transfer)
    • Enable backups (not necessary but recommended)
    • London (Choose your closest / preferred location)
    • Add your SSH key (see above)
    • Optionally rename the hostname to something more readable

    Once you have selected the above (or your own custom options) click create. After a few moments, your droplet will be ready to use.

    Set your DNS

    Go to your domain name provider, Hover in my case, and set up the subdomain for your Nextcloud installation, using the IP address of your new droplet.

    I’m assuming that you already have your own domain name, perhaps for your personal website / blog. In which case we are adding a subdomain to that (so https://nextcloud.yourdomain.co.uk, for example).

    But there is nothing stopping you from buying a fresh domain and using it exclusively for your new Nextcloud (https://my-awesome-nextcloud.co.uk).

    I will continue this guide assuming that you are using a subdomain.

    You will add it in the form of an A record. This is how I would add it in Hover:

    1. Select your own domain
    2. Choose edit > edit DNS
    3. Click Add A record on the DNS edit page
    4. Fill in the hostname as your desired subdomain for your Nextcloud. For example if you were having nextcloud.mydomain.co.uk, you would just enter nextcloud.
    5. Fill in the IP address field with the IP address of your new Droplet in Digital Ocean.
    6. Click Add Record

    Configuring the server

    Install all the required programs for Nextcloud

    First ssh into your new server:

    ssh root@YOUR.IP.ADDRESS.HERE

    Choosing the LAMP option when setting up the droplet installed Linux, Apache2, MySQL and PHP. However, there are still some extra dependencies that Nextcloud needs to run.
    Let’s install those next:

    apt-get update
    
    apt-get install libapache2-mod-php7.2 php7.2-gd php7.2-json &&
    apt-get install php7.2-mysql php7.2-curl php7.2-mbstring &&
    apt-get install php7.2-common php7.2-intl php-imagick php7.2-xml &&
    apt-get install php7.2-zip php7.2-ldap php7.2-imap  php7.2-gmp &&
    apt-get install php7.2-apcu php7.2-redis php7.2-imagick ffmpeg unzip

    Download and install the Nextcloud codebase

    Please note that I am using version 15.0.0 in this example. However, when you read this you may have a new version available to you. I will try and keep this guide as up to date as possible.

    # Download the codebase and the "checksum" file.
    wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip
    wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip.sha256
    
    # Make sure that the codebase is genuine and hasn't been altered.
    sha256sum -c nextcloud-15.0.0.zip.sha256
    
    # Move the unzipped codebase into the webserver directory.
    unzip nextcloud-15.0.0.zip
    cp -r nextcloud /var/www
    chown -R www-data:www-data /var/www/nextcloud

    Apache config example

    nano /etc/apache2/sites-available/000-default.conf

    An example apache config:

    <VirtualHost *:80>
            ServerAdmin mail@yourdomain.co.uk
            DocumentRoot /var/www/nextcloud
    
            <Directory /var/www/nextcloud/>
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
            </Directory>
    
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
    
            <IfModule mod_dir.c>
                DirectoryIndex index.php index.pl index.cgi index.html index.xhtml index.htm
            </IfModule>
    
            RewriteEngine on
            RewriteCond %{SERVER_NAME} =nextcloud.yourdomain.co.uk
            RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
    </VirtualHost>

    Then enable the required Apache modules and restart Apache:

    a2enmod rewrite && a2enmod headers && a2enmod env &&
    a2enmod dir && a2enmod mime && systemctl restart apache2

    A quick mysql fix

    In recent versions of MySQL on Ubuntu, the root user authenticates using the auth_socket plugin by default, which means password authentication won’t work. So firstly we need to alter that user to use password authentication.

    apt install mysql-server
    mysql
    
    # Inside the mysql shell
    ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_secret_password';
    FLUSH PRIVILEGES;
    quit

    SSL with Let’s Encrypt

    apt install certbot
    certbot --apache -d nextcloud.yourdomain.co.uk

    You will then be asked some questions about your installation:

    • Email address (your… umm… email address :D)
    • Whether you agree to Let’s Encrypt’s Terms of Service (Agree)
    • Whether to redirect HTTP traffic to HTTPS (choose Yes)

    Let’s Encrypt will handle registering the Apache settings needed for your new SSL certificate to work. It uses the server name you entered in the 000-default.conf file earlier.

    It will also create a new file that is used by Apache for the SSL. For me, this file was at /etc/apache2/sites-available/000-default-le-ssl.conf.

    First Login!

    Now go to https://nextcloud.yourdomain.co.uk and you should see your nice new shiny Nextcloud installation.

    Creating the admin account

    Fill in the fields with your desired name and password for the admin account. You can just use the admin account as your main account if you will be the only one using this Nextcloud. You can also give others access to this site with their own login details if you want, but without the admin-level privileges.

    For the database fields, enter root as the username. Then for the password, use the one that you set in the previous mysql command above. For the database name choose whatever name you wish, as the installation will create it for you.

    Click finish.

    After a few moments, your Nextcloud instance should present you with the landing screen along with the welcome popup. Go ahead and read it; you could even install the app for your devices as it suggests.

    Finishing touches

    If you click the cog icon in the top right of your screen, followed by settings in its dropdown, you will come to the main settings area. In the left-hand column, beneath the heading “Administration”, you should see the link for “Overview”. Click it.

    Now you should see a bunch of security and setup warnings at the top of the page. This is nothing to worry about; it is simply telling you about some actions that are highly recommended to set up.

    We will do that now. 🙂

    The “Strict-Transport-Security” HTTP header is not set to at least “15552000” seconds. For enhanced security, it is recommended to enable HSTS as described in the security tips.

    All that is needed to fix this first one is a quick edit to the Apache config file that Let’s Encrypt created for the installation.

    nano /etc/apache2/sites-available/000-default-le-ssl.conf

    And then add the following three lines within the <VirtualHost *:443> tag.

    <IfModule mod_headers.c>
        Header always add Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    </IfModule>

    And then reload apache:

    systemctl reload apache2

    Refreshing the settings page should see that warning disappear.

    No memory cache has been configured. To enhance performance, please configure a memcache, if available.

    Open up your Nextcloud config file:

    nano /var/www/nextcloud/config/config.php

    At the bottom of the config array, add the following line:

    'memcache.local' => '\OC\Memcache\APCu',
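
    For context, the line just sits alongside the existing options in the returned config array. A rough sketch (the surrounding values are purely illustrative):

    <?php
    $CONFIG = array (
      'instanceid' => '...',
      // ...your existing options...
      'memcache.local' => '\OC\Memcache\APCu',
    );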

    Refresh your browser and that next warning should now vanish.

    For future reference, you can always take a look in the sample Nextcloud config file at /var/www/nextcloud/config/config.sample.php. It will show you all available config options.

    The PHP OPcache is not properly configured.

    With this warning, Nextcloud should display some sample opcache code to paste over. This one caught me out, as I couldn’t work out which ini file this example code should go in.

    After some trial and error, I discovered that for me, it was located in an opcache.ini file:

    nano /etc/php/7.2/mods-available/opcache.ini

    Then at the bottom of the file, I pasted the following:

    opcache.enable=1
    opcache.enable_cli=1
    opcache.interned_strings_buffer=8
    opcache.max_accelerated_files=10000
    opcache.memory_consumption=128
    opcache.save_comments=1
    opcache.revalidate_freq=1

    Reload apache:

    systemctl reload apache2

    Some columns in the database are missing a conversion to big int.

    I only actually came across this warning when I was creating a dummy Nextcloud installation to help with writing this guide. You may not actually get it. But if you do, here’s the fix:

    sudo -u www-data php /var/www/nextcloud/occ db:convert-filecache-bigint

    This will warn you that it could take hours to do its thing, depending on the number of files. However, since we are running it right after installation, it will not take even a second.

    The PHP memory limit is below the recommended value of 512MB

    To fix this, I just had to edit the following file:

    nano /etc/php/7.2/apache2/php.ini

    Then find the memory_limit line and alter it to look like this:

    memory_limit = 512M

    Then restart apache:

    service apache2 restart

    All Done

    Once you refresh the settings page once more, you should see a beautiful green tick with the message “All checks passed”.

    Good feeling, isn’t it?

    If for any reason you are still getting warnings, please don’t hesitate to contact me. I’ll do my best to help. Email: mail@davidpeach.me. Alternatively, you can head to the Nextcloud Documentation.


  • 📂

    How to easily set a custom redirect in Laravel form requests

    In Laravel you can create custom request classes that house the validation for any given route. If that validation fails, Laravel’s default action is to redirect the visitor back to the previous page. This is commonly used when a form is submitted incorrectly – the visitor is redirected back to said form to correct the errors. Sometimes, however, you may wish to redirect the visitor to a different location altogether.

    TL;DR (Too long; didn’t read)

    At the top of your custom request class, add one of the following protected properties and give it your required value. I have given example values to demonstrate:

    protected $redirect = '/custom-page'; // Any URL or path
    protected $redirectRoute = 'pages.custom-page'; // The named route of the page
    protected $redirectAction = 'PagesController@customPage'; // The controller action to use.
    

    This will then redirect your visitor to that location should they fail any of the validation checks within your custom form request class.
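
    As a minimal sketch, assuming a hypothetical StoreCommentRequest class and a named route pages.custom-page, the property just sits at the top of the generated class:

    <?php

    namespace App\Http\Requests;

    use Illuminate\Foundation\Http\FormRequest;

    class StoreCommentRequest extends FormRequest
    {
        // Send the visitor here when validation fails,
        // instead of back to the previous page
        protected $redirectRoute = 'pages.custom-page';

        public function authorize()
        {
            return true;
        }

        public function rules()
        {
            return [
                'body' => 'required|string|max:1000',
            ];
        }
    }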

    Explanation

    When you create a request class through the Laravel artisan command, it will create one that extends the base Laravel class Illuminate\Foundation\Http\FormRequest. Within this class, the three protected properties listed above are declared (from line 33 at the time of writing), but not set to a value.

    Then further down in the base class, on line 127 at the time of writing, there is a protected method called getRedirectUrl. This method checks whether any of the three redirect properties have been set. The first one it finds to be set, in the order given above, is the one used as the custom redirect location.

    Here is that getRedirectUrl method for your convenience:

    /**
     * Get the URL to redirect to on a validation error.
     *
     * @return string
     */
    protected function getRedirectUrl()
    {
        $url = $this->redirector->getUrlGenerator();
    
        if ($this->redirect) {
            return $url->to($this->redirect);
        } elseif ($this->redirectRoute) {
            return $url->route($this->redirectRoute);
        } elseif ($this->redirectAction) {
            return $url->action($this->redirectAction);
        }
    
        return $url->previous();
    }
    
    

    Do you have any extra tips to add to this? Let me know in the comments below.

    Thanks.