Tag: servers

  • 📂

    Backing up Docker volume data to Digital Ocean spaces with encryption

    Backups are a must for pretty much anything digital, and automating those backups makes life so much easier should you ever lose your data.

    My use case

    My own use case is backing up the data on my home server, since it stores my music collection and my family’s photos and documents.

    All of the services on my home server are installed with Docker, with all of their data in separate Docker volumes. This means I should only need to back up the folders that get mounted into the containers, since the services themselves can easily be re-deployed.

    I also want this data to be encrypted, since I will be keeping both an offline local copy and a copy with a third-party cloud provider (Digital Ocean Spaces).

    Setting up s3cmd

    S3cmd is a command-line utility for interacting with S3-compatible storage services.

    It will enable me to send a copy of my data to my Digital Ocean Spaces account, encrypting it beforehand.

    Install s3cmd

    The official installation instructions for s3cmd can be found in the GitHub repository.

    For Arch Linux I used:

    Bash
    sudo pacman -S s3cmd

    And for my home server, which is running Ubuntu Server, I installed it via Python’s package manager, “pip”:

    Bash
    sudo pip install s3cmd
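
    Whichever way it is installed, you can confirm s3cmd is available before moving on:

    Bash
    s3cmd --version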

    Configuring s3cmd

    Once installed, the first step is to run through the configuration steps with this command:

    Bash
    s3cmd --configure

    Then answer the questions that it asks you.

    You’ll need these items to complete the steps:

    • Access Key (for the Digital Ocean API)
    • Secret Key (for the Digital Ocean API)
    • S3 endpoint (e.g. lon1.digitaloceanspaces.com)
    • DNS-style bucket template (I use %(bucket)s.ams3.digitaloceanspaces.com)
    • Encryption password (remember this, as you’ll need it whenever you need to decrypt your data)

    The other options should be fine as their default values.

    Your configuration will be stored as a plain-text file at ~/.s3cfg. This includes that encryption password.
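
    For reference, the relevant entries in that file look roughly like this (the values below are placeholders, and the exact set of keys can vary between s3cmd versions):

    Plaintext
    access_key = YOUR_SPACES_ACCESS_KEY
    secret_key = YOUR_SPACES_SECRET_KEY
    host_base = ams3.digitaloceanspaces.com
    host_bucket = %(bucket)s.ams3.digitaloceanspaces.com
    gpg_passphrase = your-encryption-password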

    Automation script for backing up Docker volume data

    Since all of the data I actually care about on my server lives in directories that get mounted into Docker containers, I only need to compress and encrypt those directories for backing up.

    If I ever need to re-install my server, I can just start fresh Docker containers, then move my latest backups to the correct paths on the new server.

    Here is my bash script that will archive, compress, and push my data over to Digital Ocean Spaces, encrypting it via GPG before sending.

    I have added comments above each section to make it clearer what each step is doing:

    Bash
    #!/usr/bin/bash
    
    ## Root directory where all my backups are kept.
    basepath="/home/david/backups"
    
    ## Variables for use below.
    appname="nextcloud"
    volume_from="nextcloud-aio-nextcloud"
    container_path="/mnt/ncdata"
    
    ## Ensure the backup folder for the service exists.
    mkdir -p "$basepath"/"$appname"
    
    ## Get current timestamp for backup naming.
    datetime=$(date +"%Y-%m-%d-%H-%M-%S")
    
    ## Start a new ubuntu container, mounting all the volumes from my nextcloud container 
    ## (I use Nextcloud All in One, so my Nextcloud service is called "nextcloud-aio-nextcloud")
    ## Also mount the local "$basepath"/"$appname" to the ubuntu container's "/backups" path.
    ## Once the ubuntu container starts it will run the tar command, creating the tar archive from 
    ## the contents of the "$container_path", which is from the Nextcloud volume I mounted with 
    ## the --volumes-from flag.
    docker run \
      --rm \
      --volumes-from "$volume_from" \
      -v "$basepath"/"$appname":/backups \
      ubuntu \
      tar cvzf /backups/"$appname"-data-"$datetime".tar.gz "$container_path"
    
    ## Now I use the s3cmd command to move that newly-created 
    ## backup tar archive to my Digital Ocean spaces.
    s3cmd -e put \
      "$basepath"/"$appname"/"$appname"-data-"$datetime".tar.gz \
      s3://scottie/"$appname"/
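
    Assuming the script is saved as /home/david/backup-nextcloud (the path referenced in the cron job below), it just needs to be made executable, after which it can be run manually to test everything end-to-end:

    Bash
    chmod +x /home/david/backup-nextcloud
    /home/david/backup-nextcloud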
    

    Automating the backup with a cronjob

    Cron jobs are a way to automate any task you want on a Linux system.

    You can have fine-grained control over how often you want to run a task.

    Although working with Linux’s cron scheduler is outside the scope of this guide, I will share the setting I have for my Nextcloud backup, along with a brief explanation of its configuration.

    The command to edit what cron jobs are running on a Linux system, Ubuntu in my case, is:

    Bash
    crontab -e

    This will open up a temporary file to edit, which will get written to the actual cron file when saved — provided it is syntactically correct.

    This is the setting I have in mine for my Nextcloud backup (it should all be on a single line):

    Bash
    10 3 * * 1,4 /home/david/backup-nextcloud >> /home/david/backups/backup-nextcloud.log

    The numbers and asterisks are telling cron when the given command should run:

    Plaintext
    10   Minute (10th minute of the hour)
    3    Hour (3am)
    *    Day of month (any)
    *    Month (any)
    1,4  Day of week (Monday and Thursday)

    So my configuration says cron will run the /home/david/backup-nextcloud command every Monday and Thursday at 3:10am, appending the command’s output to my log file for the Nextcloud backups.
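
    To check that the job is registered and that it is actually running, you can list the current user’s crontab and keep an eye on the log file (using the same paths as above):

    Bash
    # List the installed cron jobs for the current user
    crontab -l

    # Follow the backup log to confirm the job ran
    tail -f /home/david/backups/backup-nextcloud.log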

    Decrypting your backups

    Download the file from your Digital Ocean Spaces account.

    Go into the directory it is downloaded to and run the file command on the archive:

    Bash
    # For example
    file nextcloud-data-2023-11-17-03-10-01.tar.gz
    
    # You should get something like the following feedback:
    nextcloud-data-2023-11-17-03-10-01.tar.gz: GPG symmetrically encrypted data (AES256 cipher)

    You can decrypt the archive with the following command:

    Bash
    gpg --decrypt nextcloud-data-2023-11-17-03-10-01.tar.gz > nextcloud-backup.tar.gz

    When you are prompted for a passphrase, enter the one you set up when configuring the s3cmd command previously.

    You can now extract the archive and see your data:

    Bash
    tar -xzvf nextcloud-backup.tar.gz

    The archive will be extracted into the current directory.
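
    Note that tar strips the leading slash from /mnt/ncdata when it creates the archive, so the restored data ends up under mnt/ncdata inside the current directory.

    To get that data back onto a freshly deployed server, you can reverse the backup step. This is just a sketch, assuming the new Nextcloud container is again called nextcloud-aio-nextcloud, its data still lives at /mnt/ncdata, and the container is stopped while you copy:

    Bash
    # Mount the extracted data alongside the fresh container's volumes
    # and copy it into place, preserving ownership and permissions.
    docker run \
      --rm \
      --volumes-from nextcloud-aio-nextcloud \
      -v "$(pwd)"/mnt/ncdata:/restore \
      ubuntu \
      cp -a /restore/. /mnt/ncdata/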


  • 📂

    I’m now running Pi-hole on my Raspberry Pi 2b.

    It’s both amazing and depressing just how many trackers are being blocked by it. I even noticed a regular ping being made to an Amazon endpoint exactly every 10 minutes.

    I will try and write up my setup soon, which is a mix of setting up the Raspberry Pi and configuring my home router.


    I’ve also managed to finally get a home server running again – using Ubuntu Server LTS.

    My plan for the server is to just install the services I want to self-host using Docker, with Docker being the only program I’ve installed on the machine itself.

    So far I have installed the following:

    • Home Assistant — After initially playing with this I have decided that it’s incredible. It connected to my LG TV and lets me control it from the app or my laptop.
    • Portainer — A graphical way to interact with my Docker containers on the server.