
Our backup approach

In the previous post, I presented my criteria for good backups and outlined some implementation approaches. In this post I'll provide a step-by-step description for setting up a backup process for a Nextcloud Server that will fulfill all those criteria.

Feel free to pick, choose, and adjust parts of my approach for your own system.

You can also skip directly to the final code if you're fluent in bash and don't care for the reasoning behind the individual steps.

Step 1: Reliable zero-downtime snapshots

A typical Nextcloud backup will likely look like this:

  1. Turn maintenance mode on
  2. dump/backup the database
  3. copy the user files and the nextcloud installation directory to the backup target location
  4. Turn maintenance mode back off

Enabling maintenance mode ensures that everything included in the backup is in sync and thus represents a consistent state of the service at a specific point in time. Otherwise, we might run into issues when restoring the backup - for instance, files being present in the filesystem but not known to the database (and thus not visible to Nextcloud).

Unfortunately, we have ruled out using maintenance mode because we don't want any downtime during the backup process, so we need another solution.

To accomplish consistent zero-downtime backups, we will create synchronized snapshots of everything that needs to be included in a typical Nextcloud backup: the users' files, the installation directory, and the database. For this purpose, we will rely on the snapshot feature of our filesystem and on so-called single-transaction copies of our database. This means we must choose a database and filesystems for our server that provide these features.

Popular modern storage layers like BTRFS, ZFS and LVM, as well as many VM storage solutions, provide snapshot capabilities. Filesystem snapshots are read-only copies of a directory or volume that are usually created nearly instantly and initially take little to no additional storage space. This guide will give examples for BTRFS and LVM (more might be added in the future).

Most modern transactional databases provide options for single-transaction copies of the database. I will give examples on how to achieve this for MariaDB/MySQL and PostgreSQL.

Is any of this applicable to containerized deployments (e.g. Nextcloud All-in-One which is based on Docker)?

Yes, you just need to make sure that the volumes which contain your Nextcloud installation directory and your user files are backed by compatible storage - e.g. mounting a btrfs subvolume under the respective paths.

You will also have to prepend some of the commands in this guide with whatever is necessary to run a command in your container. For Docker, that would be: docker exec <name-of-your-container> <original-command>. Other commands, especially filesystem operations, always need to be executed on the host system.
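If you script this, the prefix can be wrapped in a small helper so the rest of the script stays readable. A minimal sketch - the container name is a placeholder, and CONTAINER can be left empty for bare-metal installs:

```shell
# Run a command either directly or inside the Nextcloud container,
# depending on whether CONTAINER is set
CONTAINER="${CONTAINER:-}"   # e.g. CONTAINER="nextcloud-aio-nextcloud"
nc_exec() {
    if [ -n "$CONTAINER" ]; then
        docker exec "$CONTAINER" "$@"
    else
        "$@"
    fi
}

# Example: works the same on bare metal and in docker
nc_exec echo "hello from the nextcloud context"
```

Remember that filesystem snapshot commands must not go through this wrapper - they always run on the host.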

Taking filesystem snapshots

Let's walk through the process of creating filesystem snapshots for the user files directory and the Nextcloud installation directory.

BTRFS

This requires the nextcloud installation directory (assumed to be at /var/www/nextcloud) or respectively the user files directory (assumed to be at /mnt/data) to be a BTRFS subvolume (created with btrfs subvolume create <path>).
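To verify this precondition, you can check which filesystem backs a directory. A small sketch - the path is the one assumed in this guide, and the demo falls back to / so it runs anywhere:

```shell
# Print the filesystem type backing a directory
target="/var/www/nextcloud"
fstype="$(findmnt -no FSTYPE --target "$target" 2>/dev/null || stat -f -c %T /)"
echo "filesystem: $fstype"

# On btrfs, the root of every subvolume has inode number 256, which
# distinguishes a real subvolume from a plain directory:
# stat -c %i "$target"   # prints 256 for a subvolume
```

If the directory is on btrfs but not yet a subvolume, create a subvolume, move the data into it, and mount or symlink it at the original path.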

# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME=/mnt/data/nc_files
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"

# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME=/var/www/nextcloud
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
btrfs subvolume snapshot \
  "$USER_FILES_VOLUME" "$USER_FILES_SNAPSHOT_PATH"
btrfs subvolume snapshot \
  "$NC_INSTALL_VOLUME" "$NC_INSTALL_SNAPSHOT_PATH"

LVM

This requires the Nextcloud installation directory and the user files directory to each be an LVM logical volume.

# Adjust to match the LVM volume group containing your user files directory
USER_FILES_VG="lvm-1"
# Adjust to match the logical volume containing your user files directory
USER_FILES_LV="nc-files"
# Arbitrary name for the (temporary) lvm snapshot
USER_FILES_SNAPSHOT_NAME="nc-backup-files"

# Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VG="lvm-1"
# Adjust to match the logical volume name containing your nc installation directory
NC_INSTALL_LV="nextcloud"
# Arbitrary name for the (temporary) lvm snapshot
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"

# Note: for classic (non-thin) volumes you must also reserve space for the
# snapshot, e.g. with --size 10G; thin volumes don't need this
lvcreate -s \
  --name "${USER_FILES_SNAPSHOT_NAME}" \
  "${USER_FILES_VG}/${USER_FILES_LV}"
lvcreate -s \
  --name "${NC_INSTALL_SNAPSHOT_NAME}" \
  "${NC_INSTALL_VG}/${NC_INSTALL_LV}"

Taking a database snapshot

To create our database snapshot, we will keep things simple and use the trusty SQL dump utilities available for our reference databases MariaDB and PostgreSQL. An SQL dump is basically a set of SQL instructions that can be used to recreate the database in its current state.

Below you will find code snippets for creating those dumps:

PostgreSQL
SQL_DUMP_TARGET_DIR=/tmp/nc_backup/

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=5432
# pg_dump reads the password from ~/.pgpass (format host:port:db:user:password)
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "$HOME/.pgpass"
mkdir -p "$SQL_DUMP_TARGET_DIR"
pg_dump \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  --username="${DB_USER}" \
  "${DB_NAME}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
MariaDB / MySQL

SQL_DUMP_TARGET_DIR=/tmp/nc_backup/

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=3306
# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "$HOME/.my.cnf"
mkdir -p "$SQL_DUMP_TARGET_DIR"
mysqldump \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  --single-transaction \
  --skip-lock-tables \
  "${DB_NAME}" > "${SQL_DUMP_TARGET_DIR}/db.sql"

Note: We need the --single-transaction parameter to ensure a "real" snapshot (a single-transaction copy, to be precise) that is unaffected by ongoing writes to the DB. We also add --skip-lock-tables to improve performance (not locking tables is okay here because we're only reading from the DB).

Now that we can create both filesystem snapshots and an SQL dump, we are capable of creating a consistent point-in-time snapshot of our Nextcloud instance with zero downtime (well, actually three synchronized snapshots, but we'll get to that). Just make sure to either trigger all three operations at the same time or to perform the filesystem snapshots first (since they are nearly instant, that's practically the same thing). It's important that all three snapshots refer to the same point in time.

Merging the snapshots into one directory

Right now, we have three individual snapshots lying around. They might even be located on separate disks. For the next steps, we want one aggregated view of them, which we will achieve using bind mounts for btrfs (or other filesystems) and regular mounts for LVM. Bind mounts are a Linux feature that allows mounting one directory at an additional path in the filesystem.

So, using the same variables as above, the code looks as follows:

BTRFS
mkdir -p "$SQL_DUMP_TARGET_DIR/nextcloud" "$SQL_DUMP_TARGET_DIR/nc_files"
mount --bind -o ro \
  "$NC_INSTALL_SNAPSHOT_PATH" \
  "$SQL_DUMP_TARGET_DIR/nextcloud"
mount --bind -o ro \
  "$USER_FILES_SNAPSHOT_PATH" \
  "$SQL_DUMP_TARGET_DIR/nc_files"
LVM

mkdir -p "$SQL_DUMP_TARGET_DIR/nextcloud" "$SQL_DUMP_TARGET_DIR/nc_files"
# Activate the LVM snapshot so we can mount it
lvchange -ay -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
# Mount the snapshot read-only inside $SQL_DUMP_TARGET_DIR with ownership forced
# to the user Nextcloud runs as (drop the uid/gid options if the snapshot's
# filesystem, e.g. ext4, does not support them)
mount -o "ro,uid=33,gid=33" \
  "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" \
  "${SQL_DUMP_TARGET_DIR}/nc_files"
# Activate the LVM snapshot so we can mount it
lvchange -ay -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
mount -o "ro,uid=33,gid=33" \
  "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" \
  "${SQL_DUMP_TARGET_DIR}/nextcloud"

The result is a directory (/tmp/nc_backup) that contains all filesystem snapshots and the SQL dump:

/tmp/nc_backup
├── nextcloud  # nextcloud installation directory
├── nc_files   # user files directory
└── db.sql     # sql dump of the database
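Before handing this directory to the backup tool, a cheap sanity check that all three parts are actually present can save you from uploading incomplete snapshots. A minimal sketch, using a temporary demo directory in place of the real merge point:

```shell
# Demo stand-in for the real merge directory (/tmp/nc_backup in this guide)
SQL_DUMP_TARGET_DIR="$(mktemp -d)"
mkdir -p "${SQL_DUMP_TARGET_DIR}/nextcloud" "${SQL_DUMP_TARGET_DIR}/nc_files"
touch "${SQL_DUMP_TARGET_DIR}/db.sql"

# Verify that every expected part of the snapshot is present
for part in nextcloud nc_files db.sql; do
    [ -e "${SQL_DUMP_TARGET_DIR}/${part}" ] || { echo "missing: ${part}" >&2; exit 1; }
done
echo "snapshot directory complete"
```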

Step 2: Providing Backup Storage

Choosing our remote storage location

As we have established, our backups must not be affected by a number of local hazards, including hardware failure and fire. Therefore, we need to store them somewhere else. Because we also want ransomware protection, we need a storage type that supports one of two features:

  1. Object locking - objects cannot be deleted or modified until a configured retention period has expired.
  2. Restricted access permissions - the credentials used for writing backups cannot delete or overwrite existing data.

These features (which I will explain in more detail later) will allow us to prevent attackers (e.g. ransomware) from destroying our backups.

In this guide, we will focus on object locking to ensure ransomware protection. If you want to use restricted access permissions, please refer to the Kopia documentation.

Because we need those features, we can't use many of the storage options supported by Kopia and are left with so-called "object storage" - a special kind of storage often found in cloud computing. Apart from the big cloud providers, this kind of storage is so ubiquitous today that you will find it offered by numerous providers, and you can even self-host it. Because we will encrypt the backups, you don't need a high level of trust in your provider (apart from trust in their ability not to lose your data, of course). Here are some options:

*Note: All options marked with S3 compatible will share the same code snippets under the name "S3 compatible" going forward.

Setting up one of the self-hosting options is out of scope for this guide; however, I will provide instructions for AWS S3, Google Cloud Storage and MinIO going forward (maybe I'll add Ceph at a later point).

For reference though: my personal storage server is a Raspberry Pi 5 (8 GB) running MinIO, coupled with a JBOD enclosure containing four 8 TB disks.

Configuring a bucket

Now that you have chosen (or become!) a storage provider, you need to create a bucket (comparable to the concept of directories in traditional file systems, although buckets can't be nested). If you want object locking, make sure it is supported by your provider and enabled for your bucket (with most providers, object locking must be enabled when the bucket is created).

Now you need to create credentials for your bucket. Depending on your storage provider, this can look different.

S3 compatible

For S3 compatible providers, you need to create a user, a policy, and potentially an access key.

Assuming your bucket is named kopia, here is what a policy could look like:

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Effect": "Allow",
   "Action": [
    "s3:*"
   ],
   "Resource": [
    "arn:aws:s3:::kopia",
    "arn:aws:s3:::kopia/*"
   ]
  },
  {
   "Effect": "Deny",
   "Action": [
    "s3:DeleteBucket",
    "s3:DeleteBucketPolicy"
   ],
   "Resource": [
    "arn:aws:s3:::kopia",
    "arn:aws:s3:::kopia/*"
   ]
  }
 ]
}

To apply the policy (using the MinIO client mc, packaged as mcli on some distributions):

  1. Save the policy as kopiawriter.json
  2. Edit it to match your bucket name
  3. Execute the following commands:
# Configure your S3 storage
mc alias set backupserver <HOSTNAME> <ACCESS_KEY> <SECRET_KEY>
# Create a new user
mc admin user add backupserver kopia <kopia-login-secret>
# Add the policy to your S3 server
mc admin policy add backupserver kopiawriter kopiawriter.json
# Attach the policy to the kopia user
mc admin policy set backupserver kopiawriter user=kopia

(On recent MinIO releases, the last two subcommands have been renamed to mc admin policy create and mc admin policy attach backupserver kopiawriter --user kopia.)

Now you can access the bucket with access key kopia and secret access key <kopia-login-secret>.

Google Cloud Storage

If you want to use Google Cloud Storage, I recommend setting up a service account and granting it access to your storage bucket. Finally, create a service account key for that account, which you will later use to connect the storage.

For this guide, I won't go into more detail on those steps. Please follow the linked official documentation instead.

Step 3: Connecting our Storage

By now, we have a snapshot of our instance and we have some remote storage for our backups. Now let's address getting our backup there.

Introducing Kopia

To turn our instance snapshot into a backup that ticks all the boxes from our backup criteria, we will need a specialized backup tool. There are a number of options available, notably Restic, Borg and Kopia, which should all cover most of our needs. Because I have the most experience with it and use it for my own backups, we're going with Kopia in this guide (feel free to supply information on how to do the same with another tool and I'll consider adding it).

Why do we need a specialized tool? While we could implement some of its features ourselves, this would be error-prone and very complex. Some features, like error correction and ransomware protection, are especially infeasible to solve by hand.

For which features do we use Kopia? Notable features of Kopia that we rely on are:

  1. client-side encryption
  2. compression and deduplication
  3. retention management
  4. ransomware protection via object locks
  5. error correction

Before proceeding, please make sure you have Kopia installed on your server as per the installation instructions.

Writing the backup

Now that we have our backup tool ready, we're going to take our instance snapshot and write it to our object storage.

Let's first set up a Kopia repository.

S3 compatible

Use the following commands to initialize a Kopia repository. Adjust the variables ENDPOINT, BUCKET_NAME, ACCESS_KEY, SECRET_ACCESS_KEY and PREFIX according to your S3 storage.

For all available parameters, refer to the Kopia documentation.

ENDPOINT="s3.amazonaws.com" # The endpoint of your S3 compatible server (including port, excluding protocol)
BUCKET_NAME="my-backup-bucket" # The name of your S3 bucket
ACCESS_KEY="my-s3-user" # Your access key - could be your user name or an ID attached to a specific credential (depending on your provider and configuration)
SECRET_ACCESS_KEY="my-secret-s3-password" # The secret part of your access key - could be your user password or a secret attached to a specific credential (depending on your provider and configuration)
PREFIX="/" # Path prefix for all objects written by Kopia
kopia repository create s3 \
  --endpoint "$ENDPOINT" \
  --bucket "$BUCKET_NAME" \
  --access-key "$ACCESS_KEY" \
  --secret-access-key "$SECRET_ACCESS_KEY" \
  --prefix "$PREFIX" \
  --retention-mode governance # Required so that object locks can be applied

Google Cloud Storage

Use the following commands to initialize a Kopia repository. Adjust the variables BUCKET_NAME, SERVICE_ACCOUNT_CREDENTIALS_FILE and PREFIX according to your setup.

For all available parameters, refer to the Kopia documentation.

BUCKET_NAME="my-backup-bucket" # The name of your Google bucket
SERVICE_ACCOUNT_CREDENTIALS_FILE="/path/to/credentials.json" # You can skip the --credentials-file parameter to use your default google credentials
PREFIX="/" # Path prefix for all objects written by Kopia
kopia repository create gcs \
  --bucket "$BUCKET_NAME" \
  --credentials-file "${SERVICE_ACCOUNT_CREDENTIALS_FILE}" \
  --prefix "$PREFIX"

Now we need to actually enable the use of object locks in Kopia:

kopia maintenance set --extend-object-locks true

And adjust our Kopia global policy (we could also set policies per path, but since we only back up one directory, the global policy is fine). Here we set up how many backups to retain and the type of compression to use. You can find the full list of parameters in the Kopia documentation.

kopia policy set --global \
  --compression zstd \
  --compression-min-size 100K \
  --keep-annual 2 \
  --keep-monthly 24 \
  --keep-weekly 8 \
  --keep-daily 14 \
  --keep-hourly 48
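As a quick sanity check on these numbers: one snapshot can satisfy several retention buckets at once (today's snapshot is simultaneously the newest hourly, daily and weekly one), so the policy above keeps at most the sum of all buckets:

```shell
# Upper bound on retained snapshots for the policy above;
# the real count is lower because the buckets overlap
keep_annual=2; keep_monthly=24; keep_weekly=8; keep_daily=14; keep_hourly=48
echo $((keep_annual + keep_monthly + keep_weekly + keep_daily + keep_hourly))  # → 96
```

With deduplication and zstd compression, even ~96 retained snapshots usually cost far less storage than 96 full copies.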

All set! We're finally ready to actually transfer the snapshot to the backup storage:

kopia snapshot create \
  --parallel=4 \
  --tags "job:nc-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes" \
  "${SQL_DUMP_TARGET_DIR}"

Kopia does a lot of things for us. Apart from what we have configured explicitly (retention, encryption, compression), we get very efficient deduplication, meaning we only need space for data that is actually new. That's also why we don't compress our database dump ourselves - this way we can make use of the more efficient compression and deduplication Kopia provides.
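To make the deduplication effect tangible, here is a toy experiment. Kopia actually uses content-defined chunking and its own storage format; fixed 64 KiB chunks and sha256sum merely illustrate the idea that a second, slightly changed copy of a file contributes almost no new data:

```shell
# Create a 1 MiB file of random data and a copy with a small change appended
a="$(mktemp)"; b="$(mktemp)"
head -c 1048576 /dev/urandom > "$a"
cp "$a" "$b"
echo "small change" >> "$b"

# Hash both files in fixed 64 KiB chunks (a crude stand-in for a chunker)
hashes="$( { split -b 65536 --filter='sha256sum' "$a"; split -b 65536 --filter='sha256sum' "$b"; } | awk '{print $1}')"
total="$(echo "$hashes" | wc -l)"
unique="$(echo "$hashes" | sort -u | wc -l)"
echo "chunks: ${total}, unique chunks: ${unique}"  # → chunks: 33, unique chunks: 17
```

The two files together span 33 chunks, but only 17 distinct ones need to be stored - the 16 chunks shared with the first file are free.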

One particularly nice property of Kopia backups is that a partially transferred backup aborted due to e.g. network issues will be resumed on the next attempt: Kopia keeps all data blocks that were already transferred and, thanks to deduplication, never transfers the same block twice. In other words: if our initial backup needs 10 attempts because of a large amount of data, we still make progress on every attempt and end up with a complete backup.

Step 4: Cleanup

Lastly, we can clean up the snapshots, since we don't need them anymore. Here's a script snippet to do that, depending on the filesystem used:

BTRFS
# Unmount our --bind mounts
umount "${SQL_DUMP_TARGET_DIR}/nextcloud"
umount "${SQL_DUMP_TARGET_DIR}/nc_files"

# Remove the snapshot directory (including the db.sql)
rm -rf "${SQL_DUMP_TARGET_DIR}"
# Remove the filesystem snapshots
btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"
btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"
LVM

# Unmount our snapshot mounts
umount "${SQL_DUMP_TARGET_DIR}/nextcloud"
umount "${SQL_DUMP_TARGET_DIR}/nc_files"

# Remove the snapshot directory (including the db.sql)
rm -rf "${SQL_DUMP_TARGET_DIR}"
# Deactivate and remove the filesystem snapshots
lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

Putting it all together

Now that we know how to create a backup, let's automate the process and run it on a schedule. For your convenience, here is a script that you can drop into /etc/cron.hourly/ or use in a systemd service unit with a corresponding systemd timer.

Setup Kopia Repository

If you haven't already, execute the following snippet (as root) to configure your Kopia repository. Select the type of remote repository that matches your setup.

S3 compatible

After configuring the options to match your backup storage setup, run this script snippet as root to set up your Kopia repository.

# You will be asked for your access key and access key secret.
# Your access key could be your user name or an ID attached to a specific credential (depending on your provider and configuration).
# The secret could be your user password or a secret attached to a specific credential (depending on your provider and configuration).
read -rsp 'Enter S3 Access Key ID: ' ACCESS_KEY \
  && read -rsp $'\n''Enter S3 Secret Access Key: ' SECRET_ACCESS_KEY \
  && kopia repository create s3 \
  --endpoint "[[ENDPOINT]]" \
  --bucket "[[BUCKET_NAME]]" \
  --access-key "${ACCESS_KEY?}" \
  --secret-access-key "${SECRET_ACCESS_KEY?}" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true
Google Cloud Storage

After configuring the options to match your backup storage setup, run this script snippet as root to set up your Kopia repository.

Google Default Credentials
kopia repository create gcs \
  --bucket "[[BUCKET_NAME]]" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true
Google Service Account key file

kopia repository create gcs \
  --bucket "[[BUCKET_NAME]]" \
  --credentials-file "[[GOOGLE_CREDENTIALS_FILE]]" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true

Setup the backup script

Backup Script

Next, copy or download the backup script, adjust the configuration section at the top (mostly paths of your Nextcloud setup), and run it as root. The backup script will use your default Kopia repository (which you initialized in the previous step).

Note: Make sure to save the script with permissions 0700 or 0500 to prevent other users from reading the credentials inside! (Or move the credentials outside the script, e.g. into systemd credentials.)
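The install utility (already used above for .pgpass/.my.cnf) sets the mode in the same step as the copy, which avoids a window where the file exists with default permissions. A small sketch using a temporary target path:

```shell
# Copy a script into place with mode 0700 in a single step
tmpdir="$(mktemp -d)"
printf '#!/bin/bash\necho "backup placeholder"\n' > "${tmpdir}/nextcloud-backup.sh"
install -m 0700 "${tmpdir}/nextcloud-backup.sh" "${tmpdir}/installed-backup.sh"
stat -c %a "${tmpdir}/installed-backup.sh"  # → 700
```

For a real deployment, the target would be e.g. /etc/cron.hourly/ or a location referenced by your systemd service.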

nextcloud-backup.sh (BTRFS, MariaDB / MySQL variant):
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the btrfs subvolumes backing Nextcloud's installation directory and user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the user files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For postgresql, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")

# Setup any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"
USER_FILES_FSTYPE="btrfs"

# Mount points for LVM snapshots - unused in the btrfs variant, but referenced during cleanup
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"

### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"

# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -f "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}

## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
# (drop the uid/gid options if mount rejects them for your filesystem)
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill %% 2>/dev/null ||: # stop the background tail
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
1#!/bin/bash
2
3set -euo pipefail
4
5### ========== ###
6# Procedure
7# - The script will create snapshots of the lvm root volume and the btrfs subvolume backing Nextcloud's user files. Then it will start an sqldump of the database
8# - Afterwards, all 3 snapshots will be mounted/copied in the same parent directory and uploaded to a kopia repository
9### ========== ###
10
11### CONFIGURATION
12
13# file for writing logs. Can be left empty to only log to stdout
14LOG_FILE="/var/log/nc-backup.log"
15
16## Nextcloud installation directory config
17# Path to the btrfs subvolume containing your NC installation directory
18NC_INSTALL_VOLUME="/var/www/nextcloud"
19# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
20NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
21# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
22NC_INSTALL_RELATIVE_PATH="/"
23
24## Nextcloud files directory config
25# Path to the btrfs subvolume containing your NC files directory
26USER_FILES_VOLUME="/mnt/data/ncdata"
27# Path where the (temporary) btrfs snapshot for the NC files directory should be created
28USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
29# DB dump config
30# Path where to temporarily store database dump (should have enough free space)
31SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
32# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password cfg ...)
33# - If your DB is run inside docker, just prepend the command with whatever is needed to run a command inside the container,
34# for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
35# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slow downs for your system) to avoid side effects.
36# See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
37# - For postgresql, no adjustemnts are needed
38
39# Configure the connection options for your database:
40DB_USER="nextcloud"
41DB_NAME="nextcloud"
42DB_PASSWORD="password"
43DB_HOST="127.0.0.1"
44DB_PORT="5432"
45
46
47# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
48PGPASS_PATH="${HOME}/.pgpass"
49install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
50SQLDUMP_CMD=(pg_dump "${DB_NAME}")

# Set up any tags that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="btrfs"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Optional extra mountpoints; left empty by default, in which case they are skipped
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    # Without arguments, log every line read from stdin
    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
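The bind-mount lines in these scripts join each snapshot path with the configured relative path using bash's `${var#/}` prefix-strip expansion, which removes a leading slash so the two parts can be concatenated without producing a double slash. A standalone illustration (the paths are made up):

```shell
#!/usr/bin/env bash
# ${var#/} strips a single leading "/" from the value, so joining
# "<snapshot path>/<relative path>" never produces a double slash.
SNAPSHOT_PATH="/var/www/.nc_snapshot"

RELATIVE_PATH="/"
echo "${SNAPSHOT_PATH}/${RELATIVE_PATH#/}"    # -> /var/www/.nc_snapshot/

RELATIVE_PATH="/nextcloud"
echo "${SNAPSHOT_PATH}/${RELATIVE_PATH#/}"    # -> /var/www/.nc_snapshot/nextcloud
```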
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the btrfs subvolume backing Nextcloud's installation directory and of the LVM volume backing Nextcloud's user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files
USER_FILES_LV="nc-files" # Adjust to match the name of the logical volume containing your nc user files
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB is run inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")
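# A quick way to sanity-check the configured dump command before the first scheduled run:
#   "${SQLDUMP_CMD[@]}" | head -n 5
# If the credentials work, this prints the first few header lines of the dump.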


# Set up any tags that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Optional extra mountpoints; left empty by default, in which case they are skipped
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    # Without arguments, log every line read from stdin
    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -f "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    # Activate the snapshot volume (-K also activates volumes that have the activation skip flag set)
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log
mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
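The PID-file guard at the top of these scripts can be reduced to a few lines; a minimal standalone sketch, using `kill -0` as a portable liveness probe equivalent to the scripts' `ps -p` check:

```shell
#!/usr/bin/env bash
# Write our own PID to a file, then check whether the recorded process
# is still alive (kill -0 sends no signal, it only tests deliverability).
pidfile="$(mktemp)"
echo "$$" > "$pidfile"

if kill -0 "$(cat "$pidfile")" 2>/dev/null
then
    echo "running"    # an earlier instance (here: ourselves) is still alive
else
    echo "stale"
fi

rm "$pidfile"
```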
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the btrfs subvolume backing Nextcloud's installation directory and of the LVM volume backing Nextcloud's user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files
USER_FILES_LV="nc-files" # Adjust to match the name of the logical volume containing your nc user files
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB is run inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"

# pg_dump will read the password from ~/.pgpass; host, port and user are passed on the command line
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" "${DB_NAME}")

# Set up any tags that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Optional extra mountpoints; left empty by default, in which case they are skipped
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    # Without arguments, log every line read from stdin
    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -f "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    # Activate the snapshot volume (-K also activates volumes that have the activation skip flag set)
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log
mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
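The `log` helper used throughout these scripts prefixes every message with the shell's PID and an ISO-8601 timestamp, and tees it into `LOG_FILE` when one is configured. A simplified, self-contained sketch of the same idea:

```shell
#!/usr/bin/env bash
# Simplified version of the scripts' log() helper: prefix each message
# with "$<pid>><iso-date>=>" and append it to LOG_FILE if one is set.
LOG_FILE="$(mktemp)"

log() {
    local fwd=(cat)
    [[ -z "$LOG_FILE" ]] || fwd=(tee -a "$LOG_FILE")
    echo "\$$$>$(date -Is)=>" "$@" | "${fwd[@]}"
}

log "hello world"
grep -c "hello world" "$LOG_FILE"   # the line also landed in the log file
```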
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the LVM volume backing Nextcloud's installation directory and of the btrfs subvolume backing Nextcloud's user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the name of the logical volume containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB is run inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")


# Set up any tags that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nextcloud"

USER_FILES_FSTYPE="btrfs"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Optional extra mountpoints; left empty by default, in which case they are skipped
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    # Without arguments, log every line read from stdin
    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -f "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    # Activate the snapshot volume (-K also activates volumes that have the activation skip flag set)
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log
mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
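The final `cat "${kopia_logfile%.log}"*.log` in these scripts strips the `.log` suffix with bash's `%` suffix-removal expansion, so the glob also catches any sibling log files written under the same base name. A standalone illustration (the file names are made up):

```shell
#!/usr/bin/env bash
# ${var%.log} removes the trailing ".log", widening the subsequent glob
# so it matches every log file sharing the same base name.
dir="$(mktemp -d)"
logfile="$dir/kopia-snapshot.ab1cd.log"
touch "$logfile" "$dir/kopia-snapshot.ab1cd.0.log"

echo "${logfile%.log}"             # base name without the ".log" suffix
ls "${logfile%.log}"*.log | wc -l  # both files match the widened glob
```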
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the LVM volume backing Nextcloud's installation directory and of the btrfs subvolume backing Nextcloud's user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the name of the logical volume containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB is run inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"

# pg_dump will read the password from ~/.pgpass; host, port and user are passed on the command line
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" "${DB_NAME}")

# Set up any tags that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nextcloud"

USER_FILES_FSTYPE="btrfs"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Optional extra mountpoints; left empty by default, in which case they are skipped
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(cat)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    # Without arguments, log every line read from stdin
    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "$message" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -f "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    # Activate the snapshot volume (-K also activates volumes that have the activation skip flag set)
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"
[[ -z "$USER_FILES_MOUNT" ]] || mkdir -p "$USER_FILES_MOUNT"
[[ -z "$NC_INSTALL_MOUNT" ]] || mkdir -p "$NC_INSTALL_MOUNT"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"
180
181# Snapshot disks
182log "Create snapshots..."
183
184lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log
185mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log
186
187
188btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log
189
190log "Done."
191
192# Dump DB
193log "Create DB dump..."
194"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
195log "Done."
196
197# Bind mount all individual snapshots into a common parent directory
198log "Create bind mounts..."
199mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
200mount --bind -o ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID} "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
201mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
202log "Done."
203
204# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
205log "Create kopia snapshot..."
206# Temporary log file as a workaround to pass kopia logs to our log function
207kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
208tail -f "$kopia_logfile" &
209kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
210cat "${kopia_logfile%.log}"*.log | log > /dev/null
211log "Done."

Variant: MariaDB / MySQL database, with both the installation directory and the user files on LVM:

#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the LVM volumes backing Nextcloud's installation directory and user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied in the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files directory
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your nc user files directory
# Adjust this if the user files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For postgresql, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")


# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nextcloud"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT:-}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT:-}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log
mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log
mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."

Variant: PostgreSQL database, with both the installation directory and the user files on LVM:

#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create snapshots of the LVM volumes backing Nextcloud's installation directory and user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied in the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files directory
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your nc user files directory
# Adjust this if the user files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, cfg ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For postgresql, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"


# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump --host="${DB_HOST}" --port="${DB_PORT}" --username="${DB_USER}" "${DB_NAME}")

# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nextcloud"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/mnt/nc-backup-lvm-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    [[ -z "${USER_FILES_MOUNT:-}" ]] || umount "$USER_FILES_MOUNT"
    [[ -z "${NC_INSTALL_MOUNT:-}" ]] || umount "$NC_INSTALL_MOUNT"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint
    snapshot="${1?}"
    mountpoint="${2?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log
mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log
mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o "ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o "ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o "ro,uid=root,gid=root" "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."

Running regular backups

Now set up the script as a cron job. If you want hourly backups (that's how I'm running it), you need to do nothing more than drop it into /etc/cron.hourly/.

If you want a custom schedule, save it in a separate location instead (e.g. /usr/sbin/) and create an entry in /etc/crontab, e.g.:

# minute hour day month weekday user command
  30     */4  */2   *     *     root bash /usr/sbin/nextcloud-backup.sh

which will run the script every 4 hours on every 2nd day, at the half hour mark. (Note that entries in /etc/crontab require the extra user field shown above.)

Make sure to adjust the script permissions so it is executable and not readable by other users:

chmod 0500 /usr/sbin/nextcloud-backup.sh
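If you prefer systemd timers over cron, the hourly schedule can also be expressed with a service/timer pair. This is a minimal sketch, not part of the original setup; the unit names and the path /usr/sbin/nextcloud-backup.sh are assumptions to adapt:

```ini
# /etc/systemd/system/nc-backup.service (assumed unit name)
[Unit]
Description=Nextcloud backup

[Service]
Type=oneshot
ExecStart=/usr/bin/bash /usr/sbin/nextcloud-backup.sh

# /etc/systemd/system/nc-backup.timer (assumed unit name)
[Unit]
Description=Run the Nextcloud backup every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with "systemctl enable --now nc-backup.timer". Persistent=true makes systemd catch up on a missed run after a reboot, which plain cron does not do.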

Restoring a backup

Now, let's have a look at restoring a backup:

Reuse or setup a Nextcloud server

In order to restore a snapshot, you need a Nextcloud server that is working fine (apart from missing your data). So either set one up, or reuse your existing server if you're confident that only the backed-up data (i.e. the Nextcloud installation directory, the user files directory or the database) was damaged. This server needs to match your previous one (at the time of the backup) as closely as possible (e.g. the database and PHP versions should be the same or at least compatible).

Restore a snapshot

The first thing to do is to have a look at the list of snapshots and restore one of them. Instead of using the commands below, we could also use the Kopia GUI.

# List the snapshots and find one that you want to restore
kopia snapshot list --all | less
# If we're not on a fresh server, remove the existing directories that will be restored
rm -rf /var/www/nextcloud/* /var/www/nextcloud/.[!.]*
rm -rf /mnt/data/ncdata/* /mnt/data/ncdata/.[!.]*
# Restore the snapshot to the original locations (see https://kopia.io/docs/reference/command-line/common/snapshot-restore/)
# Let's assume the id of the snapshot we want to restore is "kffbb7c28ea6c34d6cbe555d1cf80faa9"
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/nextcloud" /var/www/nextcloud
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/nc_files" /mnt/data/ncdata
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/db/db.sql" /run/nc-restore-db.sql
# Finally, let's ensure the restored directories have the correct ownership.
# Assuming your nextcloud webserver is running under uid=33, do:
chown -R 33: /var/www/nextcloud
chown -R 33: /mnt/data/ncdata

Now we need to restore the database from the restored /run/nc-restore-db.sql:

PostgreSQL:

DB_USER="postgres"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=5432
# psql will be supplied with the connection options by writing them to ~/.pgpass
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "$HOME/.pgpass"
psql --host="${DB_HOST}" --port="${DB_PORT}" --username="${DB_USER}" -d "${DB_NAME}" < "/run/nc-restore-db.sql"

MariaDB / MySQL:

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=3306
# mysql will be supplied with the db credentials by writing them to ~/.my.cnf
install -m 0600 <(echo -e "[client]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "$HOME/.my.cnf"
mysql \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  "${DB_NAME}" < "/run/nc-restore-db.sql"

That's it! If things in your setup changed, you might still have to adjust /var/www/nextcloud/config/config.php (e.g. db credentials, redis address, ...). Other than that, your backup should be restored successfully at this point.
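For reference, these are the config.php entries that most commonly need adjusting after a restore. The values below are placeholders of my own, not taken from any real setup, so substitute your own:

```php
// Excerpt from /var/www/nextcloud/config/config.php - placeholder values
'dbtype' => 'pgsql',        // or 'mysql' for MariaDB/MySQL
'dbhost' => '127.0.0.1',
'dbname' => 'nextcloud',
'dbuser' => 'nextcloud',
'dbpassword' => 'password',
// only present if you use redis:
'redis' => [
  'host' => '127.0.0.1',
  'port' => 6379,
],
```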

Conclusion

As becomes clear, producing great backups can be a bit involved. But in my opinion, they are more than worth it, because once set up, they provide a quality of sleep that's otherwise hard to achieve as a sysadmin. :D Jokes aside, having robust backups significantly reduces the worst case our service can suffer (in terms of data loss), which makes them crucial to have.

Revisiting our criteria

Let's check how well we fared in terms of our criteria from part 1 of this blog series:

As you can see, nearly all boxes are ticked! The remaining box will be addressed in the upcoming 3rd part of this series, where I talk about backup monitoring. So stay tuned, and consider following me on Mastodon or subscribing to the RSS feed in your feed reader to make sure you won't miss it. :)