
Our backup approach

In the previous post, I presented my criteria for good backups and outlined some implementation approaches. In this post, I'll provide a step-by-step description of setting up a backup process for a Nextcloud server that fulfills all those criteria.

Feel free to pick, choose and adjust parts of my approach to fit your own system.

You can also skip directly to the final code if you're fluent in bash and don't care for the reasoning behind the individual steps.

Step 1: Reliable Zero Downtime Snapshots

A typical Nextcloud backup will likely look like this:

  1. Turn maintenance mode on
  2. dump/backup the database
  3. copy the user files and the nextcloud installation directory to the backup target location
  4. Turn maintenance mode back off
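Put together as a script, that naive procedure could look roughly like this (a sketch only - the paths, database name and backup target below are placeholder examples, not part of any official setup):

```shell
# Sketch of the naive backup procedure above (incurs downtime!).
# All paths and names are hypothetical examples - adjust them to your setup.
naive_nextcloud_backup() {
  local occ="/var/www/nextcloud/occ"                # path to Nextcloud's occ CLI
  local target="/mnt/backup/nextcloud-$(date +%F)"  # example backup target
  mkdir -p "$target"

  sudo -u www-data php "$occ" maintenance:mode --on           # 1. block writes
  mysqldump --single-transaction nextcloud > "$target/db.sql" # 2. dump the DB
  rsync -a /var/www/nextcloud /mnt/data "$target"             # 3. copy files
  sudo -u www-data php "$occ" maintenance:mode --off          # 4. back online
}
```

The whole service is unavailable between steps 1 and 4 - which is exactly what the rest of this post avoids.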

Enabling maintenance mode ensures that everything we include in our backup is in sync and thus represents a consistent state of our service at a specific point in time. Otherwise, we might run into issues when restoring the backup - for instance, files being present in the filesystem but not known to the database (and thus not visible to Nextcloud).

Unfortunately, we have ruled out using maintenance mode, because we don't want any downtime during our backup process, so we need another solution.

To accomplish consistent zero downtime backups, we will create synchronized snapshots of everything that needs to be included in a typical Nextcloud backup: the users' files, the installation directory and the database. For this purpose, we will rely on the snapshot feature of our filesystem and on so-called single-transaction copies of our database. This means that we must choose a database and filesystems for our server that support those features.

Popular modern filesystems and volume managers like BTRFS, ZFS, XFS and LVM, as well as many VM storage solutions, provide snapshot capabilities. Filesystem snapshots are read-only copies of a directory or volume that are usually created nearly instantly and take little to no additional storage space. This guide will give examples for BTRFS and LVM (more might be added in the future).

Most modern transactional databases provide options for creating single-transaction copies of the database. I will give examples of how to achieve this for MariaDB/MySQL and PostgreSQL.

Is any of this applicable to containerized deployments (e.g. Nextcloud All-in-One, which is based on Docker)?

Yes, you just need to make sure that the volumes which contain your Nextcloud installation directory and your user files are backed by compatible storage - e.g. mounting a btrfs subvolume under the respective paths.

You will also have to prepend some of the commands in this guide with whatever is necessary to run a command in your container. For example, for Docker, you would need: docker exec <name-of-your-container> <original-command>. Other commands, especially filesystem operations, always need to be executed on the host system.
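To keep the rest of the snippets readable, you could define a small wrapper once and use it for every in-container command (the container name nextcloud-app is a made-up example):

```shell
# Hypothetical helper: run a command inside the Nextcloud container.
# Filesystem operations (btrfs, lvcreate, mount) still run on the host!
in_container() {
  docker exec nextcloud-app "$@"
}

# Illustrative usage:
#   in_container mysqldump --single-transaction nextcloud
```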

Taking filesystem snapshots

Let's walk through the process of creating filesystem snapshots for our user files directory and Nextcloud installation directory.

BTRFS LVM

This requires the Nextcloud installation directory (assumed to be at /var/www/nextcloud) and the user files directory (assumed to be at /mnt/data) to each be a BTRFS subvolume (created with btrfs subvolume create <path>).

# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME=/mnt/data/nc_files
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"

# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME=/var/www/nextcloud
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
btrfs subvolume snapshot \
  "$USER_FILES_VOLUME" "$USER_FILES_SNAPSHOT_PATH"
btrfs subvolume snapshot \
  "$NC_INSTALL_VOLUME" "$NC_INSTALL_SNAPSHOT_PATH"

This requires the Nextcloud installation directory and the user files directory to each be an LVM logical volume.

# Adjust to match the LVM volume group containing your nc user files directory
USER_FILES_VG="lvm-1"
# Adjust to match the logical volume name containing your nc user files directory
USER_FILES_LV="nc-files"
# Arbitrary name for the (temporary) lvm snapshot
USER_FILES_SNAPSHOT_NAME="nc-backup-files"

# Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VG="lvm-1"
# Adjust to match the logical volume name containing your nc installation directory
NC_INSTALL_LV="nextcloud"
# Arbitrary name for the (temporary) lvm snapshot
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
lvcreate -s \
  --name "${USER_FILES_SNAPSHOT_NAME}" \
  "${USER_FILES_VG}/${USER_FILES_LV}"
lvcreate -s \
  --name "$NC_INSTALL_SNAPSHOT_NAME" \
  "${NC_INSTALL_VG}/${NC_INSTALL_LV}"

Taking a database snapshot

For creating our database snapshot, we will keep things simple and use the trusty SQL dump utilities available for our reference databases MariaDB and PostgreSQL. An SQL dump is basically a set of SQL instructions that can be used to recreate the database in its current state.

Below you will find code snippets for creating those dumps:

PostgreSQL MariaDB / MySQL
SQL_DUMP_TARGET_DIR=/tmp/nc_backup/

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=5432
# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "$HOME/.pgpass"
pg_dump \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  --username="${DB_USER}" \
  "${DB_NAME}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
SQL_DUMP_TARGET_DIR=/tmp/nc_backup/

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=3306
# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "$HOME/.my.cnf"
mysqldump \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  --single-transaction \
  --skip-lock-tables \
  "${DB_NAME}" > "${SQL_DUMP_TARGET_DIR}/db.sql"

Note: We need the parameter --single-transaction to ensure a "real" snapshot (a single-transaction copy, to be precise) that is unaffected by ongoing writes to the DB. We also add --skip-lock-tables to improve performance (not locking tables is okay here because we're just reading from the DB).

Now that we can create both filesystem snapshots and an SQL dump, we are capable of creating a consistent point-in-time snapshot of our Nextcloud instance with zero downtime (well, actually three synchronized snapshots, but we'll get to that). Just make sure to either trigger all three operations at the same time or perform the filesystem snapshots first (since they are nearly instant, that's practically the same thing). It's important that all three snapshots refer to the same point in time.

Merging the snapshots into one directory

Right now, we have three individual snapshots lying around. They might even be located on separate disks. For the next steps we want one aggregated view of them, which we will achieve using bind mounts for BTRFS (or other filesystems) and regular mounts for LVM. Bind mounts are a Linux feature that allows us to mount one directory at another path in our filesystem.

So, using the same variables as above, the code for this looks as follows:

BTRFS LVM
mount --bind -o ro \
  "$NC_INSTALL_SNAPSHOT_PATH" \
  "$SQL_DUMP_TARGET_DIR/nextcloud"
mount --bind -o ro \
  "$USER_FILES_SNAPSHOT_PATH" \
  "$SQL_DUMP_TARGET_DIR/nc_files"
# Activate the LVM snapshot so we can mount it
lvchange -ay -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
# Mount the snapshot read-only inside $SQL_DUMP_TARGET_DIR with ownership forced to the user that Nextcloud runs as
mount -o "ro,uid=33,gid=33" \
  "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" \
  "${SQL_DUMP_TARGET_DIR}/nc_files"
# Activate the LVM snapshot so we can mount it
lvchange -ay -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
# Mount the snapshot read-only inside $SQL_DUMP_TARGET_DIR with ownership forced to the user that Nextcloud runs as
mount -o "ro,uid=33,gid=33" \
  "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" \
  "${SQL_DUMP_TARGET_DIR}/nextcloud"

The result is a directory (/tmp/nc_backup) which looks like it contains all filesystem snapshots and the sql dump:

/tmp/nc_backup
├── nextcloud  # nextcloud installation directory
├── nc_files   # user files directory
└── db.sql     # sql dump of the database
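Before handing this directory to a backup tool, it can't hurt to verify that all three parts actually showed up. A minimal sketch (the expected sub-paths match the layout above):

```shell
# Check that the aggregated snapshot directory is complete before uploading.
check_backup_root() {
  local root="$1"
  [[ -d "$root/nextcloud" ]] || { echo "missing nextcloud/" >&2; return 1; }
  [[ -d "$root/nc_files"  ]] || { echo "missing nc_files/" >&2; return 1; }
  [[ -s "$root/db.sql"    ]] || { echo "missing or empty db.sql" >&2; return 1; }
  echo "backup root '$root' looks complete"
}
```

For example, run check_backup_root /tmp/nc_backup right before starting the transfer and abort the backup if it fails.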

Step 2: Providing Backup Storage

Choosing our remote storage location

As we have established, our backups must not be affected by a number of local hazards, including hardware failure and fire. Therefore, we need to store them somewhere else. Because we also want ransomware protection, we need a storage type with one of two options:

Either

  1. object locking - stored objects can be protected against deletion and modification for a configurable retention period, or
  2. restricted access permissions - backup credentials that are not allowed to delete or overwrite existing backups.

These features (which I will explain in more detail later) will allow us to prevent attackers (e.g. ransomware) from destroying our backups.

In this guide, we will focus on object locking to ensure ransomware protection. If you want to use restricted access permissions, please refer to the Kopia documentation.

Because we need those features, we can't go with many of the storage options offered by Kopia and are left with something called "object storage" - basically a special kind of storage often found in cloud computing. Apart from the big cloud providers, this kind of storage is so ubiquitous today that you will find it offered by numerous providers, and you can even self-host it. Because we will encrypt the backups, you don't need a high level of trust in your provider, though (apart from trust in their capability of not losing your data, of course). Here are some options:

*Note: All options marked with S3 compatible will share the same code snippets under the name "S3 compatible" going forward.

Setting up one of the self-hosting options is out of scope for this guide; however, I will provide instructions for AWS S3, Google Cloud Storage and Min-IO going forward (maybe I'll add Ceph at a later point).

For reference though: my personal storage server is a Raspberry Pi 5 (8 GB) running Min-IO, coupled with a JBOD enclosure containing four 8 TB disks.

Configuring a bucket

So now that you have chosen (or become!) a storage provider, you need to create a bucket (which is comparable to the concept of directories in traditional file systems, although buckets can't be nested). If you need object locking, make sure it is supported by your provider and enabled for your bucket.
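With the Min-IO client, for example, creating such a bucket could look like this (the alias backupserver and the bucket name kopia are examples; --with-lock is what enables object locking, and it can only be set at bucket creation time):

```shell
# Hypothetical sketch: create a bucket named "kopia" with object locking
# enabled on the S3 compatible server registered under the alias "backupserver".
create_locked_bucket() {
  mcli mb --with-lock backupserver/kopia
}
```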

Now you need to create credentials for your bucket. Depending on your storage provider, this can look different.

S3 compatible Google Cloud Storage

For S3 compatible providers you need to create a user, a policy and potentially an access key.

Assuming your bucket is named kopia, here is what a policy could look like:

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Effect": "Allow",
   "Action": [
    "s3:*"
   ],
   "Resource": [
    "arn:aws:s3:::kopia",
    "arn:aws:s3:::kopia/*"
   ]
  },
  {
   "Effect": "Deny",
   "Action": [
    "s3:DeleteBucket",
    "s3:DeleteBucketPolicy"
   ],
   "Resource": [
    "arn:aws:s3:::kopia",
    "arn:aws:s3:::kopia/*"
   ]
  }
 ]
}

To apply the policy (using the Min-IO CLI, mcli):

  1. Save the policy as kopiawriter.json
  2. Edit it to match your bucket name
  3. Now execute the following commands:
# Configure your S3 storage
mcli alias set backupserver <HOSTNAME> <ACCESS_KEY> <SECRET_KEY>
# Create a new user
mcli admin user add backupserver kopia <kopia-login-secret>
# Add the policy to your S3 server
mcli admin policy add backupserver kopiawriter kopiawriter.json
# Attach the policy to the kopia user
mcli admin policy set backupserver kopiawriter user=kopia

Now you can access the bucket with accesskey=kopia and secretaccesskey=<kopia-login-secret>.

If you want to use Google Cloud Storage, I recommend setting up a service account that you will then grant access to your google storage bucket. Finally, you will need to create a service account key for your service account that you will later use to connect the storage.

For this guide, I won't go into more detail on those steps. Please follow the linked official documentation instead.

Step 3: Connecting our Storage

By now, we have a snapshot of our instance and we have some remote storage for our backups. Now let's address getting our backup there.

Introducing Kopia

To turn our instance snapshot into a backup that ticks all the boxes from our backup criteria, we will need a specialized backup tool. There are a number of options available, notably Restic, Borg and Kopia, which should all cover most of our needs. Because I have the most experience with it and use it for my own backups, we're going with Kopia in this guide (feel free to supply information on how to do the same with another tool and I'll consider adding it).

Why do we need a specialized tool? While we could implement some of its features ourselves, this would be error-prone and very complex. Some features, like error correction and ransomware protection, are especially infeasible to implement by hand.

For which features do we use Kopia? Notable features of Kopia that we rely on are:

  1. client-side encryption of all backup data
  2. compression and deduplication
  3. retention policies for expiring old backups
  4. support for object locking (ransomware protection)
  5. error correction and verification of backup integrity

Before proceeding, please make sure you have Kopia installed on your server as per the installation instructions.

Writing the backup

Now that we have our backup tool ready, we're going to take our instance snapshot and write it to our object storage.

Let's first set up a Kopia repository.

S3 compatible Google Cloud Storage

Use the following commands to initialize a Kopia repository. Adjust the variables ENDPOINT, BUCKET_NAME, ACCESS_KEY, SECRET_ACCESS_KEY and PREFIX according to your S3 storage.

For all available parameters, refer to the kopia documentation.

ENDPOINT="s3.amazonaws.com" # The endpoint where to find your S3 compatible server (including port, excluding protocol)
BUCKET_NAME="my-backup-bucket" # The name of your S3 bucket
ACCESS_KEY="my-s3-user" # Your access key could be your user name or an ID attached to a specific credential (depending on your provider and configuration)
SECRET_ACCESS_KEY="my-secret-s3-password" # The secret part of your access key - could be your user password or a secret attached to a specific credential (depending on your provider and configuration)
PREFIX="/" # Path prefix for all objects written by Kopia
kopia repository create s3 \
  --endpoint "$ENDPOINT" \
  --bucket "$BUCKET_NAME" \
  --access-key "$ACCESS_KEY" \
  --secret-access-key "$SECRET_ACCESS_KEY" \
  --prefix "$PREFIX" \
  --retention-mode governance 

Use the following commands to initialize a Kopia repository. Adjust the variables BUCKET_NAME, SERVICE_ACCOUNT_CREDENTIALS_FILE and PREFIX according to your setup.

For all available parameters, refer to the kopia documentation.

BUCKET_NAME="my-backup-bucket" # The name of your Google bucket
SERVICE_ACCOUNT_CREDENTIALS_FILE="/path/to/credentials.json" # You can skip the --credentials-file parameter to use your default google credentials
PREFIX="/" # Path prefix for all objects written by Kopia
kopia repository create gcs \
  --bucket "$BUCKET_NAME" \
  --credentials-file "${SERVICE_ACCOUNT_CREDENTIALS_FILE}" \
  --prefix "$PREFIX" \
  --retention-mode governance 

Now we need to actually enable the use of object locks in Kopia:

kopia maintenance set --extend-object-locks true

And adjust our Kopia global policy (we could also do this per repository, but since we only have one, this is fine). Here we set up how many backups to retain and the type of compression to use. You can find the full list of parameters in the Kopia documentation.

kopia policy set \
  --compression zstd \
  --compression-min-size 100K \
  --keep-annual 2 \
  --keep-monthly 24 \
  --keep-weekly 8 \
  --keep-daily 14 \
  --keep-hourly 48

All set! We're finally ready to actually transfer the snapshot to the backup storage:

kopia snapshot create \
  --parallel=4 \
  --tags "job:nc-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes" \
  "${SQL_DUMP_TARGET_DIR}"

Kopia does a lot of things for us. Apart from the things that we have configured explicitly (like retention, encryption, compression), we get very efficient deduplication, meaning we only need space for data that is actually new. That's also why we don't compress our database dump ourselves - this way it can benefit from the more efficient compression and deduplication Kopia provides.

One particularly nice thing about Kopia backups is that a partially transferred backup that got aborted due to e.g. network issues will be picked up where it left off during the next try, because Kopia keeps all the data blocks that were already transferred and never transfers the same data block twice (thanks to deduplication). In other words: if we need 10 attempts to create our initial backup because we have a large amount of data, we will still make progress during every attempt and end up with a complete backup in the end.
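If you want an intuition for how this works: Kopia stores data content-addressed, i.e. chunks are identified by a hash of their content, and a chunk whose hash already exists in the repository is never transferred again. The following toy shell sketch illustrates just that principle (fixed-size chunks and a local directory standing in for the remote repository; real Kopia uses smarter content-defined chunking, plus encryption and indexes):

```shell
# Toy content-addressed store: "upload" only chunks not yet present.
# "$2" is a local directory standing in for the remote repository.
store_chunks() {
  local file="$1" repo="$2" new=0
  mkdir -p "$repo"
  # Split into 1 MiB chunks, then store each chunk under its SHA-256 hash.
  split -b 1M "$file" "$repo/.part."
  for part in "$repo"/.part.*; do
    local hash
    hash="$(sha256sum "$part" | cut -d' ' -f1)"
    if [[ ! -e "$repo/$hash" ]]; then
      mv "$part" "$repo/$hash"   # unknown chunk: has to be transferred
      new=$((new + 1))
    else
      rm "$part"                 # chunk already known: nothing to transfer
    fi
  done
  echo "$new"                    # report how many chunks were actually "uploaded"
}
```

Running store_chunks twice on the same file "uploads" nothing the second time - the same effect that makes retried Kopia backups resume instead of starting over.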

Step 4: Cleanup

Lastly, we can clean up the snapshots, since we don't need them anymore. Here's a script snippet to do that, depending on the filesystem used:

BTRFS LVM
# Unmount our --bind mounts
umount "${SQL_DUMP_TARGET_DIR}/nextcloud"
umount "${SQL_DUMP_TARGET_DIR}/nc_files"

# Remove the snapshot directory (including the db.sql)
rm -rf "${SQL_DUMP_TARGET_DIR}"
# Remove the filesystem snapshots
btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"
btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"
# Unmount our snapshot mounts
umount "${SQL_DUMP_TARGET_DIR}/nextcloud"
umount "${SQL_DUMP_TARGET_DIR}/nc_files"

# Remove the snapshot directory (including the db.sql)
rm -rf "${SQL_DUMP_TARGET_DIR}"
# Remove the filesystem snapshots
lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
lvremove "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
lvremove "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

Putting it all together

Now that we know how to create a backup, let's automate the process and run it on a schedule. For your convenience, here is a script that you can drop into /etc/cron.hourly/ or run from a systemd service unit with a corresponding systemd timer.
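If you prefer systemd over cron, a minimal timer setup could look like this (unit names and the script path are examples; adjust them to your setup):

```ini
# /etc/systemd/system/nc-backup.service (example name and script path)
[Unit]
Description=Nextcloud backup

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/nextcloud-backup.sh

# /etc/systemd/system/nc-backup.timer
[Unit]
Description=Hourly Nextcloud backup

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl daemon-reload && systemctl enable --now nc-backup.timer.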

Setup Kopia Repository

If you haven't done it already, execute the following snippet (as root) to configure your Kopia Repository. Select the type of remote repository that matches your setup.

S3 compatible Google Cloud Storage

After configuring the options to match your backup storage setup, run this script snippet as root to set up your Kopia repository.

# You will be asked for your access key and access key secret.
# Your access key could be your user name or an ID attached to a specific credential (depending on your provider and configuration)
# The secret part of your access key - could be your user password or a secret attached to a specific credential (depending on your provider and configuration)
read -rsp 'Enter S3 Access Key ID: ' ACCESS_KEY \
  && read -rsp $'\n''Enter S3 Secret Access Key: ' SECRET_ACCESS_KEY \
  && kopia repository create s3 \
  --endpoint "[[ENDPOINT]]" \
  --bucket "[[BUCKET_NAME]]" \
  --access-key "${ACCESS_KEY?}" \
  --secret-access-key "${SECRET_ACCESS_KEY?}" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true
After configuring the options to match your backup storage setup, run this script snippet as root to set up your Kopia repository.

Google Default Credentials Google Service Account key file
kopia repository create gcs \
  --bucket "[[BUCKET_NAME]]" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true
kopia repository create gcs \
  --bucket "[[BUCKET_NAME]]" \
  --credentials-file "[[GOOGLE_CREDENTIALS_FILE]]" \
  --prefix "[[PREFIX]]" \
  --retention-mode governance \
  && kopia maintenance set --extend-object-locks true

Setup the backup script

Backup Script

Next, configure the options to match your setup and then copy or download the backup script. Afterward, adjust the configuration section at the top of the script (mostly paths to your Nextcloud setup) and run it as root. The backup script will use your default Kopia repository (which you initialized in the previous step).

Note: Make sure to save the script with permissions 0700 or 0500 to prevent other users from reading the credentials inside! (Or move the credentials outside the script, e.g. into systemd credentials.)

S3 or S3 compatible Google cloud Storage BTRFS LVM BTRFS LVM MariaDB PostgreSQL
nextcloud-backup.sh
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create btrfs snapshots of the subvolumes backing Nextcloud's installation directory and user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the user files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, config ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For postgresql, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")

# Setup any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="btrfs"

# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"

### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Mount points for the filesystem snapshots (only used by the LVM variant; left empty for btrfs)
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# Directory for (temporarily) mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"

# Utility function for logging to both the log file and stdout
log() {
 local message

 fwd=(echo)
 if [[ -n "$LOG_FILE" ]]
 then
  fwd=(tee -a "$LOG_FILE")
 fi

 if [[ -z "${1:-}" ]]
 then
  while read -r message
  do
   echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
  done
  return
 fi

 echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
 set +eu
 echo "Cleanup..."

 rm -rf "${MYCNF_PATH}"

 umount "$_DB_BIND"
 umount "$_NC_INSTALL_BIND"
 umount "$_USER_FILES_BIND"

 [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
 [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

 rm -rf "$SQL_DUMP_TARGET_DIR"

 # Cleanup NC install dir snapshot
 btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

 # Cleanup NC files snapshot
 btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

 # Remove PID file
 rm "$_PID_FILE"
 echo "Done."
}

## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
 elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
 if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
 then
  log "Detected backup process going for over 10 hours, killing it..."
  kill -9 "$(cat "$_PID_FILE")"
 else
  log "Ongoing backup process detected. Aborting..."
  exit 0
 fi
fi

# cleanup leftovers from last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."
btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log
btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log
log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."
mount --bind -o ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID} "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
mount --bind -o ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID} "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create btrfs snapshots of the subvolumes backing Nextcloud's installation directory and user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the user files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password, config ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For postgresql, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"

# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump --host="${DB_HOST}" --port="${DB_PORT}" --username="${DB_USER}" "${DB_NAME}")

# Setup any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="btrfs"

# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"

### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Mount points for the filesystem snapshots (only used by the LVM variant; left empty for btrfs)
USER_FILES_MOUNT=""
NC_INSTALL_MOUNT=""

# Directory for (temporarily) mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"

# Utility function for logging to both the log file and stdout
log() {
 local message

 fwd=(echo)
 if [[ -n "$LOG_FILE" ]]
 then
  fwd=(tee -a "$LOG_FILE")
 fi

 if [[ -z "${1:-}" ]]
 then
  while read -r message
  do
   echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
  done
  return
 fi

 echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
 set +eu
 echo "Cleanup..."

 rm -rf "${PGPASS_PATH}"

 umount "$_DB_BIND"
 umount "$_NC_INSTALL_BIND"
 umount "$_USER_FILES_BIND"

 [[ -z "${USER_FILES_MOUNT}" ]] || umount "$USER_FILES_MOUNT"
 [[ -z "${NC_INSTALL_MOUNT}" ]] || umount "$NC_INSTALL_MOUNT"

 rm -rf "$SQL_DUMP_TARGET_DIR"

 # Cleanup NC install dir snapshot
 btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

 # Cleanup NC files snapshot
 btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

 # Remove PID file
 rm "$_PID_FILE"
 echo "Done."
}

## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
 elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
 if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
 then
  log "Detected backup process going for over 10 hours, killing it..."
  kill -9 "$(cat "$_PID_FILE")"
 else
  log "Ongoing backup process detected. Aborting..."
  exit 0
 fi
fi

# cleanup leftovers from last run if any
cleanup > /dev/null 2>&1 ||:

# Save current process id to pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
164mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$USER_FILES_MOUNT" "$NC_INSTALL_MOUNT" "$SQL_DUMP_TARGET_DIR"
165
166timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
167log "Starting Nextcloud Snapshot at $timestamp"
168
169# Snapshot disks
170log "Create snapshots..."
171
172btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log
173
174
175btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log
176
177log "Done."
178
179# Dump DB
180log "Create DB dump..."
181"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
182log "Done."
183
184# Bind mount all individual snapshots into a common parent directory
185log "Create bind mounts..."
186
187mount --bind -o ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID} "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log
188
189
190mount --bind -o ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID} "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
191
192mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
193log "Done."
194
195# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
196log "Create kopia snapshot..."
197# Temporary log file as a workaround to pass kopia logs to our log function
198kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
199tail -f "$kopia_logfile" &
200kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
201cat "${kopia_logfile%.log}"*.log | log > /dev/null
202log "Done."
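Since the script guards against overlapping runs via its PID file, it can safely be triggered on a fixed schedule. As a sketch (the script path and schedule are assumptions, adjust to your setup), a cron drop-in could look like:

```
# /etc/cron.d/nc-backup -- hypothetical path to wherever you saved the script
# Run the backup nightly at 03:30 as root; the script logs to LOG_FILE itself.
30 3 * * * root /usr/local/bin/nc-backup.sh >/dev/null 2>&1
```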
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create a btrfs snapshot of the subvolume backing Nextcloud's installation directory and an LVM snapshot of the logical volume backing the user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied under the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your nc user files
# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password config, ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")


# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/nc-backup/nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Directory for temporarily mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save the current process id to the pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Mount all individual snapshots into a common parent directory
log "Create bind mounts..."

mount --bind -o ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID} "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log

mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${_USER_FILES_BIND}" "uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" 2>&1 | log

mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
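The overlap guard at the top of the script depends on `ps -o etimes=` printing a process's elapsed runtime in seconds, padded with blanks (hence the `${elapsed// /}` cleanup before the numeric comparison). A minimal standalone sketch of that check, run against the current shell's own PID:

```shell
#!/bin/bash
set -euo pipefail

pid=$$
# etimes= prints the elapsed seconds, padded with spaces
elapsed="$(ps -o etimes= -p "$pid")"
# strip the padding the same way the backup script does
elapsed="${elapsed// /}"

if [[ "$elapsed" -gt $((10 * 3600)) ]]
then
    echo "stale"
else
    echo "running normally"
fi
```

A freshly started shell reports an elapsed time near zero, so this prints "running normally"; only a process alive for over ten hours would be flagged stale.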
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create a btrfs snapshot of the subvolume backing Nextcloud's installation directory and an LVM snapshot of the logical volume backing the user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied under the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
# Path to the btrfs subvolume containing your NC installation directory
NC_INSTALL_VOLUME="/var/www/nextcloud"
# Path where the (temporary) btrfs snapshot for the NC installation directory should be created
NC_INSTALL_SNAPSHOT_PATH="/var/www/.nc_snapshot"
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your nc user files
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your nc user files
# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password config, ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"


# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" "${DB_NAME}")

# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="btrfs"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"
USER_FILES_SNAPSHOT_PATH="/run/nc-backup/nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Directory for temporarily mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    btrfs subvolume delete "$NC_INSTALL_SNAPSHOT_PATH"

    # Cleanup NC files snapshot
    umount "${USER_FILES_SNAPSHOT_PATH}"
    rm -rf "${USER_FILES_SNAPSHOT_PATH}"
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save the current process id to the pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

btrfs subvolume snapshot -r "${NC_INSTALL_VOLUME}" "${NC_INSTALL_SNAPSHOT_PATH}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Mount all individual snapshots into a common parent directory
log "Create bind mounts..."

mount --bind -o ro,uid=${NC_OWNER_UID},gid=${NC_OWNER_GID} "${NC_INSTALL_SNAPSHOT_PATH}/${NC_INSTALL_RELATIVE_PATH#/}" "${_NC_INSTALL_BIND}" 2>&1 | log

mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${_USER_FILES_BIND}" "uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" 2>&1 | log

mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
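One detail worth calling out: SQLDUMP_CMD is a bash array rather than a plain string, so that when the script later runs `"${SQLDUMP_CMD[@]}"`, each argument is passed through exactly as configured, even if it contains spaces. A self-contained illustration of the pattern, with printf standing in for the real dump command:

```shell
#!/bin/bash
set -euo pipefail

# Stand-in for SQLDUMP_CMD: the quoted argument with a space stays ONE argument.
CMD=(printf '<%s>\n' "two words" plain)
"${CMD[@]}"
# prints "<two words>" and then "<plain>"
```

Had CMD been a plain string expanded unquoted, "two" and "words" would have arrived as separate arguments.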
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create an LVM snapshot of the logical volume backing Nextcloud's installation directory and a btrfs snapshot of the subvolume backing the user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied under the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"
# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password config, ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")


# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name and mountpoint
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/nc-backup/nextcloud"

USER_FILES_FSTYPE="btrfs"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Directory for temporarily mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
cleanup > /dev/null 2>&1 ||:

# Save the current process id to the pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "${NC_INSTALL_SNAPSHOT_NAME}" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log

btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Mount all individual snapshots into a common parent directory
log "Create bind mounts..."

mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${_NC_INSTALL_BIND}" "uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" 2>&1 | log

mount --bind -o ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID} "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log

mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2>/dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
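The final `cat "${kopia_logfile%.log}"*.log` relies on bash's `%` parameter expansion: it strips the trailing `.log` from the mktemp path so the result can be reused as a glob prefix that also matches any rotated log files kopia may have produced. In isolation:

```shell
#!/bin/bash
set -euo pipefail

# A mktemp-style name as produced by the script (example value)
kopia_logfile="/tmp/kopia-snapshot.Ab3dE.log"

# "${var%.log}" removes the shortest trailing match of ".log"
echo "${kopia_logfile%.log}"
# prints /tmp/kopia-snapshot.Ab3dE
```

Appending `*.log` to that prefix then matches `kopia-snapshot.Ab3dE.log` as well as names like `kopia-snapshot.Ab3dE.1.log`.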
#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create an LVM snapshot of the logical volume backing Nextcloud's installation directory and a btrfs snapshot of the subvolume backing the user files. Then it will start an sqldump of the database
# - Afterwards, all 3 snapshots will be mounted/copied under the same parent directory and uploaded to a kopia repository
### ========== ###

### CONFIGURATION

# file for writing logs. Can be left empty to only log to stdout
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud files directory config
# Path to the btrfs subvolume containing your NC files directory
USER_FILES_VOLUME="/mnt/data/ncdata"
# Path where the (temporary) btrfs snapshot for the NC files directory should be created
USER_FILES_SNAPSHOT_PATH="/mnt/data/.nc_files_snapshot"
# Adjust this if the nc files directory is not in the root of the backing filesystem volume
USER_FILES_RELATIVE_PATH="/"
# DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an sqldump (including parameters like user, host, dbname, password config, ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns for your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"


# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" "${DB_NAME}")

# Set up any tags here that you want to be added to your kopia snapshot (comma separated in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name and mountpoint
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"
NC_INSTALL_SNAPSHOT_PATH="/run/nc-backup/nextcloud"

USER_FILES_FSTYPE="btrfs"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# Directory for temporarily mounting the rootfs snapshot
_ROOT_MOUNT="/run/mnt/nc-rootfs"
# file for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" 2>&1 | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" 2>&1 | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    umount "${NC_INSTALL_SNAPSHOT_PATH}"
    rm -rf "${NC_INSTALL_SNAPSHOT_PATH}"
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    btrfs subvolume delete "$USER_FILES_SNAPSHOT_PATH"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an lvm snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# cleanup leftovers from the last run if any
170cleanup > /dev/null 2>&1 ||:
171
172# Save current process id to pid file
173echo "$$" > "$_PID_FILE"
174
175# cleanup will be run at the end of the script whether it fails or succeeds
176trap 'cleanup 2>&1 | log' EXIT
177
178# Make sure directories exist
179mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$USER_FILES_MOUNT" "$NC_INSTALL_MOUNT" "$SQL_DUMP_TARGET_DIR"
180
181timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
182log "Starting Nextcloud Snapshot at $timestamp"
183
184# Snapshot disks
185log "Create snapshots..."
186
187lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log
188
189
190btrfs subvolume snapshot -r "${USER_FILES_VOLUME}" "${USER_FILES_SNAPSHOT_PATH}" 2>&1 | log
191
192log "Done."
193
194# Dump DB
195log "Create DB dump..."
196"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
197log "Done."
198
199# Bind mount all individual snapshots into a common parent directory
200log "Create bind mounts..."
201
202mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${_NC_INSTALL_BIND}" "uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" 2>&1 | log
203
204
205mount --bind -o ro,uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID} "${USER_FILES_SNAPSHOT_PATH}/${USER_FILES_RELATIVE_PATH#/}" "${_USER_FILES_BIND}" 2>&1 | log
206
207mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
208log "Done."
209
210# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
211log "Create kopia snapshot..."
212# Temporary log file as a workaround to pass kopia logs to our log function
213kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
214tail -f "$kopia_logfile" &
215kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
216cat "${kopia_logfile%.log}"*.log | log > /dev/null
217log "Done."
Variant: MariaDB/MySQL database, installation directory and user files both on LVM:

#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create LVM snapshots of the volumes backing the Nextcloud installation directory and the user files. Then it will start an SQL dump of the database.
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository.
### ========== ###

### CONFIGURATION

# File for writing logs. Can be left empty to only log to stdout.
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud user files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your user files directory
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your user files directory

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an SQL dump (including parameters like user, host, dbname, password config ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="3306"

# mysqldump will be supplied with the db credentials by writing them to ~/.my.cnf
MYCNF_PATH="$HOME/.my.cnf"
install -m 0600 <(echo -e "[mysqldump]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "${MYCNF_PATH}"
SQLDUMP_CMD=(mysqldump --host="${DB_HOST}" --port="${DB_PORT}" --single-transaction --skip-lock-tables "${DB_NAME}")


# Set up any tags here that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# File for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${MYCNF_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an LVM snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# Clean up leftovers from the last run, if any
cleanup > /dev/null 2>&1 ||:

# Save the current process id to the pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script, whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure all directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."

mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${_NC_INSTALL_BIND}" "uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" 2>&1 | log

mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${_USER_FILES_BIND}" "uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" 2>&1 | log

mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2> /dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."
Variant: PostgreSQL database, installation directory and user files both on LVM:

#!/bin/bash

set -euo pipefail

### ========== ###
# Procedure
# - The script will create LVM snapshots of the volumes backing the Nextcloud installation directory and the user files. Then it will start an SQL dump of the database.
# - Afterwards, all 3 snapshots will be mounted/copied into the same parent directory and uploaded to a kopia repository.
### ========== ###

### CONFIGURATION

# File for writing logs. Can be left empty to only log to stdout.
LOG_FILE="/var/log/nc-backup.log"

## Nextcloud installation directory config
NC_INSTALL_VG="lvm-1" # Adjust to match the LVM volume group containing your nc installation directory
NC_INSTALL_VOLUME="nextcloud" # Adjust to match the logical volume name containing your nc installation directory
# Adjust this if the nc installation directory is not in the root of the backing filesystem volume
NC_INSTALL_RELATIVE_PATH="/"

## Nextcloud user files directory config
USER_FILES_VG="lvm-1" # Adjust to match the LVM volume group containing your user files directory
USER_FILES_LV="nc-files" # Adjust to match the logical volume name containing your user files directory

## DB dump config
# Path where to temporarily store the database dump (should have enough free space)
SQL_DUMP_TARGET_DIR="/var/nc-backup-db"
# Whatever command can be used to create an SQL dump (including parameters like user, host, dbname, password config ...)
# - If your DB runs inside docker, just prepend the command with whatever is needed to run a command inside the container,
#   for example: SQLDUMP_CMD=(docker exec <containername> pg_dump -U postgres -d nc)
# - For MariaDB (with the InnoDB engine), make sure to use --single-transaction (and optionally --skip-lock-tables to avoid slowdowns on your system) to avoid side effects.
#   See https://mysqldump.guru/mysqldump-single-transaction-flag.html for more info.
# - For PostgreSQL, no adjustments are needed

# Configure the connection options for your database:
DB_USER="nextcloud"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="127.0.0.1"
DB_PORT="5432"


# pg_dump will be supplied with the connection options by writing them to ~/.pgpass
PGPASS_PATH="${HOME}/.pgpass"
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "${PGPASS_PATH}"
SQLDUMP_CMD=(pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" "${DB_NAME}")

# Set up any tags here that you want to be added to your kopia snapshot (comma separated, in the format label:value)
KOPIA_TAGS="job:nextcloud-full-backup,nc-files:yes,nc-db:yes,nc-nextcloud:yes"

### ADVANCED/INTERNAL CONFIG - only adjust if you are sure that you know what you're doing ###
NC_INSTALL_FSTYPE="lvm"
# NC install LVM snapshot name
NC_INSTALL_SNAPSHOT_NAME="nc-backup-nextcloud"

USER_FILES_FSTYPE="lvm"
# NC files LVM snapshot name
USER_FILES_SNAPSHOT_NAME="nc-backup-nc-files"


# FORCE UID and GID that all files from the nextcloud install directory should have in the backup (e.g. that of your www-data user and group)
NC_OWNER_UID=33
NC_OWNER_GID=33

# FORCE UID and GID that all user files should have in the backup (e.g. that of your www-data user and group)
USER_FILES_OWNER_UID=33
USER_FILES_OWNER_GID=33

# Configure the path where all parts of the backup will be mapped to before transferring with Kopia
BACKUP_ROOT="/run/nc-backup"


### You usually should not need to touch the following variables

_USER_FILES_BIND="${BACKUP_ROOT}/nc_files"
_NC_INSTALL_BIND="${BACKUP_ROOT}/nextcloud"
_DB_BIND="${BACKUP_ROOT}/db"

# File for keeping track of running backup operations
_PID_FILE="/var/run/nc-backup.pid"


# Utility function for logging to both the log file and stdout
log() {
    local message fwd

    fwd=(echo)
    if [[ -n "$LOG_FILE" ]]
    then
        fwd=(tee -a "$LOG_FILE")
    fi

    if [[ -z "${1:-}" ]]
    then
        while read -r message
        do
            echo "\$$$>$(date -Is)=>" "${message}" | "${fwd[@]}"
        done
        return
    fi

    echo "\$$$>$(date -Is)=>" "${@}" | "${fwd[@]}"
}

# Utility function for cleaning up snapshots, pid file, etc.
cleanup() {
    set +eu
    echo "Cleanup..."

    rm -rf "${PGPASS_PATH}"

    umount "$_DB_BIND"
    umount "$_NC_INSTALL_BIND"
    umount "$_USER_FILES_BIND"

    rm -rf "$SQL_DUMP_TARGET_DIR"

    # Cleanup NC install dir snapshot
    lvchange -an -K "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"
    lvremove -y "/dev/${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}"

    # Cleanup NC files snapshot
    lvchange -an -K "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"
    lvremove -y "/dev/${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}"

    # Remove PID file
    rm "$_PID_FILE"
    echo "Done."
}


# Utility function for mounting an LVM snapshot to a mountpoint
mount_snapshot_lvm() {
    local snapshot mountpoint ownership
    snapshot="${1?}"
    mountpoint="${2?}"
    ownership="${3?}"
    mkdir -p "${mountpoint}"
    lvchange -ay -K "${snapshot}"
    mount -o "ro,${ownership}" "/dev/${snapshot}" "${mountpoint}"
}


## Detect whether a backup is still running (and abort in this case)
# If the backup has been running for more than 10 hours, it is assumed to be stuck and its process is killed
if [[ -f "$_PID_FILE" ]] && ps -p "$(cat "$_PID_FILE")" > /dev/null
then
    elapsed="$(ps -o etimes= -p "$(cat "$_PID_FILE")")"
    if [[ "${elapsed// /}" -gt $((10 * 3600)) ]]
    then
        log "Detected backup process going for over 10 hours, killing it..."
        kill -9 "$(cat "$_PID_FILE")"
    else
        log "Ongoing backup process detected. Aborting..."
        exit 0
    fi
fi

# Clean up leftovers from the last run, if any
cleanup > /dev/null 2>&1 ||:

# Save the current process id to the pid file
echo "$$" > "$_PID_FILE"

# cleanup will be run at the end of the script, whether it fails or succeeds
trap 'cleanup 2>&1 | log' EXIT

# Make sure all directories exist
mkdir -p "$_USER_FILES_BIND" "$_NC_INSTALL_BIND" "$_DB_BIND" "$SQL_DUMP_TARGET_DIR"

timestamp="$(date "+%Y-%m-%d %H:%M:%S %Z")"
log "Starting Nextcloud Snapshot at $timestamp"

# Snapshot disks
log "Create snapshots..."

lvcreate -s --name "$NC_INSTALL_SNAPSHOT_NAME" "${NC_INSTALL_VG}/${NC_INSTALL_VOLUME}" 2>&1 | log

lvcreate -s --name "${USER_FILES_SNAPSHOT_NAME}" "${USER_FILES_VG}/${USER_FILES_LV}" 2>&1 | log

log "Done."

# Dump DB
log "Create DB dump..."
"${SQLDUMP_CMD[@]}" > "${SQL_DUMP_TARGET_DIR}/db.sql"
log "Done."

# Bind mount all individual snapshots into a common parent directory
log "Create bind mounts..."

mount_snapshot_lvm "${NC_INSTALL_VG}/${NC_INSTALL_SNAPSHOT_NAME}" "${_NC_INSTALL_BIND}" "uid=${NC_OWNER_UID},gid=${NC_OWNER_GID}" 2>&1 | log

mount_snapshot_lvm "${USER_FILES_VG}/${USER_FILES_SNAPSHOT_NAME}" "${_USER_FILES_BIND}" "uid=${USER_FILES_OWNER_UID},gid=${USER_FILES_OWNER_GID}" 2>&1 | log

mount --bind -o ro,uid=root,gid=root "${SQL_DUMP_TARGET_DIR}" "${_DB_BIND}" 2>&1 | log
log "Done."

# Create the final snapshot with kopia using the default kopia repository config (if needed, you can change the config file with the --config-file parameter)
log "Create kopia snapshot..."
# Temporary log file as a workaround to pass kopia logs to our log function
kopia_logfile="$(mktemp --tmpdir kopia-snapshot.XXXXX.log)"
tail -f "$kopia_logfile" &
tail_pid=$!
kopia snapshot create --parallel=4 --tags "${KOPIA_TAGS}" --start-time="$timestamp" --log-file="$kopia_logfile" --file-log-level="info" "${BACKUP_ROOT}"
kill "$tail_pid" 2> /dev/null ||:
cat "${kopia_logfile%.log}"*.log | log > /dev/null
log "Done."

Running regular backups

Now set up the script as a cron job. If you want hourly backups (that's how I'm running it), you need do nothing more than drop it into /etc/cron.hourly/.

If you want a custom schedule, save the script in a separate location instead (e.g. /usr/sbin/) and create an entry in /etc/crontab, e.g.:

# minute hour day month weekday command
  30     */4  */2   *     *     bash /usr/sbin/nextcloud-backup.sh

which will run the script every 4 hours on every 2nd day of the month, at half past the hour.

Make sure to adjust the script's permissions so that it is executable but not readable by other users:

chmod 0500 /usr/sbin/nextcloud-backup.sh
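One caveat with the /etc/cron.hourly route on Debian-based systems: run-parts, which executes the scripts in that directory, skips file names containing dots, so the script must be dropped there without the .sh suffix. Here is a small sketch of the installation step; it uses throwaway temp directories standing in for the real paths so that it is safe to run anywhere - on a real server, the destination would be /etc/cron.hourly/nextcloud-backup.

```shell
# Stand-in paths so this sketch can run anywhere; replace with the real ones.
src_dir="$(mktemp -d)"
dest_dir="$(mktemp -d)"   # stands in for /etc/cron.hourly
printf '#!/bin/bash\necho "backup placeholder"\n' > "$src_dir/nextcloud-backup.sh"

# Install executable for root only, and WITHOUT the ".sh" suffix
# (run-parts ignores file names containing dots).
install -m 0500 "$src_dir/nextcloud-backup.sh" "$dest_dir/nextcloud-backup"

stat -c '%a' "$dest_dir/nextcloud-backup"   # → 500
```

`install` combines the copy and the chmod from above into one step; on the real system you would also pass `-o root -g root`.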

Restoring a backup

Now, let's have a look at restoring a backup:

Reuse or setup a Nextcloud server

In order to restore a snapshot, you need a Nextcloud server that is working fine (apart from missing your data). So either set one up, or reuse your existing server if you're confident that only the backed-up data (i.e. the Nextcloud installation directory, the user files directory or the database) was damaged. This server needs to match your previous one (at the time of the backup) as closely as possible (e.g. the database and PHP versions should be the same, or at least compatible).
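As a quick aid (my own sketch, not part of the original procedure), the relevant versions can be printed on both the old and the new server and compared side by side:

```shell
# Print the versions of the components that should match between the old and
# the new server. Only tools that are actually installed are listed.
for tool in php psql mysql; do
    if command -v "$tool" > /dev/null 2>&1; then
        printf '%s: %s\n' "$tool" "$("$tool" --version | head -n 1)"
    fi
done
```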

Restore a snapshot

The first thing to do is to have a look at the list of snapshots and restore one of them. Instead of the commands given here, you can also use the Kopia GUI.

# List the snapshots and find one that you want to restore
kopia snapshot list --all | less
# If we're not on a fresh server, remove the existing directories that will be restored
rm -rf /var/www/nextcloud/* /var/www/nextcloud/.[!.]*
rm -rf /mnt/data/ncdata/* /mnt/data/ncdata/.[!.]*
# Restore the snapshot to the original locations (see https://kopia.io/docs/reference/command-line/common/snapshot-restore/)
# Let's assume the id of the snapshot we want to restore is "kffbb7c28ea6c34d6cbe555d1cf80faa9"
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/nextcloud" /var/www/nextcloud
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/nc_files" /mnt/data/ncdata
kopia snapshot restore "kffbb7c28ea6c34d6cbe555d1cf80faa9/db.sql" /run/nc-restore-db.sql
# Finally, let's ensure the restored directories have the correct ownership.
# Assuming your nextcloud webserver is running under uid=33, do:
chown -R 33: /var/www/nextcloud
chown -R 33: /mnt/data/ncdata
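One extra step worth knowing about (my addition - the restore itself does not require it): if you restore onto a server that sync clients had already been talking to, Nextcloud's occ tool offers maintenance:data-fingerprint, which marks the server data as changed so desktop and mobile clients re-sync cleanly instead of trusting their stale state:

```shell
# Run as the webserver user (uid 33 / www-data in this guide's setup)
sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
```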

Now we need to restore the database from the restored /run/nc-restore-db.sql:

For PostgreSQL:

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=5432
# psql will be supplied with the connection options by writing them to ~/.pgpass
install -m 0600 <(echo "${DB_HOST}:${DB_PORT}:${DB_NAME}:${DB_USER}:${DB_PASSWORD}") "$HOME/.pgpass"
psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" -d "${DB_NAME}" < "/run/nc-restore-db.sql"

For MariaDB / MySQL:

DB_USER="root"
DB_NAME="nextcloud"
DB_PASSWORD="password"
DB_HOST="localhost"
DB_PORT=3306
# mysql will be supplied with the db credentials by writing them to ~/.my.cnf
install -m 0600 <(echo -e "[client]\nuser=${DB_USER}\npassword=${DB_PASSWORD}") "$HOME/.my.cnf"
mysql \
  --host="${DB_HOST}" \
  --port="${DB_PORT}" \
  "${DB_NAME}" < "/run/nc-restore-db.sql"

That's it! If anything in your setup changed, you might still have to adjust /var/www/nextcloud/config/config.php (e.g. db credentials, Redis address, ...). But other than that, your backup should be restored successfully at this point.
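For orientation, these are the config.php entries that most commonly need adjusting after a restore. The values below are placeholders, not taken from any particular setup:

```php
<?php
// Excerpt of /var/www/nextcloud/config/config.php - placeholder values, adjust to your setup
$CONFIG = array (
  'dbtype' => 'pgsql',       // or 'mysql' for MariaDB/MySQL
  'dbhost' => '127.0.0.1',
  'dbname' => 'nextcloud',
  'dbuser' => 'nextcloud',
  'dbpassword' => 'password',
  'redis' => array (
    'host' => '127.0.0.1',
    'port' => 6379,
  ),
);
```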

Conclusion

As you can see, producing great backups can be a bit involved. But in my opinion, they are more than worth it: once set up, they provide a quality of sleep that's otherwise hard to achieve as a sysadmin. :D Jokes aside, robust backups significantly reduce the worst case our service can suffer (in terms of data loss) and are therefore crucial to have.

Revisiting our criteria

Let's check how well we fared in terms of our criteria from part 1 of this blog series:

As you can see - nearly all boxes are ticked! The remaining box will be addressed in the upcoming 3rd part of this series, where I'll talk about backup monitoring. So stay tuned, and consider following me on Mastodon or pointing your feed reader at the RSS feed to make sure you won't miss it. :)