So you find yourself with a Linux server, configured exactly how you like it. Now you'd like to make sure you don't have to rebuild it from scratch if your hard drive dies or some other catastrophic event happens. Let's do that with borg backup, systemd, and a little shell scripting.
I back up a computer running Debian Sid, and a Raspberry Pi running Debian Bullseye to 3 targets. It looks something like this:
Since borg only works over SSH, most of my backup targets are also Linux computers, with the exception of Google Drive, which just uses rclone to copy the local borg repository to a folder in Google Drive.
With different backup targets all having different storage sizes, there are different kinds of backups I'd like to manage. For instance, I don't mind that much if copies of Linux ISOs are lost, so I just back them up nightly to a separate local hard drive, and to an air-gapped RAID array every so often.
borg backup script
borg backup provides a nice script in their docs to automate backups; have a read about it here.
This is a slightly modified version of that script, which makes use of environment variables and listfiles, to work better with my setup:
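Here's a sketch of what such a script can look like. It is loosely based on the script from the borg docs, and the retention policy, compression choice, and the `BORG_CMD` indirection (which lets you preview the commands with `BORG_CMD=echo`) are my illustrative assumptions, not the exact script I run. I've also left the btrfs snapshot handling out of this sketch for brevity:

```shell
#!/bin/bash
# Sketch of borg_backup.sh. Retention values, compression and the
# BORG_CMD indirection (set BORG_CMD=echo for a dry run) are
# illustrative assumptions -- adapt them to your own setup.
set -u

BORG_CMD=${BORG_CMD:-borg}

# Refuse to run unless the variables from the table below are present.
require_env() {
    for v in "${BORG_REPO:-}" "${BORG_PASSPHRASE:-}" "${PATTERN_LIST_LOCATION:-}"; do
        if [ -z "$v" ]; then
            echo "Error: BORG_REPO, BORG_PASSPHRASE and PATTERN_LIST_LOCATION must be set" >&2
            return 1
        fi
    done
}

backup() {
    require_env || return 1
    # NOTE: my real script also creates btrfs snapshots of @rootfs and
    # @home around this step; omitted here for brevity.
    # EXTRA_ARGS is deliberately unquoted so it word-splits into options.
    $BORG_CMD create --verbose --stats --show-rc \
        --compression lz4 \
        --patterns-from "$PATTERN_LIST_LOCATION" \
        ${EXTRA_ARGS:-} \
        "::{hostname}-{now}"
}

prune() {
    require_env || return 1
    $BORG_CMD prune --list --show-rc \
        --keep-daily 7 --keep-weekly 4 --keep-monthly 6
}

backup; backup_rc=$?
prune; prune_rc=$?

# Succeed only if both steps succeeded.
[ "$backup_rc" -eq 0 ] && [ "$prune_rc" -eq 0 ]
```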
If you don't have any particular desires, you can pretty much just copy/paste this into a file on your computer and `chmod +x` it to make it executable.
Environment Variables
Here's a rundown of the variables and how they need to be set:
| Environment Variable | Description |
|---|---|
| `BORG_REPO` | Either a local path or an `ssh://` path to a borg repository which has already been initialised |
| `BORG_PASSPHRASE` | The password for the aforementioned borg repo |
| `PATTERN_LIST_LOCATION` | The location of a patternfile; see here for help |
| `BTRFS_SNAP_LOCATION` | If you use btrfs, a location to temporarily store snapshots |
| `EXTRA_ARGS` | A string to pass to `borg create`; in my instance I set it as `--files-cache ctime,size` on backups using mergerfs as a source |
For testing, set all the environment variables to the appropriate values for your setup (e.g. `export BORG_REPO=/where/my/borg/repo/is`) and then run the script; it should succeed.
Templating with systemd
systemd has some handy functionality with the `@` symbol, whereby a template service can be run from the same service file, but with different overrides for each `service@foo` instance.
For our template service file, `/etc/systemd/system/borg@.service`, we just execute the script above as below:
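A minimal version of that template unit might look like the following. The script path `/usr/local/bin/borg_backup.sh` is an assumption, so point `ExecStart` wherever you saved your script:

```ini
[Unit]
Description=borg backup (%I)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/borg_backup.sh
```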
The `%I` will be replaced by `foo` if we were to call `systemctl start borg@foo`. (I'll use `foo` as an example instance from here on.)
Getting environment variables into our systemd unit
Currently, if we were to start this systemd unit with `systemctl start borg@foo`, it would fail since there are no environment variables set. To set them for an instance of `borg@`, we can create an override file in a directory next to `borg@.service`.
To create these files, we can use `systemctl edit borg@foo`. This will give us an editor where we can edit the overrides. To set environment variables for our instance overrides, refer to the `override.conf` example file:
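For example (the values below are placeholders, so substitute your own repo path, passphrase and patternfile location):

```ini
[Service]
Environment="BORG_REPO=ssh://backup@backuphost/./borg-repo"
Environment="BORG_PASSPHRASE=changeme"
Environment="PATTERN_LIST_LOCATION=/etc/borg/patterns.lst"
```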
If we save this, we can see that `/etc/systemd/system/borg@foo.service.d/override.conf` has been created.
Since this file contains secrets, it's wise to secure it properly: `chmod -R 740 /etc/systemd/system/borg@foo.service.d` will make sure that only root can read and write it.
Now, if we run `systemctl start borg@foo`, it should run our script `borg_backup.sh` with the environment variables set in our override file. We can check the status with `systemctl status borg@foo`, or, for just the logs, I use `journalctl -efu borg@foo`.
Running automatically with systemd
To run this according to a schedule, we'll use another systemd unit and a systemd timer.
Our systemd unit can look like this:
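Something along these lines, assuming it is saved as `/etc/systemd/system/backups.service` (so the matching `backups.timer` can trigger it) and that the wrapper script lives at `/usr/local/bin/run_backups.sh`:

```ini
[Unit]
Description=Run all backups

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run_backups.sh
```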
The script it’s targeting looks like this:
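Here's a sketch of such a wrapper. The instance names `local` and `offsite` are examples, and the `SYSTEMCTL` indirection (so `SYSTEMCTL=echo` previews the commands) is my assumption:

```shell
#!/bin/bash
# Hypothetical run_backups.sh: kick off each borg@ instance in turn.
# "local" and "offsite" are example instance names -- use your own.
set -u
SYSTEMCTL=${SYSTEMCTL:-systemctl}

run_backups() {
    # Pre-backup tasks (btrbk, database dumps, ...) could go here.
    for instance in local offsite; do
        $SYSTEMCTL start "borg@${instance}" || return 1
    done
}

run_backups
```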
And our systemd timer can look like this:
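Something like the following, saved as `/etc/systemd/system/backups.timer`; the 03:00 nightly schedule is just an example:

```ini
[Unit]
Description=Nightly backups

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```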
How you set this up is down to personal preference. `run_backups.sh` could also be used to run other backups or tasks you'd like to perform before or after running your borg backups. For instance, I use it to run btrbk as well as my borg backups, just in case I want to roll back the entire system.
To get our timer to actually run, use `systemctl start backups.timer && systemctl enable backups.timer`.
Notes
btrfs
If you want to use this with btrfs snapshotting, please modify the script to match the subvolumes you want to snapshot. At the moment, it’s going to try and snapshot @rootfs and @home, so you’ll need to make those modifications, both where the snapshot is created, and where the snapshot is deleted.
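For illustration, the snapshot bookkeeping could look something like the helpers below, called around the `borg create` step. The mount points, subvolume names, fallback snapshot directory and the `BTRFS_CMD` indirection (set `BTRFS_CMD=echo` to preview the commands) are all assumptions about your layout:

```shell
#!/bin/bash
# Illustrative btrfs snapshot helpers: snapshot / (@rootfs) and /home
# (@home) read-only before the borg run, then delete them afterwards.
# Mount points, subvolume names and the BTRFS_CMD=echo dry run are
# assumptions -- match them to your own layout.
BTRFS_CMD=${BTRFS_CMD:-btrfs}
snap_dir=${BTRFS_SNAP_LOCATION:-/tmp/borg-snaps}

make_snapshots() {
    $BTRFS_CMD subvolume snapshot -r / "${snap_dir}/@rootfs" || return 1
    $BTRFS_CMD subvolume snapshot -r /home "${snap_dir}/@home" || return 1
}

delete_snapshots() {
    $BTRFS_CMD subvolume delete "${snap_dir}/@rootfs" || return 1
    $BTRFS_CMD subvolume delete "${snap_dir}/@home" || return 1
}
```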
Conclusion
- By leveraging systemd to hold all our environment variables, we don't expose our secrets in the process name.
- Logging is simplified and organised for us by our systemd instances.
- Making changes to our master `borg_backup.sh` script applies to all of our systemd instances, instead of manually having to update each backup script n times.
Here’s everything visualised in a rudimentary diagram: