BorgBackup

We use BorgBackup to provide our customers with secure, efficient, and easy-to-configure backups as part of our standard managed and part-managed hosting.

What is BorgBackup?

BorgBackup (often shortened to Borg) is a deduplicating archiver with compression and encryption, providing efficient, secure data backups. We use it as part of our standard managed and part-managed hosting, and you can see how we use it in the "How we use BorgBackup" section below.

What does it do?

The main goal of Borg is to provide an efficient and secure way to back up data. Its data deduplication makes Borg suitable for daily backups, since only changes are stored, and its authenticated encryption makes it suitable for backups to targets that are not fully trusted.

The main features of BorgBackup

  • Space-efficient storage of backups.
  • Speed: local caching of file/chunk index data and quick detection of unmodified files.
  • Secure, authenticated encryption.
  • Compression: LZ4, zlib, LZMA, zstd (since Borg 1.1.4).
  • Backup archives are mountable as userspace filesystems for easy interactive examination and restores (e.g. using a regular file manager).
  • Easy installation on multiple platforms: Linux, macOS, BSD, with single-file binaries that require no installation.
  • Free software (BSD license).
  • Backed by a large and active open-source community.
  • Offsite backups: data can be stored on any remote host accessible over SSH.

BorgBackup deduplication

Compared to some other deduplication approaches, the method does NOT depend on:

  • file/directory names staying the same: you can move your data around without killing the deduplication, even between machines sharing a repo.
  • complete files or timestamps staying the same: if a big file changes a little, only a few new chunks need to be stored, which makes it a good fit for raw disk images or VMs.
  • the absolute position of a data chunk inside a file: data may get shifted around and will still be found by the deduplication algorithm.
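
You can see this for yourself with a throwaway repository. The sketch below (repository path and data file are just illustrative) backs up the same large file twice; the second run should report only a tiny deduplicated size, because almost all of its chunks already exist in the repository:

borg init --encryption=repokey /tmp/borg-demo
borg create --stats /tmp/borg-demo::first /srv/demo/big-file.img
# change a small part of the file, then back it up again
borg create --stats /tmp/borg-demo::second /srv/demo/big-file.img
# compare the "Deduplicated size" figures for the two archives
borg info /tmp/borg-demo::first
borg info /tmp/borg-demo::second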

BorgBackup compression

Valid compression specifiers apart from “none” (do not compress) are:

lz4

Use lz4 compression. Very high speed, very low compression. (default)

zstd[,L]

Use zstd (“zstandard”) compression, a modern wide-range algorithm. If you do not give the compression level L (ranging from 1 to 22), it will use level 3. Archives compressed with zstd are not compatible with BorgBackup < 1.1.4.

zlib[,L]

Use zlib (“gz”) compression. Medium speed, medium compression. If you do not give the compression level L (ranging from 0 to 9), it will use level 6.

lzma[,L]

Use lzma (“xz”) compression. Low speed, high compression. If you do not give the compression level L (ranging from 0 to 9), it will use level 6. Giving levels above 6 is counterproductive, because it does not compress better due to the buffer size used by BorgBackup (and it wastes CPU cycles and RAM).

auto,C[,L]

Use a built-in heuristic to decide per chunk whether to compress or not. The heuristic tries with lz4 whether the data is compressible. For incompressible data, it will not use compression (uses “none”). For compressible data, it uses the given C[,L] compression - with C[,L] being any valid compression specifier.
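
These specifiers are passed to borg create via its --compression option. For example (the repository and archive names here are just placeholders):

borg create --compression zstd,10 $REPOSITORY::home-2017-02-17 /home
borg create --compression auto,lzma,6 $REPOSITORY::etc-2017-02-17 /etc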

Where do you get BorgBackup?

You can get BorgBackup from the project website at www.borgbackup.org

The GitHub repository and issue tracker are at github.com/borgbackup/borg

Some stats

There’s some useful information at openhub.net to aid evaluation, tracking and comparison of BorgBackup with other free and open source software.

BorgBackup documentation

The documentation can be found at borgbackup.readthedocs.io

The ManKier project also has a man page for the borg command.

List of other FOSS Linux backup solutions

The restic project maintains a fairly exhaustive list of free and open-source backup solutions for Linux in its restic/others repository on GitHub.

How we use BorgBackup

We have a standard build for our part- and fully-managed hosting customers:

By default borg is set to run every day at the server’s daily cron job time:

  • back up all your configuration files in /etc and retain 28 daily and 12 monthly backups
  • back up all your sites and user data in /home and retain 7 daily, 4 weekly, and 6 monthly backups
  • back up all email account data in /srv/mail and retain 7 daily, 2 weekly, and 2 monthly backups
  • back up MySQL database backups kept in /var/lib/automysqlbackup and retain 3 daily, 2 weekly, and 1 monthly backup
  • back up calendar data stored in /srv/radicale and retain 7 daily, 2 weekly, and 2 monthly backups

[Note that to make backups of MySQL databases, we also use automysqlbackup, which stores a gzipped mysqldump file for each database, every day, in /var/lib/automysqlbackup.]
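
Expressed as borg prune options, that retention policy looks roughly like this (a sketch only; the archive name prefixes match the archives shown below, and the real schedule is driven by our standard build):

borg prune --prefix etc- --keep-daily 28 --keep-monthly 12 $REPOSITORY
borg prune --prefix home- --keep-daily 7 --keep-weekly 4 --keep-monthly 6 $REPOSITORY
borg prune --prefix mail- --keep-daily 7 --keep-weekly 2 --keep-monthly 2 $REPOSITORY
borg prune --prefix mysql- --keep-daily 3 --keep-weekly 2 --keep-monthly 1 $REPOSITORY
borg prune --prefix radicale- --keep-daily 7 --keep-weekly 2 --keep-monthly 2 $REPOSITORY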

Browsing Backups

  • . /etc/faelix/moose/borgbackup to get the configuration loaded
  • export BORG_PASSPHRASE so you don’t need to type the passphrase
  • borg list $REPOSITORY to show all the backups
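
Put together, those steps look like this at a root shell:

. /etc/faelix/moose/borgbackup
export BORG_PASSPHRASE
borg list $REPOSITORY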

For example:

etc-2017-02-17            Fri, 2017-02-17 04:11:06
home-2017-02-17           Fri, 2017-02-17 04:11:10
mail-2017-02-17           Fri, 2017-02-17 04:11:14
radicale-2017-02-17       Fri, 2017-02-17 04:11:17
mysql-2017-02-17          Fri, 2017-02-17 04:11:21

Here we have backups of /etc, /home, /srv/mail, /srv/radicale and /var/lib/automysqlbackup all taken on 2017-02-17 (17th February 2017).

To show information about a specific backup archive, say mysql-2017-02-17:

  • . /etc/faelix/moose/borgbackup to get the configuration loaded
  • export BORG_PASSPHRASE so you don’t need to type the passphrase
  • borg info $REPOSITORY::mysql-2017-02-17 will show you the information

In our example:

Name: mysql-2017-02-17
Fingerprint: 3c01cef01f61b229325a76f58de050ef8b381a14abe2ed4846feb55aec97e243
Hostname: moose.faelix.net
Username: root
Time (start): Fri, 2017-02-17 04:11:21
Time (end): Fri, 2017-02-17 04:11:21
Command line: /usr/local/bin/borg create --compression none -v --stats borg@borg0.g.faelix.net:/home/borg/moose.faelix.net::mysql-{now:%Y-%m-%d} /var/lib/automysqlbackup/latest
Number of files: 6

Name             Original size    Compressed size    Deduplicated size
This archive           7.40 kB            7.74 kB              7.74 kB
All archives         192.25 MB           90.09 MB             11.01 MB

                 Unique chunks       Total chunks
Chunk index               3348              36053

This shows us when the backup was created, what it is a backup of, and how large it is (compressed, and deduplicated).

Making an Extra Backup

Sometimes you want to take an extra backup, either just before or just after a large change. We’ve made this easy with just one command:

moose.backup before-upgrade will create backups of everything, giving them names ending YYYY-MM-DD-before-upgrade (where YYYY-MM-DD is the date). You might want to take a backup after your changes as well, just for extra peace of mind.
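
Under the hood this boils down to a set of borg create runs with the chosen suffix appended to each archive name. A simplified, hypothetical sketch (the real moose.backup script is more involved):

#!/bin/sh
# hypothetical sketch only -- not the actual moose.backup script
suffix="$1"                        # e.g. "before-upgrade"
. /etc/faelix/moose/borgbackup     # loads $REPOSITORY and the passphrase
export BORG_PASSPHRASE
date="$(date +%Y-%m-%d)"
borg create --stats "$REPOSITORY::etc-$date-$suffix" /etc
borg create --stats "$REPOSITORY::home-$date-$suffix" /home
borg create --stats "$REPOSITORY::mail-$date-$suffix" /srv/mail
borg create --stats "$REPOSITORY::radicale-$date-$suffix" /srv/radicale
borg create --stats "$REPOSITORY::mysql-$date-$suffix" /var/lib/automysqlbackup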

Restoring Data from Backup

  • . /etc/faelix/moose/borgbackup to get the configuration loaded
  • export BORG_PASSPHRASE so you don’t need to type the passphrase
  • borg mount $REPOSITORY /mnt to mount the backups so you can copy files out of previous versions
  • umount /mnt when you’re done
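
Putting those steps together, a restore session might look like this (the archive name and file path are purely illustrative):

. /etc/faelix/moose/borgbackup
export BORG_PASSPHRASE
borg mount $REPOSITORY /mnt
# browse the mounted archives and copy out whatever you need, e.g.:
cp /mnt/etc-2017-02-17/etc/ssh/sshd_config /root/sshd_config.restored
umount /mnt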