Compare commits

...

93 Commits

Author SHA1 Message Date
a1cf833079 initial-setup: include systemd unit in vars 2025-03-15 12:52:39 -04:00
2024519b16 initial-setup: fix script permissions 2025-03-15 11:12:53 -04:00
90e9310ae0 Update README, add logs.sh 2025-03-15 11:08:52 -04:00
a9ab963d49 initial-setup: support different systemd unit names 2025-03-14 23:34:32 -04:00
eec423bc6f initial-setup: bugfixes 2025-03-14 15:00:29 -04:00
8caf8e04fb initial-setup: clean up, add PORT option, improve --update 2025-03-14 14:16:18 -04:00
b16c4d7330 Support recovery by setting up an existing repo 2024-07-01 14:24:28 -04:00
59ceff6d1a Update Pipfile.lock for newer Cython compatibility 2023-11-03 13:52:32 -04:00
9703c5fc72 bin: rebuild borg.x86_64 with staticx 0.13.8 2022-09-17 11:55:44 -04:00
c68b867b50 Add notes about host ID 2021-11-16 10:11:08 -05:00
7dea155f58 use python3 when getting UUID 2021-11-16 09:55:58 -05:00
f6e8863128 backup: adjust email formatting 2021-10-26 21:37:53 -04:00
342e2cd0e8 Update README 2021-10-26 16:03:52 -04:00
f14b0d2d4d backup.py: fix notification error 2021-10-26 16:00:46 -04:00
b74f9c75a2 initial-setup: fix --update 2021-10-26 15:55:32 -04:00
9b38c248d8 borg: add ARM binary for Pi; update scripts to use it 2021-10-26 15:50:08 -04:00
46f9f98860 backup: show errors at top of email notification 2021-10-26 13:24:39 -04:00
dc7d72b2da initial-setup: make systemd units restart on failure 2021-10-26 13:24:28 -04:00
cb12e09c46 backup: rework output to make notification emails easier to read 2021-10-26 13:20:21 -04:00
1115d1f821 backup: rename pstr() helper to b2s()
The helper is just bytes->str conversion with errors=backslashreplace,
which we can use for more than just paths.
2021-10-26 12:54:41 -04:00
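The rename fits what the helper actually does; a minimal sketch of the conversion, mirroring the helper as it appears in backup.py later in this diff:

```python
def b2s(raw: bytes) -> str:
    # bytes -> str, escaping undecodable bytes as \xNN so arbitrary
    # filenames (paths or otherwise) can be logged without raising
    # UnicodeDecodeError.
    return raw.decode(errors='backslashreplace')

print(b2s(b'caf\xe9'))  # 0xe9 is not valid UTF-8 here; prints caf\xe9
```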
e2f92ccb7a readme: fix typo 2021-10-19 14:48:28 -04:00
a15cb5b07d all: remove concept of read-write key
We don't need a read-write key: we can just SSH directly to
jim-backups@backup.jim.sh instead and run commands that way.
Remove read-write key and document it in the README.

Also add some tools to update the README variables on updates.
2021-10-19 14:46:34 -04:00
51c5b5e9ca backup: fix prune archive name 2021-10-19 12:28:20 -04:00
ed8ea15aa7 backup: only prune archives that match default naming pattern 2021-10-19 12:18:46 -04:00
481e01896b backup: fix issue with ignoring "changed while we backed it up" warnings 2021-10-19 12:14:42 -04:00
e85e08cace backup: call prune after backup; add run_borg helper
Automatically prunes after backup, although this doesn't actually
free up space (because we're in append-only mode).
2021-10-19 11:16:17 -04:00
4b7802ad5f backup: flush stderr after all writes 2021-10-18 19:35:39 -04:00
4a30b82e39 backup: replace simple max size with rule-based system
Now individual files or patterns can have their own maximum sizes.
2021-10-18 17:43:33 -04:00
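The shape of such rules can be inferred from the Config class later in this diff: `max-size-rules` maps a human-friendly size limit to a block of patterns, and backup.py sorts the limits descending and applies the first pattern match. A hypothetical config fragment (the key name is from backup.py; the sizes and patterns are invented for illustration):

```yaml
max-size-rules:
  1GiB: |
    *.vmdk
  100MiB: |
    *.log
    /var/cache/**
```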
ac12b42cad backup: rename force-include to unexclude
Force-include is a misnomer because it won't include files
that weren't considered at all (like files in an excluded subdir).
Instead, call it "unexclude" to make it slightly clearer that this
will just override the exclusions.
2021-10-18 16:25:23 -04:00
97b9060344 make: add targets to help view log status 2021-10-18 15:13:10 -04:00
16fe205715 backup: remove pathnames from progress output
It clutters up the output and isn't super useful
2021-10-18 15:03:32 -04:00
1fb8645b27 build: make Borg.bin a static binary
This prevents it from e.g. needing a specific glibc version on the
client.
2021-10-17 21:31:16 -04:00
b1748455a0 setup: add pipenv check 2021-10-17 21:16:52 -04:00
d413ea3b82 setup: prevent pager in systemctl list-timers 2021-10-17 21:03:15 -04:00
1932a76f72 prune: remove -v option to support old ssh-add 2021-10-17 20:05:47 -04:00
81d430b56b backup: print exceptions from reader thread 2021-10-17 20:02:46 -04:00
2d89e530be backup: split handling of log_message and progress_message 2021-10-17 20:01:55 -04:00
3024cf2e69 backup: stop main thread if reader thread dies unexpectedly
_thread.interrupt_main will trigger KeyboardInterrupt in the main thread.
2021-10-17 20:00:55 -04:00
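A minimal standalone demonstration of this mechanism (not code from the repo): `_thread.interrupt_main()` called from a worker thread raises KeyboardInterrupt in the main thread.

```python
import _thread
import threading
import time

caught = False

def reader():
    # Stand-in for the reader thread dying unexpectedly: signal the
    # main thread rather than failing silently.
    time.sleep(0.2)
    _thread.interrupt_main()

threading.Thread(target=reader).start()
try:
    for _ in range(100):
        time.sleep(0.05)  # the interrupt lands on the main thread
except KeyboardInterrupt:
    caught = True
    print("main thread got KeyboardInterrupt")
```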
a540f4336f backup: fix nonlocal variable issue with errors 2021-10-17 19:31:56 -04:00
643f41d7f7 backup: tweak types for python 3.7 compatibility 2021-10-17 19:31:36 -04:00
1beda9d613 setup: pick host-dependent start time 2021-10-17 09:26:41 -04:00
8d7282eac1 borg.sh: fix ssh option for read-write mode 2021-10-17 08:56:53 -04:00
2b81094a32 backup: fix borg exit code handling for ret=0 2021-10-17 01:05:03 -04:00
e7b0320c9f backup: fix ignoring of harmless borg warnings 2021-10-17 00:55:49 -04:00
a18b9ed6d0 backup: track errors/warnings from borg; add prefix to them
This also ignores the "file changed while we backed it up" error, because
that isn't important enough to warrant sending an email.
2021-10-17 00:16:43 -04:00
756dbe1898 backup: fix mypy-detected errors 2021-10-17 00:14:14 -04:00
ed1d79d400 makefile: reload systemd unit files after rebase 2021-10-16 23:47:34 -04:00
2caceedea7 backup: show detailed progress from borg 2021-10-16 23:40:36 -04:00
42edd0225d setup: fix bitwarden entry name 2021-10-16 19:21:09 -04:00
ad13bb343a make: add helper to rebase local branches to incorporate upstream changes 2021-10-16 18:52:34 -04:00
f2b47dcba2 backup: parse vars.sh and use hostname from that 2021-10-16 18:52:34 -04:00
d1d561cb70 setup: allow hostname to be overridden 2021-10-16 18:48:43 -04:00
6066188ef1 vars: remove duplicate host_id 2021-10-16 09:45:50 -04:00
f70bffed37 misc: ignore .venv dir 2021-10-16 09:37:04 -04:00
979dfd892f backup: revert to catching fewer exceptions
We specifically don't want to catch BrokenPipeError; just list
file-related ones that we might expect to see if we hit bad
permissions, disk errors, or race conditions.
2021-10-16 09:37:04 -04:00
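The distinction matters because BrokenPipeError is itself an OSError subclass, so a blanket `except OSError` would swallow it; listing only the file-related subclasses (the tuple backup.py's scan() catches) lets it propagate. A quick stdlib check of the hierarchy:

```python
# The exact tuple caught in backup.py's scan():
file_errors = (FileNotFoundError, IsADirectoryError,
               NotADirectoryError, PermissionError)

# All are OSError subclasses, and so is BrokenPipeError...
assert all(issubclass(e, OSError) for e in file_errors)
assert issubclass(BrokenPipeError, OSError)
# ...but BrokenPipeError is not among the caught ones, so it escapes.
assert not issubclass(BrokenPipeError, file_errors)
```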
ab6dce0c2c borg: update binary to fix upstream bug 6009 2021-10-16 09:37:04 -04:00
aff447c1b6 notify: fix notify.sh to work with server side; adjust text 2021-10-16 09:37:03 -04:00
f7e9c3e232 borg.sh: only try ssh keys, not password authentication 2021-10-16 09:37:03 -04:00
d168c5bf54 backup: catch all OSError exceptions while accessing files
We might see these if files change during the scan, for example.
2021-10-16 09:37:03 -04:00
31d88f9345 backup: print final results and run notification script on error 2021-10-16 09:37:03 -04:00
ccf54b98d7 backup: fix archive name
Was overly quoted from when this was a shell script
2021-10-16 09:37:03 -04:00
59ad2b5b4d backup: capture borg output for later reporting 2021-10-16 09:37:03 -04:00
0c74f1676c backup: add bold option to log(); simplify logic 2021-10-16 09:37:03 -04:00
5e06ebd822 backup: change some warnings into errors 2021-10-16 09:37:03 -04:00
929a323cf0 notify: add ssh key for running remote notifications; add notify.sh 2021-10-16 09:37:03 -04:00
86bb72f201 setup: fix borg path in initial connection test 2021-10-16 09:37:03 -04:00
54437456ae prune: use new vars.sh 2021-10-16 09:37:03 -04:00
c7a6d08665 initial-setup: generate vars.sh instead of borg.sh; commit borg.sh
Put setup-time variables into a generated vars.sh, and put borg.sh
directly into the repo.
2021-10-16 09:36:56 -04:00
3c5dcd2189 config: remove /efi, it probably doesn't exist 2021-10-15 23:21:28 -04:00
1a44035ae8 makefile: fix test-backup target 2021-10-15 23:21:14 -04:00
6830daa2b1 prune: save password in an SSH agent, and compact after pruning
Since we want to run two commands, use a temporary SSH agent
to hold the key, so that the user only has to enter the password
once.
2021-10-14 15:34:10 -04:00
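A sketch of the temporary-agent pattern (prune.sh itself is not shown in this diff; the key path and borg commands below are illustrative only). Running `ssh-agent sh -c '...'` scopes the agent to the subshell, so it exits when the commands finish and the key never outlives them.

```shell
#!/bin/sh
command -v ssh-agent >/dev/null || { echo "ssh-agent not installed"; exit 0; }
ssh-agent sh -c '
    echo "temporary agent: $SSH_AUTH_SOCK"
    # ssh-add /opt/borg/ssh/key   # hypothetical path; user enters password once
    # ./borg.sh --rw prune ...    # both commands then reuse the loaded key
    # ./borg.sh --rw compact
'
```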
69bfecd657 borg: include borg binary in repository
Put our own binary in here, so we can keep it updated with local
patches more easily.  Also add build instructions.
This one is built from
https://github.com/borgbackup/borg/pull/6011
2021-10-14 13:26:24 -04:00
43ceb39120 backup: support multiple roots; remove "relative absolute path" nonsense
Support multiple roots in config file, not just one.
The absolute path stuff before would match against exclusions/inclusions
based on paths from the root dir, but that doesn't make sense when we
have multiple roots, and added needless complexity.
2021-10-14 12:33:07 -04:00
35c72e7ce6 backup: calculate size only once
We need to calculate size so we get an idea of actual used disk space
(which is closer to how much maximum space will be used in the backup,
in case files have huge holes).  Calculate it once to avoid errors.
2021-10-14 12:33:07 -04:00
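`st_blocks` (counted in 512-byte units) reflects allocated space, whereas `st_size` includes holes; a quick stdlib comparison of the two, assuming a filesystem with sparse-file support:

```python
import os
import tempfile

# Create a sparse file: large apparent size, no data written.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(10 * 1024 * 1024)  # 10 MiB hole
    name = f.name

st = os.lstat(name)
apparent = st.st_size
allocated = st.st_blocks * 512  # the calculation backup.py uses
print(f"apparent={apparent} allocated={allocated}")
os.unlink(name)
```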
27213033a2 backup: use decorated paths for matching patterns
By ensuring that directory names end in '/', the behavior of
"match only directories if the pattern ends with /" comes for
free based on how wcmatch.glob works, so we don't need to run
the regex match twice.
2021-10-14 12:33:07 -04:00
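The trailing-slash trick can be shown with a plain regex (the repo uses wcmatch.glob.translate to build its patterns; this hand-built equivalent of the glob `**/cache/` is just illustrative): a pattern ending in `/` can only ever match a decorated directory path, so "match directories only" needs no second pass.

```python
import re

# Hand-built regex equivalent of the glob "**/cache/".
dir_only = re.compile(rb'^(?:.*/)?cache/$')

assert dir_only.match(b'home/user/cache/')      # decorated directory path
assert not dir_only.match(b'home/user/cache')   # regular file of same name
```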
5152a316c6 backup: use helper to format binary paths as strings 2021-10-14 12:33:07 -04:00
46195daaaa Improve borg process spawning and result checking 2021-10-14 12:33:07 -04:00
ffe13a45e6 Add --debug option 2021-10-14 12:33:07 -04:00
34817890b2 Update README cheat sheet 2021-10-14 12:33:07 -04:00
0af42b8217 Fix shebang line after setup 2021-10-13 15:15:00 -04:00
738573a292 Fix type issue 2021-10-13 15:02:48 -04:00
4a707968ab Spawn borg and pass input 2021-10-13 14:58:44 -04:00
356f6db2ca Remove debug prints 2021-10-13 14:46:58 -04:00
e72564436c Update git setup 2021-10-13 14:46:25 -04:00
74e9e82117 Change to borg dir for setup 2021-10-13 14:42:24 -04:00
e82d40bbfb Fix some exclude/include path issues; misc setup improvements 2021-10-13 14:38:47 -04:00
22b663fa61 Rework how exclude/include pattern matching works a bit 2021-10-13 11:48:53 -04:00
863b7acc9b Add arg to specify borg program 2021-10-13 10:17:28 -04:00
0f56415493 Use actual file blocks rather than apparent size; doc updates 2021-10-11 23:30:53 -04:00
0039ca1ee0 Implement filesystem scanning with configurable filters 2021-10-11 16:50:08 -04:00
6978cfc012 Continue reworking towards local copy of borg, etc 2021-10-11 12:34:57 -04:00
883f984aef Restructure things; we will clone this repo directly on each client 2021-10-08 16:08:03 -04:00
2dd60aaf28 Add initial version of backup file lister 2021-08-19 11:55:19 -04:00
16 changed files with 1430 additions and 438 deletions

1
.gitea/README.md Symbolic link

@@ -0,0 +1 @@
../templates/README.md

9
.gitignore vendored Normal file

@@ -0,0 +1,9 @@
.venv
*.html
cache/
config/
key.txt
passphrase
ssh/
/README.md
/notify.sh


@@ -1,17 +1,57 @@
.PHONY: all
all: check
	@echo "Use 'make deploy' to copy to https://psy.jim.sh/borg-setup.sh"
all:
	@echo
	@echo "For initial setup, run"
	@echo "  sudo ./initial-setup.sh"
	@echo
	@echo "Or run borg commands with e.g.:"
	@echo "  ./borg.sh info"
	@echo "  ./borg.sh list"
	@echo

.PHONY: check
check:
	shellcheck -f gcc borg-setup.sh

.PHONY: ctrl
ctrl: test-backup

.PHONY: test
test:

.venv:
	mkdir .venv
	pipenv install --dev

.PHONY: test-backup
test-backup: .venv
	.venv/bin/mypy backup.py
	./backup.py -n

.PHONY: test-setup
test-setup:
	shellcheck -f gcc initial-setup.sh
	rm -rf /tmp/test-borg
	BORG_DIR=/tmp/test-borg ./borg-setup.sh
	ls -al /tmp/test-borg
	mkdir /tmp/test-borg
	git clone . /tmp/test-borg
	#: "normally this would be a git clone, but we want the working tree..."
	#git ls-files -z | tar --null -T - -cf - | tar -C /tmp/test-borg -xvf -
	/tmp/test-borg/initial-setup.sh

.PHONY: deploy
deploy:
	scp borg-setup.sh psy:/www/psy

# Pull master and rebase "setup-$HOSTNAME" branch onto it
.PHONY: rebase
rebase:
	git checkout master
	git pull
	git checkout -
	git rebase master
	./initial-setup.sh --update
	systemctl daemon-reload
	git status

# Show status of most recent backup run
.PHONY: status
status:
	systemctl status --full --lines 999999 --no-pager --all borg-backup || true

# Watch live log output
.PHONY: tail
tail:
	journalctl --all --follow --lines 200 --unit borg-backup

.PHONY: clean
clean:
	rm -f README.html

16
Pipfile Normal file

@@ -0,0 +1,16 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[packages]
humanfriendly = "*"
wcmatch = "*"
pyyaml = "*"

[dev-packages]
mypy = "*"
types-pyyaml = "*"

[requires]
python_version = "3"

159
Pipfile.lock generated Normal file

@@ -0,0 +1,159 @@
{
"_meta": {
"hash": {
"sha256": "902260ee06bc3bac3fe1ea87c09d4fc28e5aceef95635b3c72b43b6905050278"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"bracex": {
"hashes": [
"sha256:a27eaf1df42cf561fed58b7a8f3fdf129d1ea16a81e1fadd1d17989bc6384beb",
"sha256:efdc71eff95eaff5e0f8cfebe7d01adf2c8637c8c92edaf63ef348c241a82418"
],
"markers": "python_version >= '3.8'",
"version": "==2.4"
},
"humanfriendly": {
"hashes": [
"sha256:1697e1a8a8f550fd43c2865cd84542fc175a61dcb779b6fee18cf6b6ccba1477",
"sha256:6b0b831ce8f15f7300721aa49829fc4e83921a9a301cc7f606be6686a2288ddc"
],
"index": "pypi",
"version": "==10.0"
},
"pyyaml": {
"hashes": [
"sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5",
"sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc",
"sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df",
"sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741",
"sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206",
"sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27",
"sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595",
"sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62",
"sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98",
"sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696",
"sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290",
"sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9",
"sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d",
"sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6",
"sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867",
"sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47",
"sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486",
"sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6",
"sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3",
"sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007",
"sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938",
"sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0",
"sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c",
"sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735",
"sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d",
"sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28",
"sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4",
"sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba",
"sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8",
"sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5",
"sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd",
"sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3",
"sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0",
"sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515",
"sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c",
"sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c",
"sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924",
"sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34",
"sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43",
"sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859",
"sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673",
"sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54",
"sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a",
"sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b",
"sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab",
"sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa",
"sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c",
"sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585",
"sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d",
"sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"
],
"index": "pypi",
"version": "==6.0.1"
},
"wcmatch": {
"hashes": [
"sha256:14554e409b142edeefab901dc68ad570b30a72a8ab9a79106c5d5e9a6d241bd5",
"sha256:86c17572d0f75cbf3bcb1a18f3bf2f9e72b39a9c08c9b4a74e991e1882a8efb3"
],
"index": "pypi",
"version": "==8.5"
}
},
"develop": {
"mypy": {
"hashes": [
"sha256:19f905bcfd9e167159b3d63ecd8cb5e696151c3e59a1742e79bc3bcb540c42c7",
"sha256:21a1ad938fee7d2d96ca666c77b7c494c3c5bd88dff792220e1afbebb2925b5e",
"sha256:40b1844d2e8b232ed92e50a4bd11c48d2daa351f9deee6c194b83bf03e418b0c",
"sha256:41697773aa0bf53ff917aa077e2cde7aa50254f28750f9b88884acea38a16169",
"sha256:49ae115da099dcc0922a7a895c1eec82c1518109ea5c162ed50e3b3594c71208",
"sha256:4c46b51de523817a0045b150ed11b56f9fff55f12b9edd0f3ed35b15a2809de0",
"sha256:4cbe68ef919c28ea561165206a2dcb68591c50f3bcf777932323bc208d949cf1",
"sha256:4d01c00d09a0be62a4ca3f933e315455bde83f37f892ba4b08ce92f3cf44bcc1",
"sha256:59a0d7d24dfb26729e0a068639a6ce3500e31d6655df8557156c51c1cb874ce7",
"sha256:68351911e85145f582b5aa6cd9ad666c8958bcae897a1bfda8f4940472463c45",
"sha256:7274b0c57737bd3476d2229c6389b2ec9eefeb090bbaf77777e9d6b1b5a9d143",
"sha256:81af8adaa5e3099469e7623436881eff6b3b06db5ef75e6f5b6d4871263547e5",
"sha256:82e469518d3e9a321912955cc702d418773a2fd1e91c651280a1bda10622f02f",
"sha256:8b27958f8c76bed8edaa63da0739d76e4e9ad4ed325c814f9b3851425582a3cd",
"sha256:8c223fa57cb154c7eab5156856c231c3f5eace1e0bed9b32a24696b7ba3c3245",
"sha256:8f57e6b6927a49550da3d122f0cb983d400f843a8a82e65b3b380d3d7259468f",
"sha256:925cd6a3b7b55dfba252b7c4561892311c5358c6b5a601847015a1ad4eb7d332",
"sha256:a43ef1c8ddfdb9575691720b6352761f3f53d85f1b57d7745701041053deff30",
"sha256:a8032e00ce71c3ceb93eeba63963b864bf635a18f6c0c12da6c13c450eedb183",
"sha256:b96ae2c1279d1065413965c607712006205a9ac541895004a1e0d4f281f2ff9f",
"sha256:bb8ccb4724f7d8601938571bf3f24da0da791fe2db7be3d9e79849cb64e0ae85",
"sha256:bbaf4662e498c8c2e352da5f5bca5ab29d378895fa2d980630656178bd607c46",
"sha256:cfd13d47b29ed3bbaafaff7d8b21e90d827631afda134836962011acb5904b71",
"sha256:d4473c22cc296425bbbce7e9429588e76e05bc7342da359d6520b6427bf76660",
"sha256:d8fbb68711905f8912e5af474ca8b78d077447d8f3918997fecbf26943ff3cbb",
"sha256:e5012e5cc2ac628177eaac0e83d622b2dd499e28253d4107a08ecc59ede3fc2c",
"sha256:eb4f18589d196a4cbe5290b435d135dee96567e07c2b2d43b5c4621b6501531a"
],
"index": "pypi",
"version": "==1.6.1"
},
"mypy-extensions": {
"hashes": [
"sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d",
"sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"
],
"markers": "python_version >= '3.5'",
"version": "==1.0.0"
},
"types-pyyaml": {
"hashes": [
"sha256:334373d392fde0fdf95af5c3f1661885fa10c52167b14593eb856289e1855062",
"sha256:c05bc6c158facb0676674b7f11fe3960db4f389718e19e62bd2b84d6205cfd24"
],
"index": "pypi",
"version": "==6.0.12.12"
},
"typing-extensions": {
"hashes": [
"sha256:8f92fc8806f9a6b641eaa5318da32b44d401efaac0f6678c9bc448ba3605faa0",
"sha256:df8e4339e9cb77357558cbdbceca33c303714cf861d1eef15e1070055ae8b7ef"
],
"markers": "python_version >= '3.8'",
"version": "==4.8.0"
}
}
}


@@ -1,22 +0,0 @@
# Design

- On bucket, we have a separate user account "jim-backups".  Password
  for this account is in bitwarden.
- Repository keys are repokeys, with passphrases saved on clients
  and in bitwarden.
- Each client has two SSH keys: one for append-only operation (no
  pass) and one for read-write (password in bitwarden)
- Pruning requires the password and is a manual operation (run `sudo
  /opt/borg/prune.sh`)
- Systemd timers start daily backups

# Usage

Run on client:

    wget https://psy.jim.sh/borg-setup.sh
    sudo ./borg-setup.sh

513
backup.py Executable file

@@ -0,0 +1,513 @@
#!.venv/bin/python

# Scan filesystem to generate a list of files to back up, based on a
# configuration file.  Pass this list to borg to actually create the
# backup.  Execute a notification script on the remote server to
# report the backup status.

import os
import re
import sys
import json
import stat
import time
import select
import pathlib
import threading
import subprocess
import _thread  # for interrupt_main
import typing

import yaml
import wcmatch.glob  # type: ignore
import humanfriendly  # type: ignore


def b2s(raw: bytes) -> str:
    return raw.decode(errors='backslashreplace')


def format_size(n: int) -> str:
    return humanfriendly.format_size(n, keep_width=True, binary=True)


# Type corresponding to patterns that are generated by
# wcmatch.translate: two lists of compiled REs (a,b).  A path matches
# if it matches at least one regex in "a" and none in "b".
MatchPatterns = typing.Tuple[typing.List[re.Pattern], typing.List[re.Pattern]]
class Config:
    roots: typing.List[bytes]
    one_file_system: bool
    exclude_caches: bool
    exclude: MatchPatterns
    unexclude: MatchPatterns
    max_size_rules: typing.List[typing.Tuple[int, MatchPatterns]]
    notify_email: typing.Optional[str]

    def __init__(self, configfile: str):
        # Helper to process lists of patterns into regexes
        def process_match_list(config_entry):
            raw = config_entry.encode().split(b'\n')
            pats = []
            # Prepend '**/' to any relative patterns
            for x in raw:
                if not len(x):
                    continue
                if x.startswith(b'/'):
                    pats.append(x)
                else:
                    pats.append(b'**/' + x)
            # Compile patterns.
            (a, b) = wcmatch.glob.translate(
                pats, flags=(wcmatch.glob.GLOBSTAR |
                             wcmatch.glob.DOTGLOB |
                             wcmatch.glob.NODOTDIR |
                             wcmatch.glob.EXTGLOB |
                             wcmatch.glob.BRACE))
            return ([ re.compile(x) for x in a ],
                    [ re.compile(x) for x in b ])

        # Read config
        with open(configfile, 'r') as f:
            config = yaml.safe_load(f)
        self.one_file_system = config.get('one-file-system', False)
        self.exclude_caches = config.get('exclude-caches', False)

        raw = config.get('roots', '').encode().split(b'\n')
        self.roots = []
        for x in raw:
            if not len(x):
                continue
            self.roots.append(x)
        self.roots.sort(key=len)

        self.exclude = process_match_list(config.get('exclude', ''))
        self.unexclude = process_match_list(config.get('unexclude', ''))

        self.max_size_rules = []
        rules = { humanfriendly.parse_size(k): v
                  for k, v in config.get('max-size-rules', {}).items() }
        for size in reversed(sorted(rules)):
            self.max_size_rules.append(
                (size, process_match_list(rules[size])))

        self.notify_email = config.get('notify-email', None)

    def match_re(self, r: MatchPatterns, path: bytes):
        # Path matches if it matches at least one regex in
        # r[0] and no regex in r[1].
        for a in r[0]:
            if a.match(path):
                for b in r[1]:
                    if b.match(path):
                        return False
                return True
        return False
class Backup:
    def __init__(self, config: Config, dry_run: bool):
        self.config = config
        self.dry_run = dry_run
        self.root_seen: typing.Dict[bytes, bool] = {}

        # Saved log messages (which includes borg output)
        self.logs: typing.List[typing.Tuple[str, str]] = []

    def out(self, path: bytes):
        self.outfile.write(path + (b'\n' if self.dry_run else b'\0'))

    def log(self, letter: str, msg: str, bold: bool=False):
        colors = {
            'E': 31,  # red: error
            'W': 33,  # yellow: warning
            'N': 34,  # blue: notice, a weaker warning (no email generated)
            'I': 36,  # cyan: info, backup.py script output
            'O': 37,  # white: regular output from borg
        }
        c = colors[letter] if letter in colors else 0
        b = "" if bold else "\033[22m"
        sys.stdout.write(f"\033[1;{c}m{letter}:{b} {msg}\033[0m\n")
        sys.stdout.flush()
        self.logs.append((letter, msg))
    def run(self, outfile: typing.IO[bytes]):
        self.outfile = outfile
        for root in self.config.roots:
            if root in self.root_seen:
                self.log('I', f"ignoring root, already seen: {b2s(root)}")
                continue
            try:
                st = os.lstat(root)
                if not stat.S_ISDIR(st.st_mode):
                    raise NotADirectoryError
            except FileNotFoundError:
                self.log('E', f"root does not exist: {b2s(root)}")
                continue
            except NotADirectoryError:
                self.log('E', f"root is not a directory: {b2s(root)}")
                continue
            self.log('I', f"processing root {b2s(root)}")
            self.scan(root)
    def scan(self, path: bytes, parent_st: os.stat_result=None):
        """If the given path should be backed up, print it.  If it's
        a directory and its contents should be included, recurse.
        """
        try:
            st = os.lstat(path)
            is_dir = stat.S_ISDIR(st.st_mode)
            is_reg = stat.S_ISREG(st.st_mode)
            size = st.st_blocks * 512

            # Decorated path ends with a '/' if it's a directory.
            decorated_path = path
            if is_dir and not decorated_path.endswith(b'/'):
                decorated_path += b'/'

            # See if there's a reason to exclude it
            exclude_reason = None
            if self.config.match_re(self.config.exclude, decorated_path):
                # Config file says to exclude
                exclude_reason = ('I', f"skipping, excluded by config file")
            elif (self.config.one_file_system
                  and parent_st is not None
                  and is_dir
                  and st.st_dev != parent_st.st_dev):
                # Crosses a mount point
                exclude_reason = ('I', "skipping, on different filesystem")
            elif (is_reg
                  and len(self.config.max_size_rules)
                  and size > self.config.max_size_rules[-1][0]):
                # Check file sizes against our list.
                # Only need to check if the size is bigger than the smallest
                # entry on the list; then, we need to check it against all
                # rules to see which one applies.
                for (max_size, patterns) in self.config.max_size_rules:
                    if self.config.match_re(patterns, decorated_path):
                        if size > max_size:
                            a = format_size(size)
                            b = format_size(max_size)
                            exclude_reason = (
                                'W', f"file size {a} exceeds limit {b}")
                        break

            # If we have a reason to exclude it, stop now unless it's
            # force-included
            force = self.config.match_re(self.config.unexclude, decorated_path)
            if exclude_reason and not force:
                self.log(exclude_reason[0],
                         f"{exclude_reason[1]}: {b2s(path)}")
                return

            # Print path for Borg
            self.out(path)

            # Process directories
            if is_dir:
                if path in self.config.roots:
                    self.root_seen[path] = True
                if decorated_path in self.config.roots:
                    self.root_seen[decorated_path] = True

                # Skip if it contains CACHEDIR.TAG
                # (mirroring the --exclude-caches borg option)
                if self.config.exclude_caches:
                    try:
                        tag = b'Signature: 8a477f597d28d172789f06886806bc55'
                        with open(path + b'/CACHEDIR.TAG', 'rb') as f:
                            if f.read(len(tag)) == tag:
                                self.log(
                                    'I', f"skipping, cache dir: {b2s(path)}")
                                return
                    except:
                        pass

                # Recurse
                with os.scandir(path) as it:
                    for entry in it:
                        self.scan(path=entry.path, parent_st=st)

        except (FileNotFoundError,
                IsADirectoryError,
                NotADirectoryError,
                PermissionError) as e:
            self.log('E', f"can't read {b2s(path)}: {str(e)}")
            return
    def run_borg(self, argv: typing.List[str],
                 stdin_writer: typing.Callable[[typing.IO[bytes]],
                                               typing.Any]=None):
        """Run a borg command, capturing and displaying output, while feeding
        input using stdin_writer.  Returns True on Borg success, False on
        error.
        """
        borg = subprocess.Popen(argv,
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        if borg.stdin is None:
            raise Exception("no pipe")

        # Count warnings and errors from Borg, so we can interpret its
        # error codes correctly (e.g. ignoring exit codes if warnings
        # were all harmless).
        borg_saw_warnings = 0
        borg_saw_errors = 0

        # Use a thread to capture output
        def reader_thread(fh):
            nonlocal borg_saw_warnings
            nonlocal borg_saw_errors
            last_progress = 0
            for line in fh:
                try:
                    data = json.loads(line)
                    if data['type'] == 'log_message':
                        changed_msg = "file changed while we backed it up"
                        if data['levelname'] == 'WARNING':
                            if changed_msg in data['message']:
                                # harmless; don't count as a Borg warning
                                outlevel = 'N'
                            else:
                                borg_saw_warnings += 1
                                outlevel = 'W'
                            output = "warning: "
                        elif data['levelname'] not in ('DEBUG', 'INFO'):
                            borg_saw_errors += 1
                            outlevel = 'E'
                            output = "error: "
                        else:
                            outlevel = 'O'
                            output = ""
                        output += data['message']
                    elif (data['type'] == 'progress_message'
                          and 'message' in data):
                        outlevel = 'O'
                        output = data['message']
                    elif data['type'] == 'archive_progress':
                        now = time.time()
                        if now - last_progress > 10:
                            last_progress = now
                            def size(short: str, full: str) -> str:
                                return f" {short}={format_size(data[full])}"
                            outlevel = 'O'
                            output = (f"progress:" +
                                      f" files={data['nfiles']}" +
                                      size('orig', 'original_size') +
                                      size('comp', 'compressed_size') +
                                      size('dedup', 'deduplicated_size'))
                        else:
                            continue
                    else:
                        # ignore unknown progress line
                        continue
                except Exception as e:
                    # on error, print raw line with exception
                    outlevel = 'E'
                    output = f"[exception: {str(e)}] " + b2s(line).rstrip()
                self.log(outlevel, output)
            fh.close()

        def _reader_thread(fh):
            try:
                return reader_thread(fh)
            except BrokenPipeError:
                pass
            except Exception:
                _thread.interrupt_main()

        reader = threading.Thread(target=_reader_thread, args=(borg.stdout,))
        reader.daemon = True
        reader.start()

        try:
            if stdin_writer:
                # Give borg some time to start, just to clean up stdout
                time.sleep(1)
                stdin_writer(borg.stdin)
        except BrokenPipeError:
            self.log('E', "<broken pipe>")
        finally:
            try:
                borg.stdin.close()
            except BrokenPipeError:
                pass
        borg.wait()
        reader.join()

        ret = borg.returncode
        if ret < 0:
            self.log('E', f"borg exited with signal {-ret}")
        elif ret == 2 or borg_saw_errors:
            self.log('E', f"borg exited with errors (ret={ret})")
        elif ret == 1:
            if borg_saw_warnings:
                self.log('W', f"borg exited with warnings (ret={ret})")
            else:
                return True
        elif ret != 0:
            self.log('E', f"borg exited with unknown error code {ret}")
        else:
            return True
        return False
def main(argv: typing.List[str]):
import argparse
def humansize(string):
return humanfriendly.parse_size(string)
# Parse args
parser = argparse.ArgumentParser(
prog=argv[0],
description="Back up the local system using borg",
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
base = pathlib.Path(__file__).parent
parser.add_argument('-c', '--config',
help="Config file", default=str(base / "config.yaml"))
parser.add_argument('-v', '--vars',
help="Variables file", default=str(base / "vars.sh"))
parser.add_argument('-n', '--dry-run', action="store_true",
help="Just print log output, don't run borg")
parser.add_argument('-d', '--debug', action="store_true",
help="Print filenames for --dry-run")
args = parser.parse_args()
config = Config(args.config)
backup = Backup(config, args.dry_run)
# Parse variables from vars.sh
hostname = os.uname().nodename
borg_sh = str(base / "borg.sh")
notify_sh = str(base / "notify.sh")
try:
with open(args.vars) as f:
for line in f:
m = re.match(r"\s*export\s*([A-Z_]+)=(.*)", line)
if not m:
continue
var = m.group(1)
value = m.group(2)
if var == "HOSTNAME":
hostname = value
if var == "BORG":
borg_sh = value
if var == "BORG_DIR":
notify_sh = str(pathlib.Path(value) / "notify.sh")
except Exception as e:
backup.log('W', f"failed to parse variables from {args.vars}: {str(e)}")
# Run backup
if args.dry_run:
if args.debug:
backup.run(sys.stdout.buffer)
else:
with open(os.devnull, "wb") as out:
backup.run(out)
sys.stdout.flush()
else:
if backup.run_borg([borg_sh,
"create",
"--verbose",
"--progress",
"--log-json",
"--list",
"--filter", "E",
"--stats",
"--checkpoint-interval", "900",
"--compression", "zstd,3",
"--paths-from-stdin",
"--paths-delimiter", "\\0",
"::" + hostname + "-{now:%Y%m%d-%H%M%S}"],
stdin_writer=backup.run):
# backup success; run prune. Note that this won't actually free
# space until a "./borg.sh --rw compact", because we're in
# append-only mode.
backup.log('I', f"pruning archives", bold=True)
backup.run_borg([borg_sh,
"prune",
"--verbose",
"--list",
"--progress",
"--log-json",
"--stats",
"--keep-within=7d",
"--keep-daily=14",
"--keep-weekly=8",
"--keep-monthly=-1",
"--glob-archives", hostname + "-????????-??????"])
# See if we had any errors
warnings = sum(1 for (letter, msg) in backup.logs if letter == 'W')
errors = sum(1 for (letter, msg) in backup.logs if letter == 'E')
def plural(num: int, word: str) -> str:
suffix = "" if num == 1 else "s"
return f"{num} {word}{suffix}"
warnmsg = plural(warnings, "warning") if warnings else None
errmsg = plural(errors, "error") if errors else None
if not warnings and not errors:
backup.log('I', "backup successful", bold=True)
else:
if warnmsg:
backup.log('W', f"reported {warnmsg}", bold=True)
if errors:
backup.log('E', f"reported {errmsg}", bold=True)
# Send a notification of errors
email = backup.config.notify_email
if email and not args.dry_run:
backup.log('I', f"sending error notification to {email}")
def write_logs(title, only_include=None):
body = [ title ]
for (letter, msg) in backup.logs:
if only_include and letter not in only_include:
continue
# Use a ":" prefix for warnings/errors/notices so that
# the mail reader highlights them.
if letter in "EWN":
prefix = ":"
else:
prefix = " "
body.append(f"{prefix}{letter}: {msg}")
return "\n".join(body).encode()
body_text = write_logs("Logged errors and warnings:", "EWN")
body_text += b"\n\n"
body_text += write_logs("All log messages:")
# Subject summary
if errmsg and warnmsg:
summary = f"{errmsg}, {warnmsg}"
elif errors:
summary = errmsg or ""
else:
summary = warnmsg or ""
# Call notify.sh
res = subprocess.run([notify_sh, summary, email], input=body_text)
if res.returncode != 0:
backup.log('E', "failed to send notification")
errors += 1
# Exit with an error code if we had any errors
if errors:
return 1
return 0
if __name__ == "__main__":
import sys
raise SystemExit(main(sys.argv))
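The vars.sh parsing loop above tolerates a missing or malformed file by falling back to defaults; the regex itself can be exercised standalone (the file content below is a made-up example, not a generated vars.sh):

```python
import re

# Hypothetical vars.sh content for illustration; real files are written
# by initial-setup.sh's create_borg_vars().
vars_sh = """\
export HOSTNAME=myhost
export BORG=/opt/borg/borg.sh
# non-export lines are ignored
export BORG_DIR=/opt/borg
"""

parsed = {}
for line in vars_sh.splitlines():
    m = re.match(r"\s*export\s+([A-Z_]+)=(.*)", line)
    if m:
        parsed[m.group(1)] = m.group(2)

print(parsed["BORG_DIR"])  # /opt/borg
```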

bin/borg.armv7l Executable file

Binary file not shown.

bin/borg.x86_64 Executable file

Binary file not shown.


@@ -1,404 +0,0 @@
#!/bin/bash
BORG_DIR=${BORG_DIR:-/opt/borg}
BACKUP_HOST=${BACKUP_HOST:-backup.jim.sh}
BACKUP_USER=${BACKUP_USER:-jim-backups}
BACKUP_REPO=${BACKUP_REPO:-borg/$(hostname)}
# Use stable host ID in case MAC address changes
HOSTID="$(hostname -f)@$(python -c 'import uuid;print(uuid.getnode())')"
function error_handler() {
echo "Error at $1 line $2:"
echo -n '>>> ' ; tail -n +"$2" < "$1" | head -1
echo "... exited with code $3"
exit "$3"
}
trap 'error_handler ${BASH_SOURCE} ${LINENO} $?' ERR
set -o errexit
set -o errtrace
if [ -e "$BORG_DIR" ]; then
echo "Error: BORG_DIR $BORG_DIR already exists; giving up"
exit 1
fi
# Make a temp dir to work in
mkdir "$BORG_DIR"
TMP=$(mktemp -d --tmpdir="$BORG_DIR")
# Install some cleanup handlers
cleanup()
{
set +o errexit
set +o errtrace
trap - ERR
ssh -o ControlPath="$TMP"/ssh-control -O exit x >/dev/null 2>&1
rm -rf -- "$TMP"
}
cleanup_int()
{
echo
cleanup
exit 1
}
trap cleanup 0
trap cleanup_int 1 2 15
msg()
{
color="$1"
shift
echo -ne "\033[1;${color}m===\033[0;${color}m" "$@"
echo -e "\033[0m"
}
log(){ msg 33 "$@" ; }
notice() { msg 32 "$@" ; }
warn() { msg 31 "$@" ; }
error() { msg 31 "Error:" "$@" ; exit 1 ; }
# Install required packages
install_dependencies()
{
NEED=
check() {
command -v "$1" >/dev/null || NEED+=" $2"
}
check borg borgbackup
if [ -n "${NEED:+x}" ]; then
log "Need to install packages: $NEED"
apt install --no-upgrade $NEED
fi
}
# Create wrapper to execute borg
create_borg_wrapper()
{
BORG=${BORG_DIR}/borg.sh
BORG_REPO="ssh://${BACKUP_USER}@${BACKUP_HOST}/./${BACKUP_REPO}"
SSH=$BORG_DIR/ssh
cat >"$BORG" <<EOF
#!/bin/sh
export BORG_REPO=${BORG_REPO}
export BORG_PASSCOMMAND="cat ${BORG_DIR}/passphrase"
export BORG_HOST_ID=${HOSTID}
export BORG_BASE_DIR=${BORG_DIR}
export BORG_CACHE_DIR=${BORG_DIR}/cache
export BORG_CONFIG_DIR=${BORG_DIR}/config
export BORG_RSH="ssh -F $SSH/config -i $SSH/id_ecdsa_appendonly"
exec borg "\$@"
EOF
chmod +x "$BORG"
if ! "$BORG" -h >/dev/null ; then
error "Can't run the new borg wrapper; does borg work?"
fi
}
print_random_key()
{
dd if=/dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 16
}
generate_keys()
{
PASS_SSH=$(print_random_key)
PASS_REPOKEY=$(print_random_key)
echo "$PASS_REPOKEY" > "${BORG_DIR}/passphrase"
chmod 600 "${BORG_DIR}/passphrase"
}
# Run a command on the remote host over an existing SSH tunnel
run_ssh_command()
{
ssh -o ControlPath="$TMP"/ssh-control use-existing-control-tunnel "$@"
}
# Configure SSH key-based login
configure_ssh()
{
mkdir "$SSH"
# Create keys
log "Creating SSH keys"
ssh-keygen -N "" -t ecdsa \
-C "backup-appendonly@$HOSTID" -f "$SSH/id_ecdsa_appendonly"
ssh-keygen -N "$PASS_SSH" -t ecdsa \
-C "backup@$HOSTID" -f "$SSH/id_ecdsa"
# Create config snippets
log "Creating SSH config and wrapper script"
cat >> "$SSH/config" <<EOF
User $BACKUP_USER
ControlPath none
ServerAliveInterval 120
Compression no
UserKnownHostsFile $SSH/known_hosts
ForwardX11 no
ForwardAgent no
BatchMode yes
IdentitiesOnly yes
EOF
# Connect to backup host, using persistent control socket
log "Connecting to server"
log "Please enter password; look in Bitwarden for: ${BACKUP_USER}@${BACKUP_HOST}"
ssh -F "$SSH/config" -o BatchMode=no -o PubkeyAuthentication=no \
-o ControlMaster=yes -o ControlPath="$TMP/ssh-control" \
-o StrictHostKeyChecking=accept-new \
-f "${BACKUP_USER}@${BACKUP_HOST}" sleep 600
if ! run_ssh_command true >/dev/null 2>&1 </dev/null ; then
error "SSH failed"
fi
log "Connected to ${BACKUP_USER}@${BACKUP_HOST}"
# Since we now have an SSH connection, check that the repo doesn't exist
if run_ssh_command "test -e $BACKUP_REPO" ; then
error "$BACKUP_REPO already exists on the server, bailing out"
fi
# Copy SSH keys to the server's authorized_keys file, removing any
# existing keys with this HOSTID.
log "Setting up SSH keys on remote host"
cmd="borg serve --restrict-to-repository ~/$BACKUP_REPO"
keys=".ssh/authorized_keys"
backup="${keys}.old-$(date +%Y%m%d-%H%M%S)"
run_ssh_command "mkdir -p .ssh; chmod 700 .ssh; touch $keys"
run_ssh_command "mv $keys $backup; sed '/@$HOSTID\$/d' < $backup > $keys"
run_ssh_command "if cmp -s $backup $keys; then rm $backup ; fi"
run_ssh_command "cat >> .ssh/authorized_keys" <<EOF
command="$cmd --append-only",restrict $(cat "$SSH/id_ecdsa_appendonly.pub")
command="$cmd",restrict $(cat "$SSH/id_ecdsa.pub")
EOF
# Test that everything worked
log "Testing SSH login with new key"
if ! ssh -F "$SSH/config" -i "$SSH/id_ecdsa_appendonly" -T \
"${BACKUP_USER}@${BACKUP_HOST}" borg --version </dev/null ; then
error "Logging in with a key failed -- is server set up correctly?"
fi
log "Remote connection OK!"
}
# Create the repository on the server
create_repo()
{
log "Creating repo $BACKUP_REPO"
# Create repo
$BORG init --make-parent-dirs --encryption repokey
}
# Export keys as HTML page
export_keys()
{
log "Exporting keys"
$BORG key export --paper '' "${BORG_DIR}/key.txt"
chmod 600 "${BORG_DIR}/key.txt"
cat >>"${BORG_DIR}/key.txt" <<EOF
Repository: ${BORG_REPO}
Passphrase: ${PASS_REPOKEY}
EOF
}
# Create helper scripts to backup, prune, and mount
create_scripts()
{
cat > "${BORG_DIR}/backup.sh" <<EOF
#!/bin/bash
BORG=$BORG_DIR/borg.sh
set -e
# Explicitly list a bunch of directories to back up, in case they come
# from different filesystems. If not, duplicates have no effect.
DIRS="/"
for DIR in /usr /var /home /boot /efi ; do
if [ -e "\$DIR" ] ; then
DIRS="\$DIRS \$DIR"
fi
done
# Allow dirs to be overridden
BORG_BACKUP_DIRS=\${BORG_BACKUP_DIRS:-\$DIRS}
echo "Backing up: \$BORG_BACKUP_DIRS"
\$BORG create \\
--verbose \\
--list \\
--filter E \\
--stats \\
--exclude-caches \\
--one-file-system \\
--checkpoint-interval 900 \\
--compression zstd,3 \\
::'{hostname}-{now:%Y%m%d-%H%M%S}' \\
\$BORG_BACKUP_DIRS
\$BORG check \\
--verbose \\
--last 10
EOF
cat > "${BORG_DIR}/prune.sh" <<EOF
#!/bin/bash
BORG=$BORG_DIR/borg.sh
set -e
echo "=== Need SSH key passphrase. Check Bitwarden for:"
echo "=== borg $(hostname) / read-write SSH key"
\$BORG prune \\
--rsh="ssh -F $SSH/config -o BatchMode=no -i $SSH/id_ecdsa" \\
--verbose \\
--stats \\
--keep-within=7d \\
--keep-daily=14 \\
--keep-weekly=8 \\
--keep-monthly=-1
EOF
chmod 755 "${BORG_DIR}/backup.sh"
chmod 755 "${BORG_DIR}/prune.sh"
}
configure_systemd()
{
TIMER=borg-backup.timer
SERVICE=borg-backup.service
TIMER_UNIT=${BORG_DIR}/${TIMER}
SERVICE_UNIT=${BORG_DIR}/${SERVICE}
log "Creating systemd files"
cat > "$TIMER_UNIT" <<EOF
[Unit]
Description=Borg backup to ${BACKUP_HOST}
[Timer]
OnCalendar=*-*-* 01:00:00
RandomizedDelaySec=1800
FixedRandomDelay=true
Persistent=true
[Install]
WantedBy=timers.target
EOF
cat >> "$SERVICE_UNIT" <<EOF
[Unit]
Description=Borg backup to ${BACKUP_HOST}
[Service]
Type=simple
ExecStart=${BORG_DIR}/backup.sh
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=6
EOF
log "Setting up systemd"
if (
ln -sfv "${TIMER_UNIT}" /etc/systemd/system &&
ln -sfv "${SERVICE_UNIT}" /etc/systemd/system &&
systemctl --no-ask-password daemon-reload &&
systemctl --no-ask-password enable ${TIMER} &&
systemctl --no-ask-password start ${TIMER}
); then
log "Backup timer installed:"
systemctl list-timers ${TIMER}
else
warn ""
warn "Systemd setup failed"
warn "Do something like this to configure automatic backups:"
echo " sudo ln -sfv \"${TIMER_UNIT}\" /etc/systemd/system &&"
echo " sudo ln -sfv \"${SERVICE_UNIT}\" /etc/systemd/system &&"
echo " sudo systemctl daemon-reload &&"
echo " sudo systemctl enable ${TIMER} &&"
echo " sudo systemctl start ${TIMER}"
warn ""
fi
}
make_readme()
{
cat > "${BORG_DIR}/README" <<EOF
Backup Configuration
--------------------
Hostname: $(hostname)
Destination: ${BACKUP_USER}@${BACKUP_HOST}
Repository: ${BACKUP_REPO}
Cheat sheet
-----------
See when next backup is scheduled:
systemctl list-timers borg-backup.timer
See progress of most recent backup:
systemctl status -l -n 99999 borg-backup
Start backup now:
sudo systemctl start borg-backup
Interrupt backup in progress:
sudo systemctl stop borg-backup
Show backups and related info:
sudo ${BORG_DIR}/borg.sh info
sudo ${BORG_DIR}/borg.sh list
Mount and look at files:
mkdir mnt
sudo ${BORG_DIR}/borg.sh mount :: mnt
sudo -s # to explore as root
sudo umount mnt
Prune old backups. Only run this if you are sure the local system was
never compromised, since deletions could have been queued during
append-only operation. Requires the SSH key password from Bitwarden.
sudo ${BORG_DIR}/prune.sh
EOF
}
log "Configuration:"
log " Backup server host: ${BACKUP_HOST}"
log " Backup server user: ${BACKUP_USER}"
log " Repository path: ${BACKUP_REPO}"
install_dependencies
create_borg_wrapper
generate_keys
configure_ssh
create_repo
export_keys
create_scripts
configure_systemd
make_readme
echo
notice "Add these two passwords to Bitwarden:"
notice ""
notice " Name: borg $(hostname)"
notice " Username: repo key"
notice " Password: $PASS_REPOKEY"
notice ""
notice " Name: borg $(hostname)"
notice " Username: read-write ssh key"
notice " Password: $PASS_SSH"
notice ""
notice "You should also print out the full repo key: ${BORG_DIR}/key.txt"
echo
echo "All done"

borg.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
set -e
. "$(dirname "$0")"/vars.sh
export BORG_PASSCOMMAND="cat ${BORG_DIR}/passphrase"
export BORG_BASE_DIR=${BORG_DIR}
export BORG_CACHE_DIR=${BORG_DIR}/cache
export BORG_CONFIG_DIR=${BORG_DIR}/config
export BORG_RSH="ssh -F $SSH/config -i $SSH/id_ecdsa_appendonly"
exec "${BORG_BIN}" "$@"

config.yaml Normal file

@@ -0,0 +1,42 @@
# List multiple roots, in case they come from different file systems.
# Any paths already included by another root will be excluded, so it's
# OK if these paths actually live on the same filesystem.
roots: |
/
/boot
/usr
/var
one-file-system: true
exclude-caches: true
# Files/dirs to exclude from backup.
# Relative paths are treated as if starting with **/
# Paths ending in / will only match directories.
exclude: |
/var/tmp/
/tmp/
/var/cache/apt/archives/
Steam/steamapps/
Steam/ubuntu*/
.cache/
# Rules to exclude files based on file size.
# This is a dict of sizes, each with a list of rules.
# For a given path, the largest size with a matching rule applies.
# Matching follows the same behavior as the "exclude" list.
# Size is calculated as used blocks (think "du", not "du --apparent-size").
max-size-rules:
500 MiB: |
*
# 1.0 GiB: |
# *.mp4
# Files that are always included, even if they would have been
# excluded due to file size or the "exclude" list.
# Matching follows the same behavior as the "exclude" list.
unexclude: |
.git/objects/pack/*.pack
# Email address for notification at end of backup
notify-email: jim@jim.sh
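The max-size-rules semantics described above ("the largest size with a matching rule applies") can be sketched in Python; `matches()` and the rule layout here are illustrative assumptions, not the actual backup.py implementation:

```python
from fnmatch import fnmatch

def matches(path, pattern):
    # Relative patterns behave as if prefixed with **/ (per the config
    # comments); this helper is a simplified stand-in.
    if not pattern.startswith("/"):
        pattern = "*/" + pattern
    return fnmatch(path, pattern) or fnmatch(path, pattern + "/*")

def size_limit(path, rules):
    # The largest size whose rule list matches the path wins.
    limit = None
    for size, patterns in sorted(rules.items()):
        if any(matches(path, p) for p in patterns):
            limit = size
    return limit

rules = {
    500 * 2**20: ["*"],       # 500 MiB catch-all
    2**30: ["*.mp4"],         # 1.0 GiB for videos, as in the commented example
}
print(size_limit("/home/u/data.bin", rules))   # 524288000
print(size_limit("/home/u/movie.mp4", rules))  # 1073741824
```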

initial-setup.sh Executable file

@@ -0,0 +1,451 @@
#!/bin/bash
# These can be overridden when running this script
set_default_variables()
{
HOSTNAME=${HOSTNAME:-$(hostname)}
BACKUP_HOST=${BACKUP_HOST:-backup.jim.sh}
BACKUP_PORT=${BACKUP_PORT:-222}
BACKUP_USER=${BACKUP_USER:-jim-backups}
BACKUP_REPO=${BACKUP_REPO:-borg/${HOSTNAME}}
SYSTEMD_UNIT=${SYSTEMD_UNIT:-borg-backup}
# Use stable host ID in case MAC address changes.
# Note that this host ID is only used to manage locks, so it's
# not crucial that it remains stable.
UUID=$(python3 -c 'import uuid;print(uuid.getnode())')
HOSTID=${BORG_HOST_ID:-"${HOSTNAME}@${UUID}"}
log "Configuration:"
log " HOSTNAME Local hostname: \033[1m${HOSTNAME}"
log " HOSTID Local host ID: \033[1m${HOSTID}"
log " BACKUP_USER Backup server user: \033[1m${BACKUP_USER}"
log " BACKUP_HOST Backup server host: \033[1m${BACKUP_HOST}"
log " BACKUP_PORT Backup server port: \033[1m${BACKUP_PORT}"
log " BACKUP_REPO Server repository: \033[1m${BACKUP_REPO}"
log " SYSTEMD_UNIT Systemd unit name: \033[1m${SYSTEMD_UNIT}"
for i in $(seq 15 -1 1); do
printf '\rPress ENTER or wait %d seconds to continue... \b' "$i"
if read -t 1 ; then break; fi
done
}
# Main dir is where this repo was checked out
BORG_DIR="$(realpath "$(dirname "$0")")"
cd "${BORG_DIR}"
BORG_BIN="${BORG_DIR}/bin/borg.$(uname -m)"
function error_handler() {
echo "Error at $1 line $2:"
echo -n '>>> ' ; tail -n +"$2" < "$1" | head -1
echo "... exited with code $3"
exit "$3"
}
trap 'error_handler ${BASH_SOURCE} ${LINENO} $?' ERR
set -o errexit
set -o errtrace
# Create pip environment
setup_venv()
{
if ! which pipenv >/dev/null 2>&1 ; then
error "pipenv not found, try: sudo apt install pipenv"
fi
mkdir -p .venv
pipenv install
}
# Create shell script with environment variables
create_borg_vars()
{
VARS=${BORG_DIR}/vars.sh
# These variables are used elsewhere in this script
BORG_REPO="ssh://${BACKUP_USER}@${BACKUP_HOST}:${BACKUP_PORT}/./${BACKUP_REPO}"
BORG=${BORG_DIR}/borg.sh
SSH=$BORG_DIR/ssh
cat >"$VARS" <<EOF
export BACKUP_USER=${BACKUP_USER}
export BACKUP_HOST=${BACKUP_HOST}
export BACKUP_PORT=${BACKUP_PORT}
export BACKUP_REPO=${BACKUP_REPO}
export HOSTNAME=${HOSTNAME}
export BORG_REPO=${BORG_REPO}
export BORG_HOST_ID=${HOSTID}
export BORG_PASSCOMMAND="cat ${BORG_DIR}/passphrase"
export BORG_DIR=${BORG_DIR}
export SSH=${SSH}
export BORG=${BORG}
export BORG_BIN=${BORG_BIN}
export SYSTEMD_UNIT=${SYSTEMD_UNIT}
EOF
if ! "$BORG" -h >/dev/null ; then
error "Can't run the borg wrapper; does borg work?"
fi
}
# Copy templated files, filling in templates as needed
install_templated_files()
{
DOCS="README.md"
SCRIPTS="notify.sh logs.sh"
for i in ${DOCS} ${SCRIPTS}; do
sed -e "s!\${HOSTNAME}!${HOSTNAME}!g" \
-e "s!\${BORG_DIR}!${BORG_DIR}!g" \
-e "s!\${BORG_BIN}!${BORG_BIN}!g" \
-e "s!\${BACKUP_USER}!${BACKUP_USER}!g" \
-e "s!\${BACKUP_HOST}!${BACKUP_HOST}!g" \
-e "s!\${BACKUP_PORT}!${BACKUP_PORT}!g" \
-e "s!\${BACKUP_REPO}!${BACKUP_REPO}!g" \
-e "s!\${SYSTEMD_UNIT}!${SYSTEMD_UNIT}!g" \
templates/$i > $i
done
chmod +x ${SCRIPTS}
}
# Update local paths in scripts
update_paths()
{
sed -i\
-e "1c#!${BORG_DIR}/.venv/bin/python" \
backup.py
}
# See if we're just supposed to update an existing install, or recovering
parse_args()
{
RECOVER=0
UPDATE=0
if [ "$1" == "--recover" ] ; then
if [ -e "vars.sh" ]; then
error "It looks like this borg dir was already set up; can only recover from a fresh start"
fi
RECOVER=1
elif [ "$1" == "--update-paths" ] || [ "$1" == "--update" ] ; then
if [ ! -e "vars.sh" ]; then
error "Can't update, not set up yet"
fi
UPDATE=1
elif [ -n "$1" ] ; then
error "Unknown arg $1"
elif [ -e "vars.sh" ]; then
warn "Error: BORG_DIR $BORG_DIR already looks set up."
warn "Use \"git clean\" to return it to original state if desired."
warn "Or specify --update to refresh things from latest git."
error "Giving up"
fi
}
# Make a temp dir to work in
TMP=$(mktemp -d)
# Install some cleanup handlers
cleanup()
{
set +o errexit
set +o errtrace
trap - ERR
ssh -o ControlPath="$TMP"/ssh-control -O exit x >/dev/null 2>&1
rm -rf -- "$TMP"
}
cleanup_int()
{
echo
cleanup
exit 1
}
trap cleanup 0
trap cleanup_int 1 2 15
msg()
{
color="$1"
shift
echo -ne "\033[1;${color}m===\033[0;${color}m" "$@"
echo -e "\033[0m"
}
log(){ msg 33 "$@" ; }
notice() { msg 32 "$@" ; }
warn() { msg 31 "$@" ; }
error() { msg 31 "Error:" "$@" ; exit 1 ; }
print_random_key()
{
dd if=/dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 16
}
generate_keys()
{
if [ $RECOVER -eq 1 ] ; then
notice "Recovering configuration in order to use an existing backup"
read -s -p "Repo key for \"borg ${HOSTNAME}\": " PASS_REPOKEY
echo
read -s -p "Again: " PASS_REPOKEY2
echo
if [ -z "$PASS_REPOKEY" ] || [ "$PASS_REPOKEY" != "$PASS_REPOKEY2" ] ; then
error "Bad repo key"
fi
else
PASS_REPOKEY=$(print_random_key)
fi
echo "$PASS_REPOKEY" > passphrase
chmod 600 passphrase
}
# Run a command on the remote host over an existing SSH tunnel
run_ssh_command()
{
ssh -o ControlPath="$TMP"/ssh-control use-existing-control-tunnel "$@"
}
# Configure SSH key-based login
configure_ssh()
{
mkdir "$SSH"
# Create keys
log "Creating SSH keys"
ssh-keygen -N "" -t ecdsa \
-C "backup-appendonly@$HOSTID" -f "$SSH/id_ecdsa_appendonly"
ssh-keygen -N "" -t ecdsa \
-C "backup-notify@$HOSTID" -f "$SSH/id_ecdsa_notify"
# Create config snippets
log "Creating SSH config and wrapper script"
cat >> "$SSH/config" <<EOF
User $BACKUP_USER
ControlPath none
ServerAliveInterval 120
Compression no
UserKnownHostsFile $SSH/known_hosts
ForwardX11 no
ForwardAgent no
BatchMode yes
IdentitiesOnly yes
EOF
# Connect to backup host, using persistent control socket
log "Connecting to server"
log "Please enter password; look in Bitwarden for: ssh ${BACKUP_HOST} / ${BACKUP_USER}"
ssh -F "$SSH/config" -o BatchMode=no -o PubkeyAuthentication=no \
-o ControlMaster=yes -o ControlPath="$TMP/ssh-control" \
-o StrictHostKeyChecking=accept-new \
-p "${BACKUP_PORT}" \
-f "${BACKUP_USER}@${BACKUP_HOST}" sleep 600
if ! run_ssh_command true >/dev/null 2>&1 </dev/null ; then
error "SSH failed"
fi
log "Connected to ${BACKUP_USER}@${BACKUP_HOST}:${BACKUP_PORT}"
# Since we now have an SSH connection, check repo existence
if [ $RECOVER -eq 0 ] && run_ssh_command "test -e $BACKUP_REPO"; then
error "$BACKUP_REPO already exists on the server, bailing out"
elif [ $RECOVER -ne 0 ] && ! run_ssh_command "test -e $BACKUP_REPO"; then
error "$BACKUP_REPO does NOT exist on the server, can't recover backup config"
fi
# Copy SSH keys to the server's authorized_keys file, removing any
# existing keys with this HOSTID.
log "Setting up SSH keys on remote host"
REMOTE_BORG="borg/borg"
cmd="$REMOTE_BORG serve --restrict-to-repository ~/$BACKUP_REPO"
keys=".ssh/authorized_keys"
backup="${keys}.old-$(date +%Y%m%d-%H%M%S)"
run_ssh_command "mkdir -p .ssh; chmod 700 .ssh; touch $keys"
run_ssh_command "mv $keys $backup; sed '/@$HOSTID\$/d' < $backup > $keys"
run_ssh_command "if cmp -s $backup $keys; then rm $backup ; fi"
run_ssh_command "cat >> .ssh/authorized_keys" <<EOF
command="$cmd --append-only",restrict $(cat "$SSH/id_ecdsa_appendonly.pub")
command="borg/notify.sh",restrict $(cat "$SSH/id_ecdsa_notify.pub")
EOF
# Test that everything worked
log "Testing SSH login with new key"
if ! ssh -F "$SSH/config" -i "$SSH/id_ecdsa_appendonly" -T \
-p "${BACKUP_PORT}" "${BACKUP_USER}@${BACKUP_HOST}" "$REMOTE_BORG" \
--version </dev/null ; then
error "Logging in with a key failed -- is server set up correctly?"
fi
log "Remote connection OK!"
}
# Create the repository on the server
create_repo()
{
log "Creating repo $BACKUP_REPO"
# Create repo
$BORG init --make-parent-dirs --encryption repokey
}
# Export keys as HTML page
export_keys()
{
log "Exporting keys"
$BORG key export --paper '' key.txt
chmod 600 key.txt
cat >>key.txt <<EOF
Repository: ${BORG_REPO}
Passphrase: ${PASS_REPOKEY}
EOF
}
configure_systemd()
{
TIMER=${SYSTEMD_UNIT}.timer
SERVICE=${SYSTEMD_UNIT}.service
TIMER_UNIT=${BORG_DIR}/${TIMER}
SERVICE_UNIT=${BORG_DIR}/${SERVICE}
log "Creating systemd files"
# Choose a time between 1am and 6am based on this hostname
HASH=$(echo hash of "$HOSTNAME" | sha1sum)
HOUR=$((0x${HASH:0:8} % 5 + 1))
MINUTE=$((0x${HASH:8:8} % 6 * 10))
TIME=$(printf %02d:%02d:00 $HOUR $MINUTE)
log "Backup time is $TIME"
cat > "$TIMER_UNIT" <<EOF
[Unit]
Description=Borg backup to ${BACKUP_HOST}
[Timer]
OnCalendar=*-*-* $TIME
Persistent=true
[Install]
WantedBy=timers.target
EOF
cat >> "$SERVICE_UNIT" <<EOF
[Unit]
Description=Borg backup to ${BACKUP_HOST}
[Service]
Type=simple
ExecStart=${BORG_DIR}/backup.py
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=6
Restart=on-failure
RestartSec=600
EOF
if [ $RECOVER -eq 1 ] ; then
log "Partially setting up systemd"
ln -sfv "${TIMER_UNIT}" /etc/systemd/system
ln -sfv "${SERVICE_UNIT}" /etc/systemd/system
systemctl --no-ask-password daemon-reload
systemctl --no-ask-password stop ${TIMER}
systemctl --no-ask-password disable ${TIMER}
warn "Since we're recovering, systemd automatic backups aren't enabled"
warn "Do something like this to configure automatic backups:"
echo " sudo systemctl enable ${TIMER} &&"
echo " sudo systemctl start ${TIMER}"
warn ""
else
log "Setting up systemd"
if (
ln -sfv "${TIMER_UNIT}" /etc/systemd/system &&
ln -sfv "${SERVICE_UNIT}" /etc/systemd/system &&
systemctl --no-ask-password daemon-reload &&
systemctl --no-ask-password enable ${TIMER} &&
systemctl --no-ask-password start ${TIMER}
); then
log "Backup timer installed:"
systemctl list-timers --no-pager ${TIMER}
else
warn ""
warn "Systemd setup failed"
warn "Do something like this to configure automatic backups:"
echo " sudo ln -sfv \"${TIMER_UNIT}\" /etc/systemd/system &&"
echo " sudo ln -sfv \"${SERVICE_UNIT}\" /etc/systemd/system &&"
echo " sudo systemctl daemon-reload &&"
echo " sudo systemctl enable ${TIMER} &&"
echo " sudo systemctl start ${TIMER}"
warn ""
fi
fi
}
git_setup()
{
log "Committing local changes to git"
if ! git checkout -b "setup-${HOSTNAME}" ||
! git add ${SYSTEMD_UNIT}.service ${SYSTEMD_UNIT}.timer vars.sh ||
! git commit -a -m "autocommit after initial setup on ${HOSTNAME}" ; then
warn "Git setup failed; ignoring"
return
fi
}
main() {
if [ $UPDATE -eq 1 ] ; then
notice "Non-destructively updating paths, variables, and venv..."
source vars.sh
set_default_variables
install_templated_files
update_paths
setup_venv
create_borg_vars
notice "Testing SSH"
ssh -F "$SSH/config" -i "$SSH/id_ecdsa_appendonly" \
-o BatchMode=no -o StrictHostKeyChecking=ask \
-p "${BACKUP_PORT}" "${BACKUP_USER}@${BACKUP_HOST}" info >/dev/null
notice "Testing borg: if host location changed, say 'y' here"
${BORG_DIR}/borg.sh info
notice "Done -- check 'git diff' and verify changes."
exit 0
fi
set_default_variables
setup_venv
create_borg_vars
generate_keys
configure_ssh
[ $RECOVER -eq 0 ] && create_repo
export_keys
configure_systemd
install_templated_files
update_paths
git_setup
echo
if [ $RECOVER -eq 1 ] ; then
notice "You should be set up with borg pointing to the existing repo now."
notice "Use commands like these to look at the backup:"
notice " sudo ${BORG_DIR}/borg.sh info"
notice " sudo ${BORG_DIR}/borg.sh list"
notice "You'll now want to restore files like ${BORG_DIR}/config.yaml before enabling the systemd timer"
else
notice "Add this password to Bitwarden:"
notice ""
notice " Name: borg ${HOSTNAME}"
notice " Username: repo key"
notice " Password: $PASS_REPOKEY"
notice ""
notice "Test the backup file list with"
notice " sudo ${BORG_DIR}/backup.py --dry-run"
notice "and make any necessary adjustments to:"
notice " ${BORG_DIR}/config.yaml"
fi
echo
echo "All done"
}
parse_args "$@"
main
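The deterministic backup-time selection in configure_systemd above (a sha1 of the hostname mapped into the 01:00-05:50 window) can be reproduced in Python for sanity-checking:

```python
import hashlib

def backup_time(hostname):
    # Mirrors configure_systemd(): sha1sum of the output of
    # `echo hash of "$HOSTNAME"` (note echo's trailing newline).
    digest = hashlib.sha1(f"hash of {hostname}\n".encode()).hexdigest()
    hour = int(digest[0:8], 16) % 5 + 1       # 01:00 .. 05:00
    minute = int(digest[8:16], 16) % 6 * 10   # :00, :10, .. :50
    return f"{hour:02d}:{minute:02d}:00"

print(backup_time("myhost"))
```

The same hostname always maps to the same slot, so reinstalling doesn't move the backup window.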

templates/README.md Normal file

@@ -0,0 +1,145 @@
Initial setup
=============
Run on client:
sudo git clone https://git.jim.sh/jim/borg-setup.git /opt/borg
sudo /opt/borg/initial-setup.sh
Customize `/opt/borg/config.yaml` as desired.
Cheat sheet
===========
*After setup, /opt/borg/README.md will have the variables in this
section filled in automatically*
## Configuration
Hostname: ${HOSTNAME}
Base directory: ${BORG_DIR}
Destination: ${BACKUP_USER}@${BACKUP_HOST}:${BACKUP_PORT}
Repository: ${BACKUP_REPO}
## Commands
See when next backup is scheduled:
systemctl list-timers ${SYSTEMD_UNIT}.timer
See log of most recent backup attempt:
${BORG_DIR}/logs.sh
${BORG_DIR}/logs.sh -f
Start backup now:
sudo systemctl restart ${SYSTEMD_UNIT}
Interrupt backup in progress:
sudo systemctl stop ${SYSTEMD_UNIT}
Show backups and related info:
sudo ${BORG_DIR}/borg.sh info
sudo ${BORG_DIR}/borg.sh list
Run Borg using the read-write SSH key:
sudo ${BORG_DIR}/borg.sh --rw list
Mount and look at files:
mkdir mnt
sudo ${BORG_DIR}/borg.sh mount :: mnt
sudo -s # to explore as root
sudo umount mnt
Update borg-setup from git:
cd /opt/borg
sudo git remote update
sudo git rebase origin/master
sudo ./initial-setup.sh --update
sudo git commit -m 'Update borg-setup'
## Compaction and remote access
Old backups are "pruned" automatically, but because the SSH key is
append-only, no space is actually recovered on the server; pruned data
is only marked for deletion. If you are sure that the client system was
not compromised, you can recover the space by logging in to the backup
host via SSH (Bitwarden: `ssh ${BACKUP_HOST} / ${BACKUP_USER}`) and
compacting there:
ssh -p ${BACKUP_PORT} ${BACKUP_USER}@${BACKUP_HOST} borg/borg compact --verbose --progress ${BACKUP_REPO}
This doesn't require the repo key. That key shouldn't be entered on
the untrusted backup host, so for operations that need it, use a
trusted host and run borg remotely instead, e.g.:
${BORG_BIN} --remote-path borg/borg info ssh://${BACKUP_USER}@${BACKUP_HOST}:${BACKUP_PORT}/borg/${HOSTNAME}
The repo passphrase is in bitwarden `borg ${HOSTNAME} / repo key`.
Design
======
- On server, we have a separate user account "jim-backups". Password
for this account is in bitwarden in the "Backups" folder, under `ssh
backup.jim.sh`.
- Repository keys are repokeys, which get stored on the server, inside
the repo. Passphrases are stored:
- on clients (in `/opt/borg/passphrase`, for making backups)
- in bitwarden (under `borg <hostname>`, user `repo key`)
- Each client has two passwordless SSH keys for connecting to the server:
- `/opt/borg/ssh/id_ecdsa_appendonly`
- configured on server for append-only operation
- used for making backups
- `/opt/borg/ssh/id_ecdsa_notify`
- configured on server for running `borg/notify.sh` only
- used for sending email notifications on errors
- Systemd timers start daily backups:
/etc/systemd/system/borg-backup.service -> /opt/borg/borg-backup.service
/etc/systemd/system/borg-backup.timer -> /opt/borg/borg-backup.timer
- Backup script `/opt/borg/backup.py` uses configuration in
`/opt/borg/config.yaml` to generate our own list of files, excluding
anything that's too large by default. This requires borg 1.2 or newer.
Notes
=====
# Building Borg binary from git
sudo apt install python3.9 scons libacl1-dev libfuse-dev libpython3.9-dev patchelf
git clone https://github.com/borgbackup/borg.git
cd borg
virtualenv --python=python3.9 borg-env
source borg-env/bin/activate
pip install -r requirements.d/development.txt
pip install pyinstaller
pip install llfuse
pip install -e .[llfuse]
pyinstaller --clean --noconfirm scripts/borg.exe.spec
pip install staticx
# for x86
staticx -l /lib/x86_64-linux-gnu/libm.so.6 dist/borg.exe borg.x86_64
# for ARM; see https://github.com/JonathonReinhart/staticx/issues/209
staticx -l /lib/arm-linux-gnueabihf/libm.so.6 dist/borg.exe borg.armv7l
Then confirm each binary works, e.g. `./borg.x86_64 --version` or `./borg.armv7l --version`.
*Note:* This uses the deprecated `llfuse` instead of the newer `pyfuse3`.
`pyfuse3` doesn't work because, at minimum, it pulls in `trio` which
requires `ssl` which is explicitly excluded by `scripts/borg.exe.spec`.

templates/logs.sh Normal file

@@ -0,0 +1,3 @@
#!/bin/bash
exec journalctl --all --unit ${SYSTEMD_UNIT} --since "$(systemctl show -P ExecMainStartTimestamp ${SYSTEMD_UNIT})" "$@"

templates/notify.sh Normal file

@@ -0,0 +1,27 @@
#!/bin/bash
set -e
. "$(dirname "$0")"/vars.sh
# Send notification email using a script on the backup host
# First argument is a summary for the subject line, second argument is
# the destination address; the mail body is provided on stdin.
if tty -s ; then
echo 'Refusing to read mail body from terminal'
exit 1
fi
SUMMARY="$1"
EMAIL="$2"
# Remote notify.sh wants subject as first line, not as an argument,
# since it's a bit messy to pass complex strings through ssh command
# lines.
( echo "backup $HOSTNAME: $SUMMARY" ; cat ) | \
ssh \
-F "$SSH/config" \
-i "$SSH/id_ecdsa_notify" \
-p "$BACKUP_PORT" \
"$BACKUP_USER@$BACKUP_HOST" \
borg/notify.sh "$EMAIL"
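The subject-as-first-line convention means the receiving side only has to split stdin once; this is a hypothetical sketch of that reader (the server's borg/notify.sh is not part of this repo):

```python
import io

def split_message(stream):
    # First line is the subject; everything after it is the mail body.
    subject = stream.readline().rstrip("\n")
    body = stream.read()
    return subject, body

# Example input in the format produced by the local notify.sh above
msg = io.StringIO("backup myhost: 2 errors\n:E: something failed\n I: done\n")
subject, body = split_message(msg)
print(subject)  # backup myhost: 2 errors
```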