How to restore a Snapper root snapshot on an unbootable system

Hi everyone,

I have been running CachyOS since August and am absolutely loving it. I came from almost 5 years on Manjaro, and I keep asking myself why I didn’t jump ship earlier to get closer to a base Arch installation.

Anyway, I updated my system 2 days ago (I hadn’t done so for about a week, so there were 200+ updates), and afterwards my Proton VPN wasn’t working. It was late at night when I did the update, so I decided to just roll back my system using BTRFS Assistant because I didn’t want to mess with figuring out what the issue was right then and there.

For the first time, the system no longer booted after I rolled back to my pre-update snapshot. I’ve rolled back updates with BTRFS Assistant in the past and there has never been an issue, but this time, instead of booting normally, I was presented with the following message:

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

Press Enter for maintenance
(or press Control-D to continue):

So I guess this is what I get for being lazy and not figuring out the issue with my VPN client right away lol.
Anyway, I now had to figure out how to manually restore a root snapshot so I could get my system working again. I dove into the Arch Wiki entry on Snapper, which gives an example of restoring a home directory from a Snapper snapshot, not root, but it took a little playing around to wrap my head around how Snapper lays out subvolumes for root snapshots, so I figured I’d give the step-by-step here. It can be confusing if you aren’t familiar with BTRFS subvolumes. The most important thing to conceptualize, especially when subvolumes are located in subdirectories, is that snapshotting is not recursive: a nested subvolume acts as a barrier, so none of its files appear in the snapshot.

So here are step-by-step instructions for using the CachyOS installation ISO to restore a root snapshot to a system that no longer boots. This assumes the CachyOS defaults of the systemd-boot bootloader with Snapper set up, and covers recovering from an update that leaves your system unable to boot to your DE.

First things first, boot off the CachyOS installation USB.

  1. Open up Dolphin and select your BTRFS drive. If you did a default CachyOS installation it should be named “root”; otherwise select the appropriate drive. Clicking it will automatically mount it.
  2. Click the > at the top of the Dolphin window to see where the drive was mounted, which will be /run/media/liveuser/UUID. Make a note of the first couple of characters of the UUID.
  3. Launch the Alacritty terminal.
  4. Type the following:

cd /run/media/liveuser/UUID

(basically, type the first couple of characters of the UUID and press Tab to complete it, unless you really want to type out the whole UUID :wink: )

  5. View your directory listing to make sure you are in the correct place, as the sample commands below assume you are in the top level of your BTRFS drive:

sudo ls -al

This will list the actual top level of your BTRFS filesystem if you changed directory correctly. You should see the various subvolumes, such as @ (which gets mounted as your current root directory), along with a few others like @home, @log, etc. Also note that if, like me, you are recovering from a BTRFS Assistant root snapshot restore, you’ll see the “backup” it made of root before it restored your snapshot. This will be named @backup<date/time><description_you_gave_if_you_gave_one>.
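If you prefer, you can also enumerate the subvolumes with the btrfs tool instead of ls (just a sanity check; the exact names depend on your install’s layout):

```shell
# List all subvolumes on the mounted filesystem; you should see
# @, @home, and the snapshot subvolumes, among others
sudo btrfs subvolume list .
```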

  6. If you don’t know which snapshot you want to restore, you can list the snapshot directory to see them with their date/time stamps as follows:

sudo ls -al @/.snapshots

  7. If you want to see the Snapper metadata for a particular snapshot (i.e. the description Snapper applied to it), then type:

sudo cat ./@/.snapshots/<snapshot#>/info.xml

(Note: <snapshot#> is the number of the snapshot obtained by listing the directory in step 6.)
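For reference, an info.xml typically looks something like this (illustrative values only — the exact fields vary with your Snapper version and configuration):

```xml
<?xml version="1.0"?>
<snapshot>
  <type>pre</type>
  <num>123</num>
  <date>2024-12-01 03:15:02</date>
  <description>pacman -Syu</description>
</snapshot>
```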

  8. Once you have determined which snapshot you want to restore (for the purposes of this tutorial we will say it is snapshot #123), type the following:

sudo mv @ @.broken

This will rename your existing root subvolume, the one that isn’t booting, to a temporary name. It is VERY important that you don’t delete this subvolume yet. While I don’t think the BTRFS filesystem would let you delete it, since it still has the .snapshots subvolume nested inside it, I am not going to test that theory: if you were able to delete it before moving the snapshot subvolume out of it, you would lose ALL of your root snapshots.

  9. All Snapper snapshots are read-only by default, so next we need to make a read-write snapshot of the snapshot we want to restore as root. Using snapshot 123 as our example, the command would be:

sudo btrfs subvolume snapshot @.broken/.snapshots/123/snapshot @
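You can verify that the new @ subvolume is writable before going further (a quick sanity check; `btrfs property get` reports the read-only flag):

```shell
# The new root should report ro=false; Snapper's own
# snapshots would report ro=true
sudo btrfs property get @ ro
```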

  10. Next, move your .snapshots subvolume from the broken root into your new root as follows:

sudo mv @.broken/.snapshots @

This will move the .snapshots subvolume into your new root subvolume.

  11. Finally, reboot your system and you will boot into the restored snapshot.
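Put together, the restore itself boils down to three commands, run from the top level of the mounted BTRFS drive (123 stands in for your snapshot number):

```shell
# Run from the top level of the BTRFS filesystem
# (e.g. /run/media/liveuser/<UUID>)
sudo mv @ @.broken                                               # set the broken root aside
sudo btrfs subvolume snapshot @.broken/.snapshots/123/snapshot @ # writable copy becomes the new root
sudo mv @.broken/.snapshots @                                    # move your snapshot history into the new root
```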

GREAT tutorial on how snapshots work, but you really do not have to do all this…

You can use GRUB: install grub-btrfs and Timeshift (which manages Btrfs snapshots much like the Snapper setup used in this tutorial), then just boot into the read-only snapshot directly from the GRUB boot menu, open the Timeshift GUI, press “Restore” on the snapshot you want, reboot, and done. :slight_smile:

For simplicity, with systemd timers and hooks: also install the AUR package timeshift-systemd-timer (or set the timer up manually) and activate it with sudo systemctl enable --now timeshift-hourly.timer, and install timeshift-autosnap (which installs the pacman hooks for you).

The timer does not make a snapshot every hour; it just checks whether there is anything to do. The configuration for when to take snapshots is done in the Timeshift GUI (on top of the snapshot taken before a system update changes files, via the pacman hook).

Using yay for AUR in this example:

$ sudo pacman -Syu
$ sudo pacman -S grub-btrfs timeshift
$ yay -S timeshift-systemd-timer
$ sudo systemctl enable --now timeshift-hourly.timer
$ yay -S timeshift-autosnap
$ sudo update-grub # or sudo grub-mkconfig -o /boot/grub/grub.cfg

The installation might complain about not being able to use cron, but you can just ignore that; you will use systemd instead.
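You can confirm the timer is active afterwards (output will vary by system):

```shell
# Show the timer's status and its next scheduled run
systemctl status timeshift-hourly.timer
systemctl list-timers timeshift-hourly.timer
```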

I highly recommend NOT including /home in your snapshots; all you want is your root restored to a working condition.
Let’s say you made a document or downloaded something while trying to get the computer working again and saved it in your home directory: after a restore, all of that is removed, because you restored to an older point in time. Browser history, all gone.
All these settings are done within the Timeshift GUI.


/boot is not included

A caveat is that the boot partition on a standard CachyOS install (and Arch) is NOT included in the snapshot (it’s on a FAT32 partition), so if the update that broke the installation regenerated the initramfs with mkinitcpio, you will also have to chroot into the now-restored snapshot and run sudo mkinitcpio -P again.
This is true with the tutorial above as well, and IMHO it should be mentioned that /boot is most likely NOT INCLUDED IN THE SNAPSHOT.
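From the live USB, that chroot step could look roughly like this (a sketch only; the device names /dev/nvme0n1p2 and /dev/nvme0n1p1 are assumptions — substitute your own partitions, and adjust the subvol name if your layout differs):

```shell
# Mount the restored root subvolume and the EFI system partition,
# then regenerate all initramfs images inside the chroot
sudo mount -o subvol=@ /dev/nvme0n1p2 /mnt   # your Btrfs root partition (assumed name)
sudo mount /dev/nvme0n1p1 /mnt/boot          # your ESP / boot partition (assumed name)
sudo arch-chroot /mnt mkinitcpio -P
```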

On Manjaro, for example, the ESP is mounted at the now-discouraged /boot/efi, which actually means everything in /boot (except /boot/efi) gets restored with the snapshot.

I am conflicted as to whether using /boot/efi is a bad idea or not. Manjaro obviously does it for exactly this reason: “restoring a snapshot might make the computer end up in rescue mode rather than fix it”, and they want the user to have to do as little as possible…
But at the same time, it is discouraged…
/efi is also a valid mountpoint.

Note:

  • /efi is a replacement[6][7] for the historical and now discouraged ESP mountpoint /boot/efi.
  • The /efi directory is not available by default, you will need to first create it before mounting the ESP to it.

It could also be worth mentioning that it is not unheard of for pacman to complain and stop working after a restore.
Removing the database lock file usually fixes it.
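Assuming the standard pacman database location, the lock file lives at /var/lib/pacman/db.lck — make sure no pacman process is actually running before removing it:

```shell
# Only do this if no other pacman instance is running
sudo rm /var/lib/pacman/db.lck
```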

This guide is absolutely fantastic! :tada:
As a user of Btrfs and Snapper, I truly appreciate the clarity and logical structure of the steps provided.
The detailed explanation of working with subvolumes and commands makes the entire process straightforward, even for less experienced users. :computer::sparkles:

The emphasis on not deleting the original root subvolume and highlighting potential risks is incredibly valuable and adds an essential layer of safety. :rotating_light:
The way the guide explains everything – from working with UUIDs to restoring snapshots – is both intuitive and practical. :hammer_and_wrench:

The clarity and attention to detail make this guide an outstanding resource for the community. Such guides are exactly what’s needed to make efficient use of Btrfs and Snapper.

:star2: Amazing work! :clap: