ZFS Security issues with snapshots for (longer term) backups

I use ZFS snapshots and mirroring (both on a hard drive that is permanently connected and on hard drives that aren't always connected) as a means of keeping backups of my data.

Using the snapshots, I can go back in time if I accidentally delete files or something else happens to my data at the logical level. I really like that feature of ZFS.

However, keeping ZFS snapshots around for a long time seems to be a security issue, at least under certain circumstances, because snapshots of a mounted file system are always automounted, and unprivileged users always have access to them (see PR 265625). That means whenever you change privileges, e.g. forbid a certain user to access some data, the change is effectively ignored, because that user can still see the (old) data in the snapshots. The same problem arises when you, say, remove a group's access to a directory and later add a new user to that group: the new user can read the old contents of that directory through the snapshots, even though the group's access was revoked before the user was ever created or added to the group.
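
To illustrate with a hypothetical walk-through (dataset, directory, and user names are made up):

Code:
# group "staff" can currently read /data/secret
zfs snapshot zroot/data@keep
# later: revoke the group's access on the live filesystem
chmod -R g-rwx /data/secret
# later still: create a new user and put them in group "staff"
pw useradd newuser -m -G staff
# the new user can nonetheless read the old contents through the snapshot:
su -l newuser -c 'cat /data/.zfs/snapshot/keep/secret/file.txt'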

I originally brought this issue up on the FreeBSD mailing list many years ago, but there doesn't seem to be a solution yet. The issue has apparently been known for over 8 years (see OpenZFS issue #3963).

Since this is a potential security issue, I think this needs to be fixed (or users should at least be discouraged from using this feature without knowing about the security holes it can cause). As far as I know, there has been no security advisory about this problem yet.

Note that the directory can be hidden (see snapdir=hidden in zfsprops(7)), but users may still access it.
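
For example (assuming a hypothetical dataset zroot/home mounted at /home):

Code:
# hide .zfs from directory listings; it stays reachable by explicit path
zfs set snapdir=hidden zroot/home
ls /home/.zfs/snapshot    # still works for any unprivileged user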

What can I do in the meantime to deal with this? In some instances I ignore the security problem and just hope I won't run into any real trouble. Other times, I try to do a nullfs mount by adding something like this to my /etc/fstab:

Code:
# Device    Mountpoint           FStype  Options  Dump  Pass#
/var/empty  /home/.zfs/snapshot  nullfs  ro       0     2

Is this safe? Are there better ways to get rid of the .zfs/snapshot directory? I wonder what the best practice is. Do you think using snapshots for the purpose of backups is a misuse? I just don't see any other mechanism that's as handy for going back in time whenever you need to.
 
What about setuid binaries that had a security problem, were replaced with a secure version, but have the insecure version still visible via snapshots?
 
What about setuid binaries that had a security problem, were replaced with a secure version, but have the insecure version still visible via snapshots?
Yes, that is another issue indeed.

Note that in the linked discussions (e.g. on the mailing list here), it has sometimes been argued that readable snapshots don't introduce new security holes, because any security issue would have been exploitable before. (Even if those arguments were true, I still consider the inability to fix wrong file permissions problematic.)

This is why I brought up the example of assigning a user to a group in my original post here: the (new) user may see files they were never supposed to read, just because the group previously had different access rights. That example clearly shows that even on a system where you never did anything "wrong", readable snapshots can introduce security problems.

So far, the only ways out that I can see are:

  • delete snapshots a few minutes after creation or at least whenever you change privileges on your filesystem (this sort-of makes snapshots unsuitable for long-term keeping)
  • some workarounds like the nullfs mount (which feels very "dirty", and I'm not even sure if this is secure; at least there should be a risk of race conditions during mounting)
 
Code:
# Device    Mountpoint           FStype  Options Dump Pass
/var/empty  /home/.zfs/snapshot  nullfs  ro      0    2
This was actually wrong and made my system fail to boot. Instead of setting Pass# to 2, I need to use the late option (presumably because the nullfs mount has to happen late in boot, after the ZFS file systems themselves are mounted):

Code:
# Device    Mountpoint           FStype  Options  Dump  Pass#
/var/empty  /home/.zfs/snapshot  nullfs  ro,late  0     0

I'm still looking for a cleaner option to get rid of the .zfs/snapshot dir, especially as I have to do this for each ZFS file system separately.
 
Is there maybe a way to change the attributes of the .zfs directory so that, for example, only root can view it?
 
[…] Are there better ways to get rid of the .zfs/snapshot directory? […]
I am aware of the issue. My environment has a file server. There my workaround is exporting a dataset’s subdirectory via NFS (and quite obviously regular users cannot login to the NFS server).​
[…] Do you think using snapshots for the purpose of backups is a misuse? […]
Snapshots are not backups. A backup can be used to recover from a faulted pool. Using snapshot data to retrieve files from datasets in a fully functional pool is not backup proper. (Nonetheless, snapshots can be used to create backups.)

In my opinion software should be designed for a use case. If you want to guard against "accidental deletion", a version control system such as git(1) would be more adequate. Using ZFS features has the downside that you are somewhat compelled to delegate privileges: if users ought to use snapshots, you probably want to grant them privileges to create and destroy snapshots, too.
 
Snapshots are not backups.

Sure they are. It's just a matter of what level of loss they protect against.

- snapshots on one machine = protection from accidental deletion, malware
- snapshots replicated to another machine in same building = protection from corruption of source machine
- snapshots replicated to an offsite machine = protection from the building burning down
- snapshots replicated to a machine on Mars = protection from nuclear armageddon on Earth

Saying "snapshots are not backups" over-simplifies things. Instead, we need to talk about what scenarios we want to protect against, and whether the backup strategy being used is adequate.
 
There, my workaround is exporting a dataset's subdirectory via NFS (and quite obviously regular users cannot log in to the NFS server).
Hmmm, interesting. So a workaround on a local machine could be (in theory) to mount the file system at a path that is not readable by ordinary users and then re-export it through an NFS loopback mount. But I doubt this would come without a performance impact, and in the local case it's even messier than my current workaround.

I guess I could mount the filesystem at a non-readable path, put all of the filesystem's data in a subdirectory, and then use nullfs to mount that subdirectory at a path that is readable by ordinary users. That seems safer than my current approach of using nullfs to "cover" the .zfs/snapshot directory. But it is still a configuration mess.
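
A minimal sketch of that idea (untested, with illustrative names):

Code:
# mount the dataset under a directory only root can traverse
zfs set mountpoint=/var/hidden/home zroot/home
chmod 700 /var/hidden
# expose only a subdirectory (and thus not .zfs) to ordinary users
mount -t nullfs /var/hidden/home/data /home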

Snapshots are not backups.
Sure they are.
I wasn't sure if they are. The Oracle Documentation says:
A snapshot is a read-only copy of a file system or volume. […]

ZFS snapshots include the following features:
  • They persist across system reboots.
  • […]
  • Snapshots use no separate backing store. Snapshots consume disk space directly from the same storage pool as the file system or volume from which they were created.
  • Recursive snapshots are created quickly as one atomic operation. […]
Snapshots of volumes cannot be accessed directly, but they can be cloned, backed up, rolled back to, and so on.
They don't really write about their purpose there, but maybe it doesn't matter what they were originally intended for. The zfs-rollback(8) command allows me to go back to a previous state if my data got messed up at the logical level (e.g. due to an accidental rm by a user, or a software failure in user land). Of course it doesn't protect against my machine burning down, unless I also send the snapshot somewhere (zfs-send(8)) or have a ZFS pool with some hard drives at a remote location and/or kept offline (zpool-offline(8)). But even if I send a snapshot to a remote machine, I might want to keep it on my local machine for quick recovery if needed.
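
For example, replicating a snapshot to a remote machine might look like this (hypothetical dataset and host names):

Code:
zfs send zroot/home@today | ssh backuphost zfs receive -u backup/home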

I would find the ZFS snapshot feature very pleasing, were it not for the security implications.

But are the security implications really limited to (arguably) "abusing" ZFS snapshots as machine-local backups? I would say the vulnerability exists as long as snapshots are held over a longer period of time (whether that is a few seconds, a few hours, a few days, or a few years is debatable), independent of why they are kept. Perhaps in some use cases a workaround could be to convert snapshots to bookmarks (zfs-bookmark(8)) as soon as possible, but I can't recover from bookmarks locally.
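
A sketch of that conversion (hypothetical names; as said, the data is then no longer locally recoverable, since a bookmark is only usable as the source of an incremental send):

Code:
zfs bookmark zroot/data@old zroot/data#old
zfs destroy zroot/data@old
# the bookmark still works as an incremental source for replication:
zfs send -i zroot/data#old zroot/data@new | ssh backuphost zfs receive backup/data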

I do not understand why this security hole hasn't been fixed for years (over 8 years!). Besides, I would assume it's a rather trivial fix. I'm considering trying to fix it myself, but I feel it would cause me a lot of headache, as I have never worked on anything close to a kernel module, let alone a filesystem. Plus, my code might not end up being idiomatic and would likely not be accepted upstream.

I also do not understand why this issue isn't escalated as a security-relevant bug and addressed accordingly. But maybe that's a matter of definition? (Of course, you could say "don't use that command/feature in this way", but by that argument, a lot of security-relevant bugs wouldn't be security-relevant bugs.) I even mailed the FreeBSD Security Team but never got a response. This topic mostly seems to be "ignored" from a security perspective (though some people are aware of it, as pointed out in this thread).

In my opinion software should be designed for a use case. If you want to guard against "accidental deletion", a version control system such as git(1) would be more adequate.

If I used git instead, there would be MUCH more overhead for the purpose that I use snapshots for.

Using ZFS features has the downside that you are somewhat compelled to delegate privileges: if users ought to use snapshots, you probably want to grant them privileges to create and destroy snapshots, too.

I could (in theory) do that using zfs-allow(8), right? Though I don't need that. In my use case, I'm content with the root user doing all backup operations.
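
For completeness, such a delegation might look like this (hypothetical user and dataset names):

Code:
# let user "alice" create, destroy, and mount snapshots of her dataset
zfs allow alice snapshot,destroy,mount zroot/home/alice
# review the delegated permissions
zfs allow zroot/home/alice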
 
this is a mega crude hack that will block control dir (.zfs) for non root users
not very well tested
set sysctl vfs.zfs.ctldir_allow=0 to enable it (it should be per dataset property but...)
Diff:
diff --git a/sys/contrib/openzfs/module/os/freebsd/zfs/sysctl_os.c b/sys/contrib/openzfs/module/os/freebsd/zfs/sysctl_os.c
index 38ef590702cb..59da4afcb4e6 100644
--- a/sys/contrib/openzfs/module/os/freebsd/zfs/sysctl_os.c
+++ b/sys/contrib/openzfs/module/os/freebsd/zfs/sysctl_os.c
@@ -136,6 +136,9 @@ SYSCTL_DECL(_vfs_zfs_version);
 SYSCTL_CONST_STRING(_vfs_zfs_version, OID_AUTO, module, CTLFLAG_RD,
     (ZFS_META_VERSION "-" ZFS_META_RELEASE), "OpenZFS module version");

+
+extern uint16_t zfsctl_ctldir_allow;
+SYSCTL_U16(_vfs_zfs, OID_AUTO, ctldir_allow, CTLFLAG_RW, &zfsctl_ctldir_allow, 0, "Control .zfs dir mode bits");
 /* arc.c */

 int
@@ -596,6 +599,28 @@ SYSCTL_UINT(_vfs_zfs_metaslab, OID_AUTO, df_free_pct,
     " space map to continue allocations in a first-fit fashion");
 /* END CSTYLED */

+/*
+ * Percentage of all cpus that can be used by the metaslab taskq.
+ */
+extern int metaslab_load_pct;
+
+/* BEGIN CSTYLED */
+SYSCTL_INT(_vfs_zfs_metaslab, OID_AUTO, load_pct,
+    CTLFLAG_RWTUN, &metaslab_load_pct, 0,
+    "Percentage of cpus that can be used by the metaslab taskq");
+/* END CSTYLED */
+
+/*
+ * Max number of metaslabs per group to preload.
+ */
+extern uint_t metaslab_preload_limit;
+
+/* BEGIN CSTYLED */
+SYSCTL_UINT(_vfs_zfs_metaslab, OID_AUTO, preload_limit,
+    CTLFLAG_RWTUN, &metaslab_preload_limit, 0,
+    "Max number of metaslabs per group to preload");
+/* END CSTYLED */
+
 /* mmp.c */

 int
diff --git a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_ctldir.c b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_ctldir.c
index 420d887b661e..f030f38fee38 100644
--- a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_ctldir.c
+++ b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_ctldir.c
@@ -92,6 +92,28 @@
 const uint16_t zfsctl_ctldir_mode = S_IRUSR | S_IXUSR | S_IRGRP | S_IXGRP |
     S_IROTH | S_IXOTH;

+uint16_t zfsctl_ctldir_allow = 1;
+
+#ifndef _SYS_SYSPROTO_H_
+struct vop_access_args {
+    struct vnode *a_vp;
+    accmode_t a_accmode;
+    struct ucred *a_cred;
+    struct thread *a_td;
+};
+#endif
+
+#ifndef _SYS_SYSPROTO_H_
+struct vop_readdir_args {
+    struct vnode *a_vp;
+    struct uio *a_uio;
+    struct ucred *a_cred;
+    int *a_eofflag;
+    int *a_ncookies;
+    cookie_t **a_cookies;
+};
+#endif
+
 /*
  * "Synthetic" filesystem implementation.
  */
@@ -473,8 +495,9 @@ zfsctl_common_close(struct vop_close_args *ap)
 static int
 zfsctl_common_access(struct vop_access_args *ap)
 {
+    if (!zfsctl_ctldir_allow && ap->a_cred->cr_uid)
+        return (SET_ERROR(EACCES));
     accmode_t accmode = ap->a_accmode;
-
     if (accmode & VWRITE)
         return (SET_ERROR(EACCES));
     return (0);

at first i thought to implement a ctldir umask but then i saw access rights are never checked so it would not work even if .zfs was 0500 (dr-x------)
 
They don't really write about their purpose there, but maybe it doesn't matter what they were originally intended for. The zfs-rollback(8) command allows me to go back to a previous state if my data got messed up at the logical level (e.g. due to an accidental rm by a user, or a software failure in user land).

The basic purpose is to provide a local, read-only copy of a filesystem.

I would not zfs-rollback to restore a few files. That's where snapshots shine - copy the file from the snapshot directory to your working directory. Rollback is a big hammer, when you've decided that things are pretty messed up and you just want to go back to the way they were yesterday.

But are the security implications really limited to (arguably) "abusing" ZFS snapshots as machine-local backups?

Machine-local backups are not "abusing" ZFS snapshots. That's literally what they are.

This is why I brought up the example of assigning a user to a group in my original post here: the (new) user may see files they were never supposed to read, just because the group previously had different access rights. That example clearly shows that even on a system where you never did anything "wrong", readable snapshots can introduce security problems.

I don't think I agree with your assessment of it being a security issue, at least not the way I'm interpreting it from you. I also don't think it has anything to do with ZFS specifically.

The behavior you're describing is the basic property of file system permissions. The file has permissions corresponding to a user and group id. The fact that it's in a snapshot doesn't have any bearing.

I think it gets really complicated to try to determine who had access to what at what point in time. Do you even have the uid/gid database corresponding to that snapshot? What if it's on a different dataset? What do you do when you send those snapshots to a different machine? Are you saying that only myuser@myhost (who is really uid=500) should be able to view those files? Is there some mechanism for ensuring that joe@otherhost (who happens to be uid=500 there) can't see them?

It makes sense if you think of snapshots as what they are - read-only file systems. They're files on disk, with permissions. You can't change those permissions, because it's read-only. If you don't want the file permissions to apply, then you'll have to delete them (or avoid creating them in the first place).

If you keep that in mind, then you can perhaps find alternate ways to use snapshots to your advantage. So, we'll say that you don't want to just snapshot the datasets, because it doesn't give you enough control over the permissions down the line. Here are some other ideas:

1. Copy the snapshots, send them to a remote system, and delete the local snapshots.
2. Create a separate dataset for backups, with permissions only you have. Back files up to it, and snapshot it instead.
3. Back up files using tar, and only give yourself permissions to read the tar. Save the tar file on a dataset that you snapshot.
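
A rough sketch of idea 2 (illustrative names; the point is that /backups itself is unreadable to ordinary users, so its .zfs directory is unreachable too):

Code:
zfs create -o mountpoint=/backups zroot/backups
chmod 700 /backups
rsync -a /home/ /backups/home/
zfs snapshot zroot/backups@$(date +%Y-%m-%d)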

btw you can simulate rewriting snapshots to some degree, by copying files from a snapshot to a live dataset, making the changes you want, and snapshotting again.

So if I want to retroactively change the permissions on secrets.txt I can:

1. iterate over the snapshots
2. rsync the snapshot's contents to the live directory
3. make my changes
4. snapshot it
5. delete the original snapshot
6. repeat

It's a pain, yes, but it is the tradeoff for having a read-only file system. It's not that different from accidentally snapshotting a giant ISO file. It's there until you delete the snapshots that contain it.
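
A rough sketch of that loop (untested; dataset and file names are illustrative, and note that the recreated snapshots lose their original creation timestamps):

Code:
#!/bin/sh
DS=zroot/data
MNT=/data
for snap in $(zfs list -H -d 1 -t snapshot -o name -s creation "$DS" | sed 's/.*@//'); do
    rsync -a --delete "$MNT/.zfs/snapshot/$snap/" "$MNT/"  # restore snapshot contents
    chmod g-rwx,o-rwx "$MNT/secrets.txt"                   # apply the retroactive change
    zfs destroy "$DS@$snap"                                # drop the original snapshot
    zfs snapshot "$DS@$snap"                               # recreate it, now with the fix
done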
 
(it should be per dataset property but...)
I agree. Ideally we would have at least a property in zfsprops(7). However, a quick fix using sysctl(8) would at least fix the security problem (I'll explain below why I think it is indeed a security problem).

Code:
+/*
+ * Percentage of all cpus that can be used by the metaslab taskq.
+ */
+extern int metaslab_load_pct;
+
+/* BEGIN CSTYLED */
+SYSCTL_INT(_vfs_zfs_metaslab, OID_AUTO, load_pct,
+    CTLFLAG_RWTUN, &metaslab_load_pct, 0,
+    "Percentage of cpus that can be used by the metaslab taskq");
+/* END CSTYLED */
Does this have anything to do with fixing the problem? I assume you had some other unrelated changes in your working directory when you created and copied the diff.

I don't think I agree with your assessment of it being a security issue, at least not the way I'm interpreting it from you.
I discussed my assessment with a colleague, and they came up with a short reasoning in that matter. They said:

"If I execute chmod g-rwx,o-rwx FILE, then I expect this command to take immediate effect. As long as snapshots exist on a system, this command will not take effect in its expected way."

In order for chmod g-rwx,o-rwx FILE to work as expected, snapshots must only exist for zero seconds. Any other positive time (whether it's one second, one hour, or one year) potentially introduces security implications, e.g. when, while any snapshot exists:
  • users are added to the group
  • users are added to the system
  • the actual set of persons who can use an account on the system changes
or when permissions have been set in error and are believed to be corrected. (Side note: Even if none of that were ever to happen, it can still be a compliance issue, e.g. because data that had to be deleted or made inaccessible due to legal requirements isn't really deleted or restricted, but that's a whole different story that I don't really care that much about. Yet it's another case where chmod g-rwx,o-rwx FILE doesn't work as you would expect it to work.)

It makes sense if you think of snapshots as what they are - read-only file systems.
Well, the issue here is that they are not just "read-only" file systems. They additionally:
  • are mounted automatically,
  • are mounted in a hidden but world-readable directory (unless the file system itself is mounted below a path that ordinary users cannot read),
  • cannot be unmounted, not even by root(!) (unless the originating file system is unmounted).
This is what makes them differ from other "read-only" file systems. And that's also where they are a security problem. The only way to work around that (without using a crude workaround) is to not use them.

The issue can be mitigated by deleting them as soon as possible, but considering they might be around for some hours during an operation, this can already be a problem from a security p.o.v.

What adds to the problem is the hidden nature of .zfs. Many people will not be aware of the problem (beside that there often doesn't even seem to be a consensus about whether it is a security issue or not).
 
i diffed the changes against a not-quite-identical version of the tree (two 14-stable checkouts a few days apart)
i modified a beta5 and the "old tree" is rc1 i think. the hunk near arc.c is not meant to be there
i expected the trees to be the same, which was obviously not the case :(
 
"If I execute chmod g-rwx,o-rwx FILE, then I expect this command to take immediate effect. As long as snapshots exist on a system, this command will not take effect in its expected way."

This is, I think, the source of your confusion.

If you execute chmod g-rwx,o-rwx FILE, it does take effect. Are you suggesting that FILE is somehow readable after running that command?

What you’re describing as a security issue is essentially equivalent to:

Code:
cp FILE ORIG
chmod g-rwx,o-rwx FILE

And then claiming that ZFS has introduced a security issue because ORIG has the original permissions.

FILE is a different file from .zfs/snapshot/foo/FILE so no, I would not expect your command to have any effect on the snapshot file - it only takes effect on FILE, because that’s what you told it to do.

are mounted in a hidden but world-readable directory

They don’t change permissions just because you snapshotted them. All file permissions at the time of the snapshot are retained. This is like saying there’s a security issue because people can ls / and cat all kinds of files. It’s only a security issue if they have permission to see things they shouldn’t. If you accidentally make a secrets file world readable… that’s on you. And you need to remove the snapshots that include it, just like you need to modify the file permissions. But modifying permissions on the live file is not going to affect the snapshotted files, nor should it.

What adds to the problem is the hidden nature of .zfs. Many people will not be aware of the problem

The handbook chapter on zfs refers to .zfs/snapshot 11 times, with clear examples of how to use it.

Respectfully, I think you currently do not understand how snapshots are intended to work. If you do not want a file - in the exact state it was snapshotted - on the system anymore, then you need to remove all snapshots that include it.
 
Here’s another scenario: you’re sysadmin of a machine, and my HOME dir is in the same dataset as one of your secret files. You realize you screwed up, and so prevent anyone from viewing snapshots - but now you’ve locked me out of my backups.

So let’s say you can retroactively change permissions. Now you’ve violated the read-only property. I never have any reason to believe that the files in the snapshot are the original, because it’s possible for them to change. You might as well “snapshot” by copying from one dir to another.

Are the files in my weekly snapshot actually the files from a week ago? I have no way of knowing - now I have to bust out mtree(8) to detect changes, and unfortunately I can’t restore the originals.

This property of zfs requires me to be thoughtful about how I lay out datasets, and sometimes more careful in how I use them. In the sysadmin scenario above, the solution is not to lock people out of snapshots, or destroy them, or retroactively modify files - it's to keep each user's home dir on a separate dataset. And don't make secrets world readable - and if you do, delete the snapshots that contain them, and rotate the secrets if possible.
 
If you do not want a file - in the exact state it was snapshotted - on the system anymore, then you need to remove all snapshots that include it.
this may be undesirable. it's like defending a scenario where you have to reinstall the whole system if you made a mistake
also it makes boot environments a security risk
say a setuid has a security hole and you can privilege-escalate
you run freebsd-update and think you are safe
you are not, the vulnerability sits in your boot-environment clone snapshot and everybody with an account can exploit it
you may have vulnerable binaries on your system since freebsd 11 or something
now you have to destroy all of your boot envs on multiuser systems because you can't restrict access to vulnerable binaries
 
Yeah, I get what you're saying. I'm just not sure how to reconcile it with ZFS design. That's one of the tradeoffs of ZFS - a dataset is a boundary, and it's all-or-nothing.

The vulnerable setuid binary example is a more sensitive item on the same continuum as my big ISO file. I accidentally snapshot an ISO a bunch of times, and now it's in all these snapshots, and I don't want it taking up this space. The only way to get rid of it is to get rid of the snapshots that contain it. Same with the setuid bin.

I wonder if you could "shun" checksums. So something like zfs shun zroot@foo/path/to/nastybin or zfs shun .zfs/snapshot/foo/path/to/nastybin. ZFS adds the checksum to a list, and then any time you try to read a file from a snapshot, if the checksum matches the checksum at zroot@foo/path/to/nastybin then ZFS refuses. That way you wouldn't have to identify all of the snapshots that contain the bad binary. You identify one snapshot that has it, and let ZFS reject future requests for the same content.
 
This is, I think, the source of your confusion.
I don't think I'm confused, but I may be wrong. Let me try to explain my point.

If you execute chmod g-rwx,o-rwx FILE, it does take effect. Are you suggesting that FILE is somehow readable after running that command?
Of course FILE is not readable, but ../../../.zfs/snapshot/.../.../FILE is (as you pointed out).

What you’re describing as a security issue is essentially equivalent to: cp FILE ORIG; chmod g-rwx,o-rwx FILE And then claiming that ZFS has introduced a security issue because ORIG has the original permissions.
As I tried to explain in my earlier post, the problem isn't that ORIG has the original permissions. The problem is that the "copy" that has to be created during the ZFS workflow (whether for backup or other reasons) will be world-readable. It is not just world-readable by default (which is already bad) but can't even be explicitly protected by the root user if need be.

So the issue isn't that a copy exists and chmod doesn't affect the copy. The issue is that the "copy" is always world-readable. That's the security issue.

I hope that made my point a bit more clear.

When making a backup (or a temporary copy for other purposes, under the hypothesis that ZFS snapshots shouldn't be used for backups), the system shouldn't force that backup/copy to be world-readable. But that is currently the case.

It’s only a security issue if they have permission to see things they shouldn’t. If you accidentally make a secrets file world readable… that’s on you. And you need to remove the snapshots that include it, just like you need to modify the file permissions.
The thesis that no new security problems are introduced has been brought up before (e.g. in the linked mailing list discussion and the first response there). There, the same (in my opinion wrong) observations were made:
  1. ZFS snapshots are just like any other backups.
  2. If there is a security problem, then the security problem already existed before. ZFS doesn't introduce new security problems.
And I think both are wrong (as I tried to explain above), because:
  • Other backups/copies aren't forced to be world-readable, so ZFS snapshots do not behave like other backups.
  • If /etc/passwd or /etc/group or ~/.ssh/authorized_keys or similar conditions on the system change, then people may access files which they should never have been able to see.

Here’s another scenario: you’re sysadmin of a machine, and my HOME dir is in the same dataset as one of your secret files. You realize you screwed up, and so prevent anyone from viewing snapshots - but now you’ve locked me out of my backups.
I think giving users access to their snapshots is a whole different use case that requires other considerations (besides, users may not create snapshots by default either). In this post/thread, I tried to discuss a scenario where root takes snapshots for purposes of backup and/or system administration reasons and/or for sending file systems to other machines, etc.



Back to the original topic, the years-old mailing list discussion, and the fact that this (alleged) security hole hasn't been fixed for almost a decade: I wonder WHY this problem keeps being ignored, with some even arguing that it isn't a security problem at all?

One reason certainly is that, at first glance, it doesn't look like a security problem.

Another reason for this problem being ignored is that it is impractical to fix. A clean fix might require introducing a new ZFS property, see zfsprops(7), which in turn requires introducing a new ZFS feature, right? Then people might argue it would be good to have fine-grained control over access to snapshots, making the whole problem even more complex to solve, and it gets dragged along for years and years and nothing happens.

I personally think that simply forbidding non-root users to access snapshots on a per-system basis would be better than keeping this security hole open even longer. Maybe a sysctl(8) variable could be introduced to allow for backwards compatibility where needed (like in the patch posted above). But I would really appreciate it if the FreeBSD security team would somehow react to this issue and/or if someone could file a corresponding PR. Simply opening an issue doesn't seem to help, as it hasn't helped for over 8 years either.

The open tickets are found here (links from the OP):
 
this is better
the previous patch had multiple problems (no per-dataset setting): it only blocked access to the pseudo .zfs and .zfs/snapshot dirs, but if you knew a snapshot name you could still go there,
because access was not restricted on the actual snapshot files
now i added 2 options to the snapdir prop and got rid of the sysctl oid
'snapdir' must be one of 'hidden | visible | prot-hidden | prot-visible'
adding a new prop seems more complicated, more files to change
using a user prop like org.forums.freebsd.ctldir_protection looks more difficult to cache

needs rebuild of libzfs besides the module

Diff:
diff --git a/sys/contrib/openzfs/include/sys/zfs_ioctl.h b/sys/contrib/openzfs/include/sys/zfs_ioctl.h
index 91439d4b7861..fe7b6659ede2 100644
--- a/sys/contrib/openzfs/include/sys/zfs_ioctl.h
+++ b/sys/contrib/openzfs/include/sys/zfs_ioctl.h
@@ -57,6 +57,8 @@ extern "C" {
  */
 #define    ZFS_SNAPDIR_HIDDEN        0
 #define    ZFS_SNAPDIR_VISIBLE        1
+#define    ZFS_SNAPDIR_PROT_HIDDEN        2
+#define    ZFS_SNAPDIR_PROT_VISIBLE    3

 /*
  * Property values for snapdev
diff --git a/sys/contrib/openzfs/include/os/freebsd/zfs/sys/zfs_vfsops_os.h b/sys/contrib/openzfs/include/os/freebsd/zfs/sys/zfs_vfsops_os.h
index 56a0ac96ac19..6dbe01cfe8c4 100644
--- a/sys/contrib/openzfs/include/os/freebsd/zfs/sys/zfs_vfsops_os.h
+++ b/sys/contrib/openzfs/include/os/freebsd/zfs/sys/zfs_vfsops_os.h
@@ -96,6 +96,7 @@ struct zfsvfs {
     kmutex_t    z_znodes_lock;    /* lock for z_all_znodes */
     struct zfsctl_root    *z_ctldir;    /* .zfs directory pointer */
     boolean_t    z_show_ctldir;    /* expose .zfs in the root dir */
+    boolean_t    z_show_ctldir_prot;    /* protect .zfs in the root dir */
     boolean_t    z_issnap;    /* true if this is a snapshot */
     boolean_t    z_use_fuids;    /* version allows fuids */
     boolean_t    z_replay;    /* set during ZIL replay */
diff --git a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vfsops.c b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vfsops.c
index 8969fd6a54bd..96a2378ffa3e 100644
--- a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vfsops.c
+++ b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vfsops.c
@@ -600,7 +600,8 @@ snapdir_changed_cb(void *arg, uint64_t newval)
 {
     zfsvfs_t *zfsvfs = arg;

-    zfsvfs->z_show_ctldir = newval;
+    zfsvfs->z_show_ctldir = newval & 1;
+    zfsvfs->z_show_ctldir_prot = newval >> 1;
 }

 static void
diff --git a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c
index f672deed34dd..ce5b56015ccf 100644
--- a/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c
+++ b/sys/contrib/openzfs/module/os/freebsd/zfs/zfs_vnops_os.c
@@ -4340,6 +4340,10 @@ zfs_freebsd_open(struct vop_open_args *ap)
     vnode_t    *vp = ap->a_vp;
     znode_t *zp = VTOZ(vp);
     int error;
+    zfsvfs_t *zfsvfs = vp->v_vfsp->vfs_data;
+
+    if (zfsvfs->z_issnap && zfsvfs->z_parent->z_show_ctldir_prot && ap->a_cred->cr_uid)
+        return (SET_ERROR(EACCES));

     error = zfs_open(&vp, ap->a_mode, ap->a_cred);
     if (error == 0)
@@ -4498,6 +4502,10 @@ zfs_freebsd_access(struct vop_access_args *ap)
     znode_t *zp = VTOZ(vp);
     accmode_t accmode;
     int error = 0;
+    zfsvfs_t *zfsvfs = vp->v_vfsp->vfs_data;
+
+    if (zfsvfs->z_issnap && zfsvfs->z_parent->z_show_ctldir_prot && ap->a_cred->cr_uid)
+        return (SET_ERROR(EACCES));


     if (ap->a_accmode == VEXEC) {
diff --git a/sys/contrib/openzfs/module/zcommon/zfs_prop.c b/sys/contrib/openzfs/module/zcommon/zfs_prop.c
index 3db6fd13f4ae..eb69e27e8ae1 100644
--- a/sys/contrib/openzfs/module/zcommon/zfs_prop.c
+++ b/sys/contrib/openzfs/module/zcommon/zfs_prop.c
@@ -235,8 +235,10 @@ zfs_prop_init(void)
     };

     static const zprop_index_t snapdir_table[] = {
-        { "hidden",    ZFS_SNAPDIR_HIDDEN },
-        { "visible",    ZFS_SNAPDIR_VISIBLE },
+        { "hidden",        ZFS_SNAPDIR_HIDDEN },
+        { "visible",        ZFS_SNAPDIR_VISIBLE },
+        { "prot-hidden",    ZFS_SNAPDIR_PROT_HIDDEN },
+        { "prot-visible",    ZFS_SNAPDIR_PROT_VISIBLE},
         { NULL }
     };

@@ -421,7 +423,7 @@ zfs_prop_init(void)
         "COMPRESS", compress_table, sfeatures);
     zprop_register_index(ZFS_PROP_SNAPDIR, "snapdir", ZFS_SNAPDIR_HIDDEN,
         PROP_INHERIT, ZFS_TYPE_FILESYSTEM,
-        "hidden | visible", "SNAPDIR", snapdir_table, sfeatures);
+        "hidden | visible | prot-hidden | prot-visible", "SNAPDIR", snapdir_table, sfeatures);
     zprop_register_index(ZFS_PROP_SNAPDEV, "snapdev", ZFS_SNAPDEV_HIDDEN,
         PROP_INHERIT, ZFS_TYPE_FILESYSTEM | ZFS_TYPE_VOLUME,
         "hidden | visible", "SNAPDEV", snapdev_table, sfeatures);
 
probably yes but i can't tell if there is a definitive answer
i left it unprotected by default so there will be no change from the "classic behaviour" -- it will break fewer setups

anyway it doesn't matter because i doubt it will ever make it upstream until an exploit of a vulnerable setuid binary sitting in older boot envs becomes a thing
otherwise it would already have been fixed by now. it took me a couple of hours to dig into it, and i have no experience with zfs or other fs implementations.
 
probably yes but i can't tell if there is a definitive answer
i left it unprotected by default so there will be no change from the "classic behaviour" -- it will break fewer setups
What I'm more worried about is breaking pool compatibility. How would such a modified dataset be interpreted by old/non-patched versions of ZFS?

anyway it doesn't matter because i doubt it will ever make it upstream until an exploit of a vulnerable setuid binary sitting in older boot envs becomes a thing
I don't know whether your patch is the best way to address this problem (and, as said, I wonder if it can break pool compatibility). Nonetheless, I think the issue needs to be fixed regardless of whether a setuid case ever happens. This is a real problem for me, not a hypothetical one. I did tighten security on a system, and the changes didn't take effect, so I have to use the ugly nullfs mount workaround (or always delete all snapshots after certain events). There are other people in the corresponding GitHub issue, etc. who also noticed this problem and/or hope for a solution.

From the self-understanding of the FreeBSD project in regard to security:

FreeBSD takes security very seriously and its developers are constantly working on making the operating system as secure as possible.

otherwise it would already have been fixed by now. it took me a couple of hours to dig into it, and i have no experience with zfs or other fs implementations.
I would guess that the problem is that it's not clear what the best solution is, while keeping compatibility with old pools in mind.
 
well i tried the pool on 13.2 and it won't import because of some unsupported klarasystems feature
trying it on vanilla 14 with lib and module unpatched it works, just the snapdir option is not displayed correctly (shows a dash)
so it is usable on 14
it may cause some breakage on unpatched systems because prot-hidden will be interpreted as ctldir visible, so importing a pool with prot-hidden on an unpatched system will be seen as snapdir=visible. that won't be fatal but still annoying (find will descend into snapshots and stuff)

probably using a custom xattr on / or some user-type property is better, that won't affect other code versions at all
 
using a *new* well-known property would be the cleanest/easiest implementation, but that requires upstream support so there won't be collisions in the future
using a user-defined one would be portable but a kind of ugly hack
 
Since this is a potential security issue, I think this needs to be fixed (or users should at least be discouraged from using this feature without knowing about the security holes it can cause).
I very much disagree. It shall not be fixed, it shall stay exactly the way it is right now.

Why? Because there is a real purpose to having snapshots. They are an exact image of what the file system was like a while ago, including all metadata (such as permissions). And as they include permissions, any security problem that existed at the time the snapshot was taken must be retained in that exact copy. Part of many use cases for snapshots is that users of the system (note that I used the plural here!) can create and access snapshots. If you set up datasets correctly, individual users can snapshot their own directories, without interacting with the admin or other users. That's why snapshots must continue to be accurate representations of what the state of the file system was, and they need to remain accessible to users.

Now, in your workflow, you are using snapshots as a backup mechanism. And in your workflow, when you find a security problem that can be expressed as file access permissions, you fix that problem by changing the access control *FOR THE LIVE COPY ONLY*, ignoring that snapshots (a.k.a. backups) exist. If you do this, then you certainly have a security problem with your backups.

What this really amounts to is the following, which I will explain with a simpler example. Imagine there is a user that has data (say a file in their home directory) they should not be storing (for any reason, whether it be legal, policy, efficiency, whatever). You as the admin go in and delete the file. Since the admin is root, there is nothing the user can do about this. But what the admin ignores in this process is that the user may have created extra copies of the offending file. Some of those copies might be in snapshots (if you allow users to create snapshots). Some may have been simply renamed. The user might have encrypted the offending file, and/or moved it somewhere else. They can copy the file to removable media (like a USB stick), or to another machine. If you as the admin believe that deleting that one offending file makes the information go away, you have to check whether the user has created other copies.

And exactly the same logic applies to making files inaccessible by changing permissions (or ACLs): if snapshots are present on the system, or if the user has used any other mechanism to make copies, you need to deal with this.

Breaking the ZFS snapshot mechanism that lots of people rely on, just to make your particular backup workflow easier, is not the solution.

What can I do in the meantime to deal with this?
Create a backup workflow that doesn't have this problem. Here is one suggestion (which is what I do at home): create a ZFS snapshot, just to get a consistent point-in-time view of the file system you want to back up. Then run backup software that reads from the snapshot and transfers any updates to a more permanent and more secure location. Then destroy the temporary snapshot. That is what my backup software at home does: every hour it runs a backup, which takes about 2.5 minutes.
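
A minimal sketch of that workflow (illustrative names; the backup tool and target are up to you):

Code:
#!/bin/sh
DS=zroot/home
SNAP=backup-$(date +%Y%m%d%H%M)
zfs snapshot "$DS@$SNAP"        # consistent point-in-time view
rsync -a "/home/.zfs/snapshot/$SNAP/" backuphost:/backups/home/
zfs destroy "$DS@$SNAP"         # the snapshot only lives for minutes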

Snapshots are not intended to be a full backup mechanism. They can be an important ingredient in creating a backup implementation. (Obnoxious side remark: As good as ZFS is, it is missing one feature here, which is consistent point-in-time snapshots across multiple pools; doing that would allow snapshotting the state of the whole system at once, with atomicity. I know that implementing this is considerable extra work, having done it before. I guess for ZFS' user base, this feature is not important enough to justify the effort.)

So the issue isn't that a copy exists and chmod doesn't affect the copy. The issue is that the "copy" is always world-readable. That's the security issue.
The whole meaning of "snapshot" is, as I said above: an exact copy of the state of the file system at the time the snapshot was taken. If something was world readable at that time, it must remain world readable. If an offending file (containing, for example, child porn) was stored at that time, that file must remain in the snapshot, bit-identical. We can't start doing content- or permission-sensitive censoring in snapshots without breaking their fidelity guarantee. If someone doesn't like that, destroy the snapshot.

Another reason for this problem being ignored is that it is impractical to fix. A clean fix might require introducing a new ZFS property, see zfsprops(7), which in turn requires introducing a new ZFS feature, right?
Indeed, one potential fix is to teach ZFS to create two types of snapshots. The traditional one, which is an identical copy, with all the warts and side effects this creates. The new one is for your workflow: the snapshot copy is only readable by ... some select group of entities, which is hard to define. We could say that these "restricted snapshots" are only readable by root. Or only by certain groups. Or we could let the whole ACL mechanism loose on them (by having two kinds of ACLs, one that is used in normal production, plus an extra one that turns on for snapshots). In a nutshell, you are requesting that ZFS introduce a new feature to please your workflow. If ZFS were a commercial system, that would be a discussion you could have with your ZFS sales person; and yes, file system features in bespoke enterprise (commercial) file systems are regularly created for specific customers, and money changes hands.

But changing the default for everyone is a non-starter in my book. It breaks the existing snapshot mechanism for most users, just to please a (I think relatively small) set of users who think that snapshots are a complete backup solution. Let's not do that.
 
I don't know what the right answer is, but consider this scenario: a user stores all their passwords in their home directory but forgot to protect it from others. Even if they fix this problem, the snapshot is readable by everyone. Now you may say storing passwords in a publicly readable file is a dumb thing to do and the OS vendor is not required to fix it. But the issue here is that as long as the snapshot is publicly readable, there is nothing they can do to fix the problem. This is very much like what happens on social media, where your years-old bigoted (as measured by today's standards) tweet can make you lose your job even if you have since developed a more evolved and nuanced view of the world.
 