Things would have been frustrating anyway when running on limited free storage, and even more so when using a conventional filesystem.
I'm not sure what it is you think is more frustrating; I'm guessing you're thinking in terms of partitions. What I'm referring to is something like this: say I have a ZFS filesystem. There is a directory sitting on that filesystem, subject to that filesystem's properties. Perhaps I don't want this directory subjected to those properties, though---say, I have copies=2 set, but this directory holds a large music collection or a bunch of virtual machine images that will eat up a lot of disk space. I don't want (and can live without) redundant copies of all that on my desktop/laptop, so I resolve to move them to a dedicated filesystem.

Creating the filesystem can be simple enough, but then I have to move the files into that new filesystem. I'd have to mount the filesystem elsewhere, and either copy or move the files to it. I could use something like rsync to cleanly copy the files, but would temporarily have three copies of each file on the filesystem---hardly a solution to my original problem. I could also just mv the directory, or use rsync and immediately delete the original directory if I have enough space to do so.

But once I've moved or deleted the original directory, it would remain in the most recent snapshot of its original parent filesystem. I could get rid of it (most of it, anyway) by destroying the snapshot, but that would also entail destroying the recent backups of any other modified/deleted files that reside on that filesystem. We might suppose that, for the time being, I have enough space to work with that I can just leave the snapshot be and wait for my automated snapshot schedule to rotate it out. But the number of snapshots containing data that refers to those files is a variable that depends on both the snapshot schedule I've configured and the span of time that directory has been subject to that schedule: I could have a half-dozen snapshots containing data from that directory, and they may run on a schedule that includes monthly or multiple weekly snapshots. So completely removing the original data once it's been copied to the new filesystem would entail either destroying many snapshots, or waiting an indefinite amount of time (possibly a month or more) for the old snapshots referring to the directory to be rotated out. In the meantime, that space will remain consumed.

And that's about as simple as such a scenario gets. It can potentially get more complicated depending on how your ZFS dataset tree and filesystem organization interact, how frequently data is modified, what sort of backup scheme you want, etc.
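The whole dance might look something like this in the shell. This is only a sketch; the pool name (tank), dataset names, and paths are hypothetical, and it assumes copies=2 is set on the parent filesystem:

```shell
# Hypothetical setup: pool "tank", copies=2 on tank/home, and a large
# music directory under /home/user/music that we want carved out.

# Create a dedicated filesystem without the redundant-copies property,
# mounted out of the way for now.
zfs create -o copies=1 -o mountpoint=/home/user/music-new tank/music

# Copy the data over. During this step the data exists in both places
# (plus the ditto copies on the original), so free space must allow it.
rsync -a /home/user/music/ /home/user/music-new/

# Remove the original and swap the new filesystem into place.
rm -rf /home/user/music
zfs set mountpoint=/home/user/music tank/music

# The deleted files still live on in any snapshots of tank/home;
# this shows the space each snapshot continues to hold.
zfs list -t snapshot -o name,used -r tank/home
```

Until those snapshots of tank/home are destroyed or rotated out, the space reported in that last listing stays consumed, exactly as described above.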
Obviously, these things really only apply if you're working with a lot of data on a single disk that's, say, less than 1TB in size. If you're using a 1TB mirror you'll have much less to worry about, but that's often not the case on a desktop or laptop. It also drives home the point that, while ZFS can certainly be used anywhere, it was really designed with high-capacity storage servers in mind. On the plus side, if you fall into the same traps I did, you'll learn a hell of a lot about both the power and limitations of ZFS.
Well, that's well known, but what are the consequences for upgrading/rolling back systems when there are separate datasets for /usr, /etc, and /var?
They get excluded. Go back and look at my example: workbox/ROOT/master is the root of the boot environment tree. Only filesystems under that are included in the boot environment.
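To make the distinction concrete, here is a hedged sketch using the workbox pool from my example (the /usr and /var dataset names are hypothetical):

```shell
# Datasets created *under* the boot environment root travel with it
# when the BE is snapshotted, cloned, or rolled back:
zfs create workbox/ROOT/master/usr   # included in the boot environment

# Datasets created elsewhere in the pool are siblings of ROOT and are
# left untouched by BE operations:
zfs create workbox/var               # excluded from the boot environment

# Listing the tree under the BE root shows exactly which filesystems
# an upgrade or rollback of the boot environment will affect.
zfs list -r workbox/ROOT/master
```

So whether /usr, /etc, or /var survives a rollback comes down entirely to whether its dataset sits under workbox/ROOT/master or outside it.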