Solved: zfs set volsize data loss?

I have a 1TB zvol shared over iSCSI. When it started running out of space, I decided to expand it to 2T:
zfs set volsize=2T data/disk0

However, I made a typo and the command typed was:
zfs set volsize=2G data/disk0

The command executed successfully without any warning, and now all my data in the ZVOL is gone.
I tried to set it back to 1T, but the disk still shows as uninitialized over iSCSI.

Is there any way to get the data back instead of rolling back to last week's snapshot?
Is it possible to show some warning message about possible data loss for operations like this?
 
The only way to access the data beyond 2G is through an older snapshot. You can roll back to the snapshot or restore the volume from (other) backups. I agree that zfs set should warn the user before shrinking volumes. I'm afraid I don't see a way to add this feature without breaking existing scripts or adding kludges to zfs(8).
 
Is there any way to get the data back instead of rolling back to last week's snapshot?
I'm afraid not.

Is it possible to show some warning message about possible data loss for operations like this?
There isn't any. It's similar to doing rm -rf /, which won't show a warning either.
 
SirDice: rm -rf / could, and IMO should, first try (and fail) to rmdir /, turning rm -rf / into a scary but harmless no-op.
 
Unix(-like) systems generally don't do hand-holding. It will happily execute whatever you tell it to, including stupid and destructive commands.
 
From the rm(1) manual page:

Code:
-f      Attempt to remove the files without prompting for confirmation,
        regardless of the file's permissions.  If the file does not
        exist, do not display a diagnostic message or modify the exit
        status to reflect an error.  The -f option overrides any previous
        -i options.

Don't use the -f option unless you're sure you know what you're doing.
 
Of course, but even a chainsaw shouldn't be designed to maim on purpose, and the fact that *nix systems deliberately let root fuck up his system in 1001 interesting ways is no reason to callously destroy data in response to a simple user error. The rm command is expected to delete files, but changing other ZFS properties doesn't destroy data, and recovery from wrong values is trivial in most cases.
 
This does seem like something ZFS should probably warn about. A lot of operations that will actually destroy data, or might be a mistake, require -f (although, having said that, destroy doesn't, but then the clue's in the name). And as I've said before, people should only use -f with ZFS commands (or any command where it means 'force', for that matter) if they've already had an error from not using it and are sure they want to go ahead anyway.

I'd be intrigued to know whether going back to a previous snapshot does actually restore the original size and data in this case.
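For what it's worth, the warning being asked for is easy to sketch in a wrapper script. The helper names below (to_bytes, check_shrink) are my own invention, not anything ZFS ships; the idea is simply to refuse a volsize smaller than the current one unless the user forces it:

```shell
#!/bin/sh
# Hypothetical guard around 'zfs set volsize' -- refuses to shrink a zvol.
# to_bytes and check_shrink are illustrative names, not ZFS commands.

to_bytes() {
    # Convert a size like 2T, 512G, 100M or plain bytes into a byte count.
    n=${1%[KMGTkmgt]}
    case $1 in
        *[Kk]) echo $((n << 10)) ;;
        *[Mm]) echo $((n << 20)) ;;
        *[Gg]) echo $((n << 30)) ;;
        *[Tt]) echo $((n << 40)) ;;
        *)     echo "$n" ;;
    esac
}

check_shrink() {
    # check_shrink CURRENT NEW -> fails (nonzero) when NEW is smaller.
    [ "$(to_bytes "$2")" -ge "$(to_bytes "$1")" ]
}

# In a real wrapper you'd fetch the current size first, e.g.
#   cur=$(zfs get -Hp -o value volsize data/disk0)
# and only run 'zfs set volsize=...' when check_shrink passes
# (or when the user explicitly passed a force flag).
if check_shrink 1T 2G; then
    echo "resize ok"
else
    echo "refusing to shrink; use a force flag if you really mean it"
fi
```

With the OP's typo (1T current, 2G new) the guard trips instead of silently truncating the volume.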
 
zfs destroy will not destroy a dataset that is mounted or in use by the system, just as gpart delete will not destroy a partition that contains a mounted filesystem, and umount will not unmount a filesystem while it is in use. ZFS will actively avoid executing any command that could potentially damage a running system. It will not, however, prevent the user from making mistakes while performing operations not necessarily damaging to the system itself. There's an important distinction there. The rm(1) command has the -i flag, which asks the user for confirmation before deleting a file. Some people think it's a good idea to alias rm to rm -i, so as to warn the user every time a file is about to be deleted. Now how long do you suppose a relatively competent user would tolerate that before they started reflexively hitting 'y' every time the confirmation prompt popped up? Or just disabled the alias entirely?

Crest says that a chainsaw "shouldn't be designed to maim on purpose." It's not, but it will maim all the same, because its function is (in an immediate context) inherently destructive. But under most circumstances one only uses a chainsaw to destroy what is immediately present so as to construct a space for something believed to be better. In rare cases, someone might intentionally use a chainsaw to cut someone's leg off. Between the two extremes lies the possibility for someone intending to construct a space to instead destroy their own or someone else's leg. In the immediate context it makes no difference whether the chainsaw is destroying a tree trunk or someone's leg. The function and act are the same; only the consequences are different. The chainsaw doesn't actively seek out human bodies---it just spins its chain of blades rapidly so as to cut through anything with which it comes into contact. This is just my long-winded, pretentiously philosophical way of saying it is up to the person using the chainsaw to ensure it gets used as intended.

It's generally true that the purpose of ZFS is to safeguard data, but to what extent should that be done? There are already utilities that help with this without the need to nag the user---snapshots, clones, disabling the modification of certain properties. In any case, partitions (or their zvol equivalent) should not be resized or moved without a proper backup in place beforehand, and to be blunt, if Jay_Jay had just cloned the dataset or taken a snapshot moments before doing this, there wouldn't be any problem. As a side note, I'd warn anyone using ZFS to beware the Peltzman Effect. Don't assume that next-gen tech running on a stupid machine will take care of things for you.

EDIT: Actually, zfs destroy will not avoid destroying a dataset just because it is mounted. This was a stupidly bad choice of words on my part. But it will not destroy a dataset that system processes are interacting with, just like a filesystem can't be unmounted or its partition destroyed while it is in use by the system.
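To make the "just take a snapshot moments before" habit from the post above concrete, here's a minimal dry-run sketch. The function name is mine; it only prints the commands it would run, so you can inspect them (and pipe to sh once satisfied):

```shell
#!/bin/sh
# Dry-run sketch: snapshot first, then resize. Prints commands instead of
# executing them; the function name resize_with_snapshot is illustrative.

resize_with_snapshot() {
    vol=$1
    newsize=$2
    tag=pre-resize   # in practice, something like $(date +%Y%m%d-%H%M)
    echo "zfs snapshot ${vol}@${tag}"
    echo "zfs set volsize=${newsize} ${vol}"
}

resize_with_snapshot data/disk0 2T
```

Had the typo'd 2G landed after a snapshot like this, recovery would have been a one-line rollback instead of a week of lost data.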
 
Thank you all for your replies.

I'm very cautious when executing destructive commands like dd, rm, or zfs/zpool destroy. However, it was my fault for not being aware that setting a property in ZFS could destroy data like this.

I was hoping ZFS still kept the referenced blocks somewhere, so that I could at least get some data back by growing the ZVOL back and doing a disk scan.

What I have tried last night:
1. Grew the volsize to 2T. Windows showed the ZVOL over iSCSI as "uninitialized". No good.

2. Changed the volsize back to 1T. GParted detected that the secondary GPT partition table was corrupted and suggested a recovery.
I restored the partition table and Windows was able to detect the NTFS partition, but it complained that the MFT was corrupt, saying a full disk scan might be able to recover it. This certainly gave me some hope.
I ran chkdsk D: /f /r. After a few hours of running, it still showed nothing on the disk.
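The secondary-GPT corruption GParted reported is exactly what you'd expect here: the backup GPT header always lives in the disk's last LBA, so after the shrink to 2G the backup copy written for the 1T layout was simply gone, and after growing to 2T the last LBA no longer held it either. A quick sketch of the arithmetic, assuming 512-byte sectors:

```shell
#!/bin/sh
# The backup GPT header sits in the last addressable LBA of the disk.
# backup_gpt_lba is my own helper name; it takes a disk size in bytes
# and assumes 512-byte sectors.

backup_gpt_lba() {
    echo $(( $1 / 512 - 1 ))
}

one_tib=$(( 1 << 40 ))
two_gib=$(( 2 << 30 ))
echo "1T zvol: backup GPT header at LBA $(backup_gpt_lba $one_tib)"
echo "2G zvol: backup GPT header at LBA $(backup_gpt_lba $two_gib)"
```

Since the two locations differ, any resize invalidates the backup table's position, which is why GParted offered to rewrite it.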

In the end, I just gave up and rolled back to the snapshot from last week.
I've also changed my snapshot frequency from weekly to daily.
 