Safely unmounting (disconnecting) a ZFS filesystem

I've been running FreeBSD 9.1 for some months now and it works as expected for the most part. My setup: boot in 2 x mirror (encrypted using GELI, containing the main system) and, when required, unlock a 6 x RAID-Z2 (always encrypted using GELI) which contains the "real" data.

Problem is: I'm not sure I'm unmounting it correctly. GELI always complains with error 16 (device busy?), and while the system shuts down cleanly when I run halt -p now, I'm not sure the way I do it is the best way.

I wrote a set of scripts to do all the work. I realize that the way the devices are unlocked can't really be called safe, but that's not my main concern right now.

mount_data.sh:
Code:
#!/usr/local/bin/bash
#

# ASK USER FOR PASSPHRASE
stty -echo
echo -n "Enter GELI passphrase for DATA zpool: "
read passphrase
stty echo

# WRITE PASSPHRASE TO A TMP FILE (the redirect runs as the invoking
# user, so sudo on the echo would have no effect)
FILE=passphrase.txt
rm -f "$FILE"
echo "$passphrase" > "$FILE"

# ATTACH DATA DISKS
sudo geli attach -j $FILE gpt/disk2_data
sudo geli attach -j $FILE gpt/disk3_data
sudo geli attach -j $FILE gpt/disk4_data
sudo geli attach -j $FILE gpt/disk5_data
sudo geli attach -j $FILE gpt/disk6_data
sudo geli attach -j $FILE gpt/disk7_data

# REMOVE TMP PASSPHRASE FILE
sudo rm -f $FILE

# Mount all ZFS filesystem (zdata & zdata/<other_mount_points>)
sudo zfs mount -a

unmount_data.sh:
Code:
#!/usr/local/bin/bash
#
# UNMOUNT DATA
sudo zfs umount zdata

# DETACH DATA DISKS
sudo geli detach gpt/disk2_data
sudo geli detach gpt/disk3_data
sudo geli detach gpt/disk4_data
sudo geli detach gpt/disk5_data
sudo geli detach gpt/disk6_data
sudo geli detach gpt/disk7_data

The error GELI gives is:
Code:
sudo bash unmount_data.sh

geli: Cannot destroy device gpt/disk2_data.eli (error=16).
geli: Cannot destroy device gpt/disk3_data.eli (error=16).
geli: Cannot destroy device gpt/disk4_data.eli (error=16).
geli: Cannot destroy device gpt/disk5_data.eli (error=16).
geli: Cannot destroy device gpt/disk6_data.eli (error=16).
geli: Cannot destroy device gpt/disk7_data.eli (error=16).

Now I don't really know what to do. I suppose GELI complains because the devices are still in use by ZFS (even though the filesystem is unmounted, the zpool is still imported).

I tried taking drives offline with zpool offline, but that's obviously not the solution, since you can't offline more drives than you have parity drives (two, in my case). I also tried the zpool export/import feature once, but it complained quite a bit.

Isn't there something like: "don't do anything anymore with this pool, forget about it and let me lock down the drives with GELI"?

Any other tips on improving my current setup? Am I missing something really important or doing something completely crazy?
 
The correct thing would be to zpool export the pool. If nothing is using the file systems, it should unmount them all for you, although there's nothing wrong with doing a zfs umount -a first.

The disks will be in use as long as the pool is imported.

If you get errors when trying to export or import the pool, show us what you're seeing and we can try to find the cause. There's no reason this shouldn't work.
 
The proper order for shutting things down is the reverse of the order of initialization. Since GELI is started before the ZFS pool is imported, it has to be stopped after the pool is exported.
 
The unmount may also fail because something else is still using the filesystem, such as a Samba share. On my servers that's the case most of the time, so I have to stop Samba (for example) before unmounting.


OT: :D

By the way, why are you using a temporary file for the password? There's no need for it.

/bin/echo "$password" | /sbin/geli attach -j - [device] will do the same without a temporary file (-j - makes geli read the passphrase from standard input; -p -k - would instead treat the input as a raw keyfile and skip the passphrase).
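Applied to the mount script from the question, the temporary file disappears entirely. A sketch, assuming the providers were initialized with a passphrase (which matches the -j usage in the original script):

```shell
#!/usr/local/bin/bash
#
# Sketch of mount_data.sh without a temporary passphrase file.
# Provider names are taken from the question; "geli attach -j -"
# reads the passphrase from standard input.

# Read the passphrase without echoing it to the terminal.
stty -echo
echo -n "Enter GELI passphrase for DATA zpool: "
read passphrase
stty echo
echo

# Attach each provider, piping the passphrase in.
for disk in disk2 disk3 disk4 disk5 disk6 disk7; do
    echo "$passphrase" | sudo geli attach -j - "gpt/${disk}_data"
done

# Mount the datasets (use "sudo zpool import zdata" instead if the
# pool was exported on unmount).
sudo zfs mount -a
```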

You also might consider using SHA256 for the passphrase?
 