Do I need multiple hard drives (or unformatted space on a single hard drive) for ZFS?

Hello!


I am going to do a complete wipe of my laptop, install Windows 7, shrink it as much as possible, and install FreeBSD 8.0 again (I don't have it installed now, but I did for a little while before). I plan to use ZFS and all of its juicy goodness since I've heard so much about it, but all the documentation seems to suggest I need more than one hard drive. I just have one laptop with one 160GB hard drive. The Windows slice will probably be 40GB or less (if possible), and the rest will be devoted to FreeBSD. Should my FreeBSD slice take up the rest of the free space on the disk, or will I have to leave a significant amount of unused space for ZFS? Will it even work to have ZFS on one sliced up hard drive, and is it even worth having on a laptop with such a small hard drive?


Since FreeBSD cannot yet boot from ZFS partitions (or so I've read), which partitions are the most common to make into ZFS ones? I was thinking of just making /usr and /var ZFS, but is that a bad idea, or am I missing an opportunity to make other parts of my system ZFS?

Thank you,
Agi93
 
Interesting... I don't know if I really even want a full ZFS system (maybe I just want to wait until it officially becomes a supported option for FreeBSD). Do most people around here just make /usr ZFS or do they add /var too?

From the looks of the documentation and the link you posted, I'm going to have to leave some unused space on my disk to create a pool for ZFS. Would the partitioning scheme during the install look like:

Code:
/       2GB    UFS2
swap    4GB    SWAP
/var    2GB    UFS2+S
/tmp    2GB    UFS2+S
/usr    2GB    UFS2+S

I have about 100GB to dedicate to FreeBSD, and I only set the /usr partition to 2GB because that is the minimum the Handbook recommends. If I install in this manner, then create a ZFS pool with the remaining space and put /usr (and maybe /var) in it, would I be able to reclaim the space from the partitions I would have specified (above) during the install? Could I then extend the ZFS pool to use that space?

The handbook says to do this:

Code:
# cp -rp /home/* /storage/home
# rm -rf /home /usr/home
# ln -s /storage/home /home
# ln -s /storage/home /usr/home

Notice how it removes /home and /usr/home. If I did the same with /var and /usr, would it make that space unused and make ZFS able to claim it, or would the space be used and unavailable to expand ZFS into, just with nothing in it? I hope there aren't any other complications I'm missing (but if there are, please let me know!).
 
I've got /usr/home, /usr/local, /usr/src, /usr/ports, /usr/obj and /var/db on ZFS.
The base install never changes except when building world, and the most frequently updated parts are on ZFS.
 
Agi93 said:
Code:
/       2GB    UFS2
swap    4GB    SWAP
/var    2GB    UFS2+S
/tmp    2GB    UFS2+S
/usr    2GB    UFS2+S

/ is too large, and swap probably is too. /usr is far too small.

This is on a server with only a handful of ports installed:
Code:
dice@molly:~>df -h
Filesystem                            Size    Used   Avail Capacity  Mounted on
/dev/ad0s1a                           496M     94M    362M    21%    /
devfs                                 1.0K    1.0K      0B   100%    /dev
/dev/gvinum/temp                      5.8G    589M    4.8G    11%    /tmp
/dev/ad0s1d                           989M    144M    766M    16%    /var
/dev/ad0s1e                           3.9G    497M    3.1G    14%    /usr
/dev/ad0s2g                            15G     39M     14G     0%    /usr/home
/dev/ad0s1f                           989M    512M    398M    56%    /usr/src
/dev/ad0s1g                           2.4G    1.5G    748M    67%    /usr/obj
/dev/ad0s1h                           3.9G    474M    3.1G    13%    /usr/ports
/dev/ad0s2d                           2.9G    248M    2.4G     9%    /jail/j1
/dev/gvinum/raid5                     1.3T    1.2T     18G    99%    /storage
devfs                                 1.0K    1.0K      0B   100%    /var/named/dev
/storage/FreeBSD                      1.3T    1.2T     18G    99%    /jail/j1/exports/Freebsd
devfs                                 1.0K    1.0K      0B   100%    /jail/j1/dev
/dev/ad0s2e                           5.8G    1.5G    3.8G    29%    /jail/j2
/tmp/build                            5.8G    589M    4.8G    11%    /jail/j2/tmp/build
/usr/ports                            3.9G    474M    3.1G    13%    /jail/j2/usr/ports
/storage/FreeBSD/distfiles            1.3T    1.2T     18G    99%    /jail/j2/usr/ports/distfiles
/usr/src                              989M    512M    398M    56%    /jail/j2/usr/src
/usr/obj                              2.4G    1.5G    748M    67%    /jail/j2/usr/obj
devfs                                 1.0K    1.0K      0B   100%    /jail/j2/dev
/storage/FreeBSD/packages_20100108    1.3T    1.2T     18G    99%    /jail/j2/usr/ports/packages

Code:
dice@molly:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/ad4s1b        262144      100   262044     0%
/dev/ad5s1b        262144       64   262080     0%
/dev/ad6s1b        262144       52   262092     0%
/dev/ad7s1b        262144      116   262028     0%
Total             1048576      332  1048244     0%

This is my workstation with XFCE, Gnome2-lite and a few other applications:
Code:
dice@williscorto:~>df -h
Filesystem          Size    Used   Avail Capacity  Mounted on
/dev/ad4s2a         496M    102M    354M    22%    /
devfs               1.0K    1.0K      0B   100%    /dev
/dev/ad4s2e         5.8G     58M    5.3G     1%    /tmp
/dev/ad4s2d         989M    533M    377M    59%    /var
/dev/ad4s2f         5.8G    2.4G    2.9G    46%    /usr
procfs              4.0K    4.0K      0B   100%    /proc
linprocfs           4.0K    4.0K      0B   100%    /usr/compat/linux/proc
/dev/ad4s3h          27G     23G    1.7G    93%    /usr/home
/dev/ad4s3e         2.9G    1.7G    1.0G    62%    /jail/j1
/dev/ad4s3f         2.9G    4.0K    2.7G     0%    /jail/j2
molly:/usr/ports    3.9G    474M    3.1G    13%    /usr/ports
molly:/usr/src      989M    512M    398M    56%    /usr/src
molly:/usr/obj      2.4G    1.5G    748M    67%    /usr/obj
molly:/storage      1.3T    1.2T     18G    99%    /storage

Notice that /usr has about 2.4GB in use, and that's without /usr/src, /usr/obj, /usr/ports or /usr/home on it.
 
Matty said:
I got usr/home, usr/local, usr/src, usr/ports, usr/obj and var/db on zfs.

Why didn't you just put all of /usr and /var on ZFS?



And SirDice, this will probably be a better arrangement:

Code:
/       1GB    UFS2
swap    2.5GB  SWAP
/var    2GB    UFS2+S
/tmp    2GB    UFS2+S
/usr    12GB   UFS2+S

I made the root partition 1GB instead of 512MB because I've had a couple of errors when building world (especially when I transfer from 8-RELEASE to 8-STABLE) because of insufficient space, so I'd like to have this comfort zone. I have 2GB of RAM, and FreeBSD by default makes the swap partition 2-3x the RAM size, so I reduced it enough to have a bit of extra space as a cushion while not taking up too much extra space. I was surprised to see /usr needs so much space, so I ramped it up to 12GB just to be safe (I'll only really need it to get my system up and running, online, and tracking stable before I switch /usr or parts of it to ZFS). I hope that's enough!

That leaves me with about 88 gigs of completely unused, unformatted space on my hard drive. If I went with a setup like this, would I then create a pool on that 88 gigs and transfer all my /usr and /var data (or maybe just some subdirectories of those if that's a better idea) to it? I still don't see what happens to the UFS partitions I allocate to /var and /usr during install. That's 14GB in this setup that I would like to be able to remove (or shrink to insignificance) on the UFS partitions, then expand the ZFS pool to take up those 14 gigs. Is that even possible? I guess I want to most efficiently get space for my ZFS partitions on my relatively small hard drive, but I'm not 100% sure about the right way to go about it.


Thanks for your help so far, everyone!
 
Personally, for a split UFS+ZFS system, I'd put / and /usr on UFS, with everything else (/var, /tmp, /home, /usr/ports, /usr/obj, /usr/src, /usr/local) on ZFS.

That way, you can always boot into single-user mode, and have a full, working FreeBSD. It's a lot simpler to fix ZFS issues, if you can access the ZFS tools, which are under /usr. :)

My home computer has / on UFS and /usr on ZFS, and it's bitten me a couple times. :( I've had to boot with LiveFS CDs a couple times to fix screwups on my part. Having access to /usr on UFS would have made things so much easier.

Because of this, all our ZFS storage servers at work leave / and /usr on UFS.

You may want to create 3 slices on the disk:
  1. would be for Windows
  2. would be for / and /usr and swap
  3. would be for ZFS
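To illustrate slice 3, here's a rough sketch of creating the pool and datasets. It assumes the laptop's disk shows up as ad0 and the ZFS slice ends up as ad0s3 (hypothetical device names; check yours with gpart or fdisk first):

```shell
# Create a single-vdev pool on the third slice (no redundancy on one disk).
zpool create tank /dev/ad0s3

# Create datasets for the filesystems that will live on ZFS.
zfs create tank/var
zfs create tank/tmp
zfs create tank/home

# Point each dataset at its mountpoint.
zfs set mountpoint=/var tank/var
zfs set mountpoint=/tmp tank/tmp
zfs set mountpoint=/home tank/home
```

These need root and a live ZFS-enabled system, so treat them as a sketch rather than a recipe.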
 
phoenix said:
Personally, for a split UFS+ZFS system, I'd put / and /usr on UFS, with everything else (/var, /tmp, /home, /usr/ports, /usr/obj, /usr/src, /usr/local) on ZFS.

That way, you can always boot into single-user mode, and have a full, working FreeBSD. It's a lot simpler to fix ZFS issues, if you can access the ZFS tools, which are under /usr. :)

My home computer has / on UFS and /usr on ZFS, and it's bitten me a couple times. :( I've had to boot with LiveFS CDs a couple times to fix screwups on my part. Having access to /usr on UFS would have made things so much easier.

Because of this, all our ZFS storage servers at work leave / and /usr on UFS.

You may want to create 3 slices on the disk:
  1. would be for Windows
  2. would be for / and /usr and swap
  3. would be for ZFS

Thanks! That sounds like a great idea. Good thing you told me about having /usr available in single-user mode if it's on UFS, or I probably would have run into some serious problems down the road.

I've got my plan now :)
 
Agi93 said:
Thanks! That sounds like a great idea. Good thing you told me about having /usr available in single-user mode if it's on UFS, or I probably would have run into some serious problems down the road.

I've got my plan now :)

When I mention / and /usr being on UFS, I mean only having 1 filesystem for them, not having two separate ones.

You can separate them, there's nothing wrong with it. I just find it easier to have everything in FreeBSD available at the single-user prompt, without having to mount anything. If /usr is just a directory on the / filesystem, it makes things simpler. :)
 
Oh, so I should just skip creating an ad4s2f partition, which would leave /usr as a directory on the root filesystem (and add the 12GB to the / partition). What about the UFS2 vs. UFS2+S business? Which one should I use for this large root partition?

Then again, if mounting partitions when entering single-user mode is as simple as

Code:
# fsck -p
# mount -u /
# mount -a -t ufs
# swapon -a

then I don't really mind doing that.

Wait, what about ZFS? If I set /var, /tmp, /home, /usr/ports, /usr/obj, /usr/src, and /usr/local to be ZFS, would I have to run
Code:
# mount -a -t zfs
or something similar to get those partitions up to, say, rebuild and install world and the kernel, or will I have everything I need outside of those partitions? It seems like I wouldn't, since /usr/src is included.
 
If /usr is on a separate filesystem, then the process for single-user mode would be:
Code:
# mount -u /
# mount /usr
# /etc/rc.d/hostid start
# /etc/rc.d/zfs start

The /etc/rc.d/hostid part is very important to do before running any ZFS commands.
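After /etc/rc.d/zfs start, the pool's filesystems should come up on their own; if anything is missing, a quick check with the standard zpool(8)/zfs(8) tools would look something like:

```shell
# Check pool health and see which datasets exist and where they mount.
zpool status
zfs list

# Mount any ZFS filesystems that did not mount automatically.
zfs mount -a
```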
 
Thanks for that! I'll probably have to drop into single user mode very soon after installing because I plan to track a new source branch, configure the kernel, etc. early on. You probably saved me some major headaches.
 
OK, now I have a problem with having two filesystems claiming the mountpoints /var and /tmp. I set the mountpoints of tank/var and tank/tmp to /var and /tmp respectively, but my UFS partitions are still being mounted there. How do I make the system use the ZFS datasets, falling back to the UFS partitions only when ZFS is not available?
 
Agi93 said:
OK, now I have a problem with having two filesystems claiming the mountpoints /var and /tmp. I set the mountpoints of tank/var and tank/tmp to /var and /tmp respectively, but my UFS partitions are still being mounted there. How do I make the system use the ZFS datasets, falling back to the UFS partitions only when ZFS is not available?

Edit your fstab with something like this (noauto is the magic word)
Code:
/dev/ad6s2a		/tmp		ufs	rw,noauto,noatime	0	0
/dev/ad6s2d		/var		ufs	rw,noauto,noatime	0	0

Then they won't be mounted by default, but you can still mount them easily with an explicit mount(8) command.
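For instance, in single-user mode (when ZFS isn't up) the noauto fallbacks from the fstab entries above could be brought up by hand, along these lines:

```shell
# Preen-check the UFS fallbacks, then mount them by their fstab mountpoints.
fsck -p /tmp /var
mount /tmp
mount /var
```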

OTOH, running with alternating /var's is probably not a good idea.
 
Thanks! What do you mean by alternating /var? Having it mounted in the UFS and ZFS parts at the same time?
 
Agi93 said:
Thanks! What do you mean by alternating /var? Having it mounted in the UFS and ZFS parts at the same time?

No, I meant switching between them (at boot time, most likely).
You'll want your /var/db/ports tree, for instance, to stay in sync. Alternating between two /var filesystems will quickly mess up things like that.
 
So would the best solution be to rsync my two /vars periodically? That seems pretty tricky, since I just put /var on ZFS. And if something goes wrong with ZFS and I need to enter single-user mode, will it really matter whether my ports database is in sync if I'm just going to use the ZFS tools in my UFS /usr?
 
If you just want to boot single-user to fix any problems with your ZFS, you'll be fine. No need to worry about it then.
 
You really should empty out /var and /tmp (the directories on the UFS / partition). They are not needed for anything in single-user mode, which is the only time the ZFS filesystems for /var and /tmp won't be available.

Same with the /home (or /usr/home) directory if you want to use ZFS for /home (or /usr/home). The directory on the UFS partition should be blank (root's home directory is /root and is the only one you need in single-user mode).

Same for all your ZFS filesystems. The directories that are used for the mountpoints for the ZFS filesystems should be blank.
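A sketch of doing that cleanup from single-user mode, before the ZFS filesystems are mounted (the paths assume the layout discussed in this thread; verify what is mounted first, since rm -rf under a live mountpoint would hit the ZFS data instead of the leftover UFS copies):

```shell
# Confirm nothing ZFS is mounted over these directories yet.
mount | grep zfs

# Empty the UFS directories that will serve as ZFS mountpoints.
rm -rf /var/* /tmp/* /home/*
```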
 
But is there any problem with just leaving the current files on my UFS partitions? I know blanking them would save some disk space, but I allocated a good amount to each partition, so none of them is anywhere near full.


Also, isn't single-user mode a security risk? If someone I knew used FreeBSD and I were malicious enough to want to destroy their OS, couldn't I just boot into single-user mode, run the commands to mount everything, and wreak havoc at will? It's not really a problem for me, since my computer is at home and nobody I know even knows what BSD (or UNIX, for that matter) is.
 
If you change the "secure" option to "insecure" for the console entry in /etc/ttys, then you will be prompted for a root password before you can load a shell in Single-User Mode.
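For reference, the relevant console line in /etc/ttys would then read something like this (the stock entry ends in "secure"):

```
console	none			unknown	off	insecure
```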

However, if someone is sitting in front of your computer, and has the ability to boot off a floppy or CD-ROM drive, you have bigger things to worry about than single-user mode. ;)

Leaving files in the directories won't harm anything, but your df output will be misleading, as df counts the disk space used by files "hidden" under a mountpoint. And du output will not correspond to df output for the same reason (du only sums the files visible in the directory tree, while df reports the space actually used on the filesystem).
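To see the discrepancy, you could compare the two (standard df(1)/du(1) flags; -x keeps du on one filesystem):

```shell
# Space used on the root filesystem, including files hidden under mountpoints.
df -h /

# Sum of the visible files only; anything buried under a mountpoint is not counted.
du -shx /
```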
 