ZFS: how do I create a separate home partition on ZFS?

Something equivalent to a separate /home partition on Linux.
Well, ZFS uses (virtual) "filesystems" which more or less behave the same way, but which aren't physical partitions (note: on FreeBSD, partitions are often referred to as slices).

Let me show you... This is my server's setup where storage is concerned:
Code:
breve:/home/peter $ sysctl kern.disks
kern.disks: ada1 ada0 da1 da0
I have two pools spread out over four disks (both mirrored), but for the sake of context we'll limit this to my main pool, zroot, which contains my entire base system. This is what the physical slices ("partitions") look like, all GPT-based obviously:
Code:
breve:/home/peter $ gpart show -p da0
=>      34  71132892    da0  GPT  (34G)
        34        94  da0p1  freebsd-boot  (47K)
       128  71132798  da0p2  freebsd-zfs  (34G)
So... /dev/da0p1 is my boot partition, which contains the boot code and everything needed to fire up my system, whereas /dev/da0p2 contains my actual system.
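For reference, a layout like this can be set up with gpart(8). This is a rough sketch, not the exact commands I used back then, and the boot slice size is illustrative:
Code:
# gpart create -s gpt da0                 # new GPT partition table on the disk
# gpart add -t freebsd-boot -s 512k da0   # small slice for the boot code
# gpart add -t freebsd-zfs -l disk0 da0   # rest of the disk for ZFS; -l sets a GPT label
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
The -l disk0 label is what later shows up as gpt/disk0 in the pool.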

Now we get to the ZFS part. As you can see, the second slice is of type freebsd-zfs, which indicates that it is being used for a ZFS pool. Here is my main pool:
Code:
breve:/home/peter $ zpool list -v zroot
NAME            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot          33.8G  21.3G  12.5G         -    57%    63%  1.00x  ONLINE  -
  mirror       33.8G  21.3G  12.5G         -    57%    63%
    gpt/disk0      -      -      -         -      -      -
    gpt/disk1      -      -      -         -      -      -
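For completeness: a mirrored pool like this one is created from the labeled slices in one go. Something along these lines (a sketch, not my exact original command):
Code:
# zpool create zroot mirror gpt/disk0 gpt/disk1
# zpool status zroot   # both disks should show up under the mirror vdev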
So now onto your actual question....

This pool is divided into virtual file systems ("datasets" in ZFS terms) which more or less behave the same way as any other regular slice, with the main difference that they all share the same space: that of the main ZFS pool.

Here's what it looks like from a plain Unix perspective:
Code:
breve:/home/peter $ df -lh | grep zroot
zroot                             10G    3.3G    6.8G    33%    /
zroot/home                        11G    4.3G    6.8G    38%    /home
zroot/tmp                        6.8G    6.1M    6.8G     0%    /tmp
zroot/doc                        7.2G    334M    6.8G     5%    /usr/doc
zroot/local                       13G    6.2G    6.8G    48%    /usr/local
zroot/ports                      7.0G    208M    6.8G     3%    /usr/ports
zroot/ports/distfiles            8.7G    1.9G    6.8G    21%    /usr/ports/distfiles
zroot/ports/packages             7.7G    861M    6.8G    11%    /usr/ports/packages
zroot/src                        8.7G    1.9G    6.8G    21%    /usr/src
zroot/var                        8.6G     40M    8.5G     0%    /var
zroot/var/db                     8.7G    176M    8.5G     2%    /var/db
zroot/var/db/mysql               9.0G    416M    8.5G     5%    /var/db/mysql
zroot/var/db/pkg                 8.6G     18M    8.5G     0%    /var/db/pkg
zroot/var/log                    8.5G    1.9M    8.5G     0%    /var/log
zroot/var/mail                   8.5G     61K    8.5G     0%    /var/mail
zroot/var/run                    8.5G     74K    8.5G     0%    /var/run
zroot/var/tmp                    8.5G     24K    8.5G     0%    /var/tmp
Don't concern yourself with the differences in free space. The only reason zroot/var has more free space is that I set up a reservation: in the unlikely case my system fills up due to unforeseen circumstances (say, someone filling up their home directory beyond allowed limits), this will never directly or immediately affect my actual system (for example, /var/log keeps working so that I can track the troublemakers ;)).
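Setting such a reservation is a one-liner. The 1G value here is just an illustration, not what I actually use:
Code:
# zfs set reservation=1G zroot/var   # guarantee zroot/var and its children at least 1 GB
# zfs get reservation zroot/var      # verify the property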

And here you have the actual ZFS details:
Code:
breve:/home/peter $ zfs list -r zroot
NAME                    USED  AVAIL  REFER  MOUNTPOINT
zroot                  25.9G  6.83G  3.30G  /
zroot/doc               334M  6.83G   334M  /usr/doc
zroot/home             4.57G  6.83G  4.26G  /home
zroot/local            6.32G  6.83G  6.21G  /usr/local
zroot/ports            2.90G  6.83G   208M  /usr/ports
zroot/ports/distfiles  1.86G  6.83G  1.86G  /usr/ports/distfiles
zroot/ports/packages    861M  6.83G   861M  /usr/ports/packages
zroot/src              1.86G  6.83G  1.86G  /usr/src
zroot/swap             4.13G  9.70G  1.26G  -
zroot/tmp              6.06M  6.83G  6.06M  /tmp
zroot/var               742M  8.55G  40.2M  /var
zroot/var/db            667M  8.55G   176M  /var/db
zroot/var/db/mysql      416M  8.55G   416M  /var/db/mysql
zroot/var/db/pkg       21.7M  8.55G  18.4M  /var/db/pkg
zroot/var/log          3.60M  8.55G  1.90M  /var/log
zroot/var/mail         73.5K  8.55G  60.5K  /var/mail
zroot/var/run           260K  8.55G  73.5K  /var/run
zroot/var/tmp            44K  8.55G    24K  /var/tmp
Yeah, it's a long story, but it's important to understand the inner workings of ZFS before you can do this yourself. As you can see, I have over a dozen filesystems sitting on one physical slice (partition). This is comparable to the old FreeBSD way of splitting a disk into separate UFS partitions; since those days are long gone, I figured I'd recreate that layout with ZFS.

Now, finally, how to create such a new filesystem? Simple! # zfs create zroot/src/old. That's all there is to it. This creates the zroot/src/old file system, which is automatically mounted on /usr/src/old because zroot/src was already mounted on /usr/src (properties are inherited by child file systems).
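You can check the inheritance with zfs get; the output below is roughly what you'd see, with the SOURCE column showing where the mountpoint came from:
Code:
# zfs create zroot/src/old
# zfs get mountpoint zroot/src/old
NAME           PROPERTY    VALUE         SOURCE
zroot/src/old  mountpoint  /usr/src/old  inherited from zroot/src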

Back to your example of /home? In my case that would be: # zfs create zroot/home. Because zroot is mounted on root (/), this would automatically create and mount /home.

Careful though... This works the same way as mounting a filesystem on top of existing data. For example: if you already put files into /home and then mount a new slice on top of /home, you'll end up with free space, but also with used space that has become unreachable (the old files still sit underneath the mount, taking up room on the parent filesystem).

So the full story would be to somehow back up /home, create your file system, and restore said data. OR... create the ZFS filesystem with a different (temporary) mountpoint, for example: # zfs create -o mountpoint=/mnt zroot/home. This creates zroot/home, which would normally be made available as /home, but the command line option overrides that and makes it available as /mnt instead. This allows you to move all your data onto the new filesystem; afterwards you point it at its real location with # zfs set mountpoint=/home zroot/home (ZFS unmounts it from /mnt and remounts it on /home for you).
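Put together, the whole migration would look something like this (a sketch; I'm assuming nobody is using /home at that moment, and the cp flags preserve ownership and modes):
Code:
# zfs create -o mountpoint=/mnt zroot/home   # new dataset, parked on /mnt for now
# cp -Rp /home/. /mnt/                       # copy everything, dotfiles included
# rm -rf /home/*                             # clear out the old directory (double-check first!)
# zfs set mountpoint=/home zroot/home        # remount in its final place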

Hope this gives you some ideas. Don't let the length of my post fool you: ZFS may look a bit overwhelming now, but once you grasp the basics behind these operations it's honestly very easy.

Easy enough that I could type all this out from memory, even though I hardly use these commands myself.
 
Thank you, I will study up on ZFS; I'll test it in a virtual machine to see if I can get it working.
 