ZFS What's the magic of zpool?

toorski

Active Member

Reaction score: 49
Messages: 157

On my system:
FreeBSD 12.0-RELEASE-p7, booting from /dev/ada1
I added a second HD, /dev/ada0 (2TB), with gpart, and created a ZFS dataset with:
Code:
# zfs create zroot/hdisk
# zfs set compression=gzip zroot/hdisk
per: https://www.freebsd.org/doc/handbook/zfs-quickstart.htm

Then I did:
Code:
zpool add zroot ada0
EDIT:
I thought I didn't have anything ZFS-related in /etc/rc.conf, such as
Code:
zfs_enable="YES"
but after reboot ada0 is still there, part of my zpool under zroot/hdisk, and working
:-/

It turns out it was in /etc/rc.conf after all :oops:

My questions are:

Why, after adding ada0 to the zpool, can gpart no longer see it in gpart list or gpart show?
But in zpool I have:
Code:
zpool status -v
  pool: zroot
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada1p3    ONLINE       0     0     0
          ada0      ONLINE       0     0     0
And then, what happens after I do
Code:
zpool remove zroot ada0
Should I assume that I'll only lose the zroot/hdisk dataset on ada0?

Edit:
I did the same thing, adding a 2nd HD, back in 11.*, but I forgot to take notes on how I did it, so now I'm dazed & confused in 12 :confused:
 

usdmatt

Daemon

Reaction score: 527
Messages: 1,418

You added the entire ada0 disk to the pool rather than just a partition on it. ZFS will write metadata to the start and end of the device (in this case the start/end of the disk itself), which probably messed up the GPT partition table on it.

Generally speaking you can't remove a disk from a striped pool once you have added it (I think there may be changes coming in the future to allow this, but even that may only be applicable in some cases). Seeing as ZFS is now striping data across both ada1p3 and ada0, both disks may contain unique data and you won't be able to remove ada0 from the pool.

Were you trying to create a mirror or actually a stripe? Either way you probably wanted to add a partition to the pool, not the entire disk seeing as you appear to be booting off this pool.
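The safer pattern, sketched below with illustrative names, is to partition the new disk first and add the partition to the pool; the device and pool names here are assumptions, not commands to copy verbatim.

```shell
# Assumption: ada0 is the new, empty 2TB disk and zroot is the pool.
gpart create -s gpt ada0              # fresh GPT partition table on the disk
gpart add -t freebsd-zfs -a 1m ada0   # one ZFS partition, 1MiB-aligned -> ada0p1
zpool add zroot ada0p1                # stripe the partition (not the raw disk) in
```

With the partition in the pool, gpart still owns the disk layout, so gpart show ada0 keeps working.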
 
toorski (OP)

You added the entire ada0 disk to the pool rather than just a partition on it.
Yes, and that's where I took a calculated risk, by adding the entire drive to the pool.
Initially, I wanted to increase my storage space by adding a new partition on the 2nd HD. I did create the partition with gpart, but during the ZFS setup I missed something and got confused when I couldn't mount my new ZFS dataset onto that partition. I will go through the process of attaching a secondary drive again to learn how to add a partition to the pool, and never lose my notes again 😡

Were you trying to create a mirror or actually a stripe?
Yes, after I jumped into zpool mode I wanted a stripe, knowing full well that I was welding the SSD and the HD together for good into one filesystem
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 8,050
Messages: 31,631

It's a bad idea to mix an SSD and a HDD in this way. The speed of the pool is determined by the slowest device in the pool. So you never take advantage of the SSD.
 

Alexander Huemeyer

Member

Reaction score: 4
Messages: 28

It's a bad idea to mix an SSD and a HDD in this way. The speed of the pool is determined by the slowest device in the pool. So you never take advantage of the SSD.
Is this true? I always thought it would write block 1 to drive A, block 2 to drive B, block 3 to drive A, ...
 

garry

Member

Reaction score: 45
Messages: 55

Concatenation of two partitions can be done with gconcat(8), and then you create a UFS filesystem on the new concat device. When I stripe drives I usually use UFS on gstripe(8); I found that the write speed almost doubles, but with ZFS stripes I saw little increase in write speed. Of course, no one here would ever stripe an SSD with an HDD, and I don't know what the performance characteristics would be for a UFS filesystem built on top of (gconcat ssd hdd). Try it and report back. :-/
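As a rough sketch of the gstripe(8) route mentioned above (the device names da0/da1 and the label st0 are made up for illustration):

```shell
# Load the stripe class, then build a striped provider from two disks.
kldload geom_stripe                      # or geom_stripe_load="YES" in /boot/loader.conf
gstripe label -v st0 /dev/da0 /dev/da1   # creates /dev/stripe/st0
newfs -U /dev/stripe/st0                 # UFS2 with soft updates on the stripe
mount /dev/stripe/st0 /mnt
```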
 
toorski (OP)

All done, but now with two additional HDs, each with its own zpool.

Code:
gpart status                                                                                       
  Name  Status  Components                                                                                                       
ada0p1      OK  ada0                                                                                                             
ada2p1      OK  ada2                                                                                                             
ada2p2      OK  ada2                                                                                                             
ada2p3      OK  ada2                                                                                                             
ada1p1      OK  ada1


Code:
zpool status                                                                                      
  pool: data                                                                                                                    
state: ONLINE                                                                                                                  
  scan: none requested                                                                                                          
config:                                                                                                                        
                                                                                                                               
        NAME        STATE     READ WRITE CKSUM                                                                                  
        data        ONLINE       0     0     0                                                                                  
          ada0p1    ONLINE       0     0     0                                                                                  
                                                                                                                               
errors: No known data errors                                                                                                    
                                                                                                                               
  pool: files                                                                                                                  
state: ONLINE                                                                                                                  
  scan: none requested                                                                                                          
config:                                                                                                                        
                                                                                                                               
        NAME        STATE     READ WRITE CKSUM                                                                                  
        files       ONLINE       0     0     0
          ada1p1    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada2p3    ONLINE       0     0     0

errors: No known data errors
 
toorski (OP)

During my experiments with adding storage devices, such as SSDs and HDs, with ZFS, I also checked on:
I'm assuming the above doc doesn't apply to ZFS, but only to the UFS1 and UFS2 file systems. Am I correct?
 

SirDice

Partitioning is done in exactly the same way, only for ZFS you would use freebsd-zfs instead of freebsd-ufs. And newfs(8) is only for UFS.
 
toorski (OP)

This is how I've done it, and how I summarized my notes after reading, more or less, 100+ pages on the subject of how to add non-removable storage devices in FreeBSD :)

My notes:

Adding additional SSDs/HDs with ZFS, each with an independent zpool, to an existing FreeBSD installation on /dev/ada2: no stripe, no RAID.

Code:
gpart create -s GPT ada1
gpart add -t freebsd-zfs ada1

gpart create -s GPT ada0
gpart add -t freebsd-zfs ada0

zpool create data /dev/ada0p1
zpool create files /dev/ada1p1

zfs create data
zfs create files
Later, I'll add:
zfs set compression=gzip ......

For now, this is just a test, until I find my other 2TB HD. In the end I want to have 2x2TB HDs, with one dedicated to iocage jails.

Code:
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
data on /data (zfs, local, nfsv4acls)
files on /files (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
files/iocage on /files/iocage (zfs, local, nfsv4acls)
files/iocage/download on /files/iocage/download (zfs, local, nfsv4acls)
files/iocage/images on /files/iocage/images (zfs, local, nfsv4acls)
files/iocage/jails on /files/iocage/jails (zfs, local, nfsv4acls)
files/iocage/log on /files/iocage/log (zfs, local, nfsv4acls)
files/iocage/releases on /files/iocage/releases (zfs, local, nfsv4acls)
files/iocage/templates on /files/iocage/templates (zfs, local, nfsv4acls)
files/iocage/download/11.3-RELEASE on /files/iocage/download/11.3-RELEASE (zfs, local, nfsv4acls)
files/iocage/releases/11.3-RELEASE on /files/iocage/releases/11.3-RELEASE (zfs, local, nfsv4acls)
files/iocage/releases/11.3-RELEASE/root on /files/iocage/releases/11.3-RELEASE/root (zfs, local, nfsv4acls)
fdescfs on /dev/fd (fdescfs)
files/iocage/download/12.0-RELEASE on /files/iocage/download/12.0-RELEASE (zfs, local, nfsv4acls)
files/iocage/releases/12.0-RELEASE on /files/iocage/releases/12.0-RELEASE (zfs, local, nfsv4acls)
files/iocage/releases/12.0-RELEASE/root on /files/iocage/releases/12.0-RELEASE/root (zfs, local, nfsv4acls)
 

usdmatt

zpool create data /dev/ada0p1
zpool create files /dev/ada1p1

zfs create data
zfs create files
The root dataset (data and files in this case) is created by default. You don't need to run that zfs create command; in fact I expect it will error out.
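To illustrate (a sketch; the pool name follows the notes above): the root dataset exists as soon as the pool does, so only child datasets need an explicit create.

```shell
zpool create data /dev/ada0p1   # also creates and mounts the root dataset "data"
zfs list -o name,mountpoint data
zfs create data                 # fails: dataset already exists
zfs create data/backups         # child datasets are what you create explicitly
```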

zfs set compression=gzip
Obviously a personal choice, and there's nothing wrong with gzip, especially if compression ratio is your primary concern, but lz4 tends to be the default choice these days. It's nearly as good on compression and fast enough that enabling it is pretty much recommended by default in many ZFS guides.
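For example (the dataset name is illustrative; the compression properties themselves are standard ZFS):

```shell
zfs set compression=lz4 data    # applies only to blocks written after this point
zfs get compression data        # verify the setting
zfs get compressratio data      # observed compression ratio over time
```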
 