ZFS questions

Neat! Did you use partitions or raw devices?
Assuming you're asking me: partitions, because raw devices may not be exactly the same size, and partitioning lets you set things up on good alignment.
Basically, the 1TB drives were partitioned to almost the whole drive (roughly 98%), aligned on a 1M boundary. The 3TB drives were done the same way: aligned on a 1M boundary, about 98% of the whole drive.
I never did any partition growing or explicit ZFS resize (but it's been a while, so there may have been one in there).

Michael W Lucas, FreeBSD Mastery: ZFS (the first one), pg 124, "Larger Providers".
There is a zpool property, autoexpand, that you set to on before you start; ZFS will then expand automatically once all providers are replaced.
If you don't set that property first, you need to run a command afterwards (I wasn't sure what it was offhand).
Dug around: the command is "zpool online -e" (man zpool-online).
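Roughly like this (the pool and device names here are just placeholders):
Code:
# set before replacing providers; the pool then grows on its own
zpool set autoexpand=on mypool

# otherwise, expand a replaced provider after the fact
zpool online -e mypool gpt/mydisk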
 
Hello Forum,

thank you for this thread!

This is the first article in months of research that brings me some clarity on the subject of ZFS (besides the book FreeBSD Mastery: ZFS by Michael W Lucas and Allan Jude).
I am in a similar situation to fullauto2012 and I am not sure yet whether I have understood everything.

I have a 13.1-RELEASE stripe install on a 256GB M.2 NVMe.
Now I have added two 1TB SATA SSDs to my system.

What I want:
The two SATA SSDs should simply serve as a data dump (mirror).
If the system on the NVMe goes south, I just plug in a new NVMe, install a fresh system, and add the data dump afterwards.
So I do not want to add or attach the mirror to the system but only "mount" it.

What I have done so far:

Code:
gpart create -s GPT ada0
gpart create -s GPT ada1

gpart add -a 1M -l SSD_serial# -t freebsd-zfs ada0
gpart add -a 1M -l SSD_serial# -t freebsd-zfs ada1


What I plan to do next:

Code:
sysctl vfs.zfs.min_auto_ashift=12    # already in my /etc/sysctl.conf
zpool create datadump mirror gpt/SSD_serial# gpt/SSD_serial#

- Is it correct here that I use the GPT labels (SSD_serial#)?

- Will the mirror be automatically added to my existing system or do I have to create a "mountpoint" beforehand like e.g. /mnt?

- Does the lower speed of the SSDs affect my system in any way, other than when reading from and writing to the mirror?


Please excuse the wall of text, but I just cannot see the forest for the trees...

Many thx!
 
Will the mirror be automatically added to my existing system or do I have to create a "mountpoint" beforehand like e.g. /mnt?
Separating the OS and "data" is good; I've been doing that for a long time, and it makes upgrading things easy. Consider what you want to put on the new mirror: I typically put /usr/home there (with multiple users you can create a dataset for each one) plus a generic "data" dataset.

The zpool itself may not be mounted by default (I simply don't recall, it's been a while), but you're also going to want to create a dataset (or more than one) on it with the zfs create command, and when you do that you can easily set a mountpoint.
After creating a dataset it gets mounted automatically; there's no need to put anything in /etc/fstab and no need to create the directories yourself.
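Something like this (the dataset names are just examples):
Code:
# mountpoints are created and the datasets mounted automatically
zfs create -o mountpoint=/usr/home datadump/home
zfs create datadump/data    # defaults to /datadump/data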
On a system reboot, all the zpools the system knows about or finds are automatically "dealt with" based on their properties, and that includes all their datasets.
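For your reinstall scenario, the one thing a freshly installed system won't do on its own is learn about the existing pool, so a one-time import is needed (a sketch, using your pool name):
Code:
# list pools found on attached disks, then import yours;
# after this, the system remembers the pool across reboots
zpool import
zpool import datadump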

Using the SSD serial numbers as the GPT labels is good, and your zpool create using them works fine (I did the same thing).

There may be some workloads that show a performance hit, but I'd guess you won't see it unless you measure for it. In theory, a write to a mirror isn't complete until both devices have written the data, so if one is a lot slower than the other, you may be able to measure that. Reads complete as soon as one device has delivered the data, so the faster device will finish reads more often; again, you might only be able to measure it if you go looking for it.
My opinion: if this is a user system where you are browsing, programming, etc., you won't likely notice an impact.
 
Identifying the "disk" with a GPT label is a really good idea. It helps enormously when you have to replace or re-arrange the hardware. In particular, it defends against pulling the wrong disk when half a mirror breaks -- thus destroying the mirror completely (a surprisingly common mistake).

I use two schemes, depending on the device. For a root disk mirror, I partition each SSD identically, and label each partition with the (clipped) serial number plus the partition number. If I have to remove an SSD, I check the serial number; the SSDs are mounted flat on the underside of the motherboard, so the serial numbers are visible:
Code:
# smartctl -a /dev/ada0 | grep Serial
Serial Number:    BTHC7410008H400VGN

# smartctl -a /dev/ada1 | grep Serial
Serial Number:    BTHC534204D4400VGN

# gpart show ada0
=>       40  781422688  ada0  GPT  (373G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   33554432     2  freebsd-swap  (16G)
   33556480  180355072     3  freebsd-zfs  (86G)
  213911552   25165824     4  freebsd-zfs  (12G)
  239077376  542345352     5  freebsd-zfs  (259G)

# gpart list ada0 | grep label
   label: 410008H400VGN:p1
   label: 410008H400VGN:p2
   label: 410008H400VGN:p3
   label: 410008H400VGN:p4
   label: 410008H400VGN:p5

# zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:03:30 with 0 errors on Wed Apr  5 03:21:10 2023
config:

    NAME                      STATE     READ WRITE CKSUM
    zroot                     ONLINE       0     0     0
      mirror-0                ONLINE       0     0     0
        gpt/410008H400VGN:p3  ONLINE       0     0     0
        gpt/34204D4400VGN:p3  ONLINE       0     0     0

errors: No known data errors

# gmirror status    
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)

# swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/mirror/swap.eli  16777212        0 16777212     0%
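For reference, a label like those can be set with gpart; a sketch (the index, size, and label mirror the output above, but treat them as illustrative):
Code:
# set the clipped-serial label on an existing partition (index 3)
gpart modify -i 3 -l 410008H400VGN:p3 ada0

# or attach the label when the partition is created
gpart add -a 1M -t freebsd-zfs -s 86G -l 410008H400VGN:p3 ada0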
Where whole disks are dedicated to ZFS, I create a single partition and label it with the disk's position in the vertical stack (L0 - L7) plus the serial number. The serial numbers are not visible while the disks are racked, so if I have to remove and replace a disk, I use the stack position (L0 - L7) to identify the spindle and check the serial number once it's out. Note that da0 is at the bottom of the stack (L0):
Code:
# gpart show da0  
=>        40  5860533088  da0  GPT  (2.7T)
          40  5860533088    1  freebsd-zfs  (2.7T)

# smartctl -a /dev/da0 | grep Serial
Serial Number:    ZC135AE5

# gpart list da0 | grep label 
   label: L0:ZC135AE5
 
# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 07:26:21 with 0 errors on Thu Feb  9 10:32:49 2023
config:

    NAME                      STATE     READ WRITE CKSUM
    tank                      ONLINE       0     0     0
      mirror-0                ONLINE       0     0     0
        gpt/L1:ZC1564PG       ONLINE       0     0     0
        gpt/L6:WMC1T1408153   ONLINE       0     0     0
      mirror-1                ONLINE       0     0     0
        gpt/L0:ZC135AE5       ONLINE       0     0     0
        gpt/L5:WMC1T2195505   ONLINE       0     0     0
      mirror-2                ONLINE       0     0     0
        gpt/L4:ZC12LHRD       ONLINE       0     0     0
        gpt/L3:WCC4N5CVZ6V4   ONLINE       0     0     0
      mirror-3                ONLINE       0     0     0
        gpt/L2:ZC1AKXQM       ONLINE       0     0     0
        gpt/L7:WE23ZTX9       ONLINE       0     0     0
    special   
      mirror-5                ONLINE       0     0     0
        gpt/34204D4400VGN:p5  ONLINE       0     0     0
        gpt/410008H400VGN:p5  ONLINE       0     0     0
    logs   
      mirror-4                ONLINE       0     0     0
        gpt/410008H400VGN:p4  ONLINE       0     0     0
        gpt/34204D4400VGN:p4  ONLINE       0     0     0

errors: No known data errors
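If a spindle does fail, the labels make the swap unambiguous; a hypothetical replacement (new disk appearing as da8, serial NEW12345, going into slot L3):
Code:
# partition and label the new disk, then replace by label
gpart create -s GPT da8
gpart add -a 1M -t freebsd-zfs -l L3:NEW12345 da8
zpool replace tank gpt/L3:WCC4N5CVZ6V4 gpt/L3:NEW12345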
 
The penny has dropped and the mirror is up and running; ZFS is awesome!

Many thanks to all for the detailed explanations, examples and the last piece of the puzzle.


Nandor
 