Solved: How to convert an existing active zpool (raidz, with root on it) to 4k alignment

So here is what I have done. It seems to be working, but terminal response over ssh is incredibly slow now:

I have:
ada0 - PCIe flash storage
ada1-ada5 - SATA 3 magnetic disks
ada6 - SSD

I basically adapted the instructions from gkontos' first post, with some changes to fit my situation.

I booted into a regular FreeBSD install ISO.
Selected the drop-to-shell option for disk layout.
Started the network with dhclient igb0.
scp'd a disk setup script to /tmp on the machine being installed.

Here is that script:
Code:
dd bs=1m if=/dev/zero of=/dev/ada0 &
pid0=$!
dd bs=1m if=/dev/zero of=/dev/ada1 &
pid1=$!
dd bs=1m if=/dev/zero of=/dev/ada2 &
pid2=$!
dd bs=1m if=/dev/zero of=/dev/ada3 &
pid3=$!
dd bs=1m if=/dev/zero of=/dev/ada4 &
pid4=$!
dd bs=1m if=/dev/zero of=/dev/ada5 &
pid5=$!
dd bs=1m if=/dev/zero of=/dev/ada6 &
pid6=$!

printf "\nLetting dd do some work!\n"
sleep 60
printf "\n4 minutes to go"
sleep 60
printf "\n3 minutes to go"
sleep 60
printf "\n2 minutes to go"
sleep 60
printf "\n1 minutes to go"
sleep 60

for i in "$pid0" "$pid1" "$pid2" "$pid3" "$pid4" "$pid5" "$pid6"
do
   printf "\nWe don't need to wait any longer -- Killing pid ${i}\n"
   kill "$i"
done

gpart create -s gpt ada0
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart create -s gpt ada3
gpart create -s gpt ada4
gpart create -s gpt ada5
gpart create -s gpt ada6

gpart add -s 222 -a 4k -t freebsd-boot -l zroot-boot0 ada1
gpart add -s 222 -a 4k -t freebsd-boot -l zroot-boot1 ada2
gpart add -s 222 -a 4k -t freebsd-boot -l zroot-boot2 ada3
gpart add -s 222 -a 4k -t freebsd-boot -l zroot-boot3 ada4
gpart add -s 222 -a 4k -t freebsd-boot -l zroot-boot4 ada5

gpart add -s 20g -a 4k -t freebsd-swap -l swap0 ada0
gpart add -s 13g -a 4k -t freebsd-swap -l swap1 ada6

gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-log0 ada0
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-log1 ada6

gpart add -s 8g -a 4k -t freebsd-zfs -l Datastore-log0 ada0
gpart add -s 8g -a 4k -t freebsd-zfs -l Datastore-log1 ada6

gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-cache0 ada0
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-cache1 ada6

gpart add -a 4k -t freebsd-zfs -l Datastore-cache0 ada0
gpart add -a 4k -t freebsd-zfs -l Datastore-cache1 ada6

gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-zfs0 ada1
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-zfs1 ada2
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-zfs2 ada3
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-zfs3 ada4
gpart add -s 8g -a 4k -t freebsd-zfs -l zroot-zfs4 ada5

gpart add -a 4k -t freebsd-zfs -l Datastore-zfs0 ada1
gpart add -a 4k -t freebsd-zfs -l Datastore-zfs1 ada2
gpart add -a 4k -t freebsd-zfs -l Datastore-zfs2 ada3
gpart add -a 4k -t freebsd-zfs -l Datastore-zfs3 ada4
gpart add -a 4k -t freebsd-zfs -l Datastore-zfs4 ada5

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada5

# gnop create -S 4096 /dev/gpt/zroot-zfs0
# gnop create -S 4096 /dev/gpt/zroot-zfs1
# gnop create -S 4096 /dev/gpt/zroot-zfs2
# gnop create -S 4096 /dev/gpt/zroot-zfs3
# gnop create -S 4096 /dev/gpt/zroot-zfs4
#
# gnop create -S 4096 /dev/gpt/zroot-log0
# gnop create -S 4096 /dev/gpt/zroot-log1
#
# gnop create -S 4096 /dev/gpt/zroot-cache0
# gnop create -S 4096 /dev/gpt/zroot-cache1
#
# gnop create -S 4096 /dev/gpt/Datastore-zfs0
# gnop create -S 4096 /dev/gpt/Datastore-zfs1
# gnop create -S 4096 /dev/gpt/Datastore-zfs2
# gnop create -S 4096 /dev/gpt/Datastore-zfs3
# gnop create -S 4096 /dev/gpt/Datastore-zfs4
#
# gnop create -S 4096 /dev/gpt/Datastore-log0
# gnop create -S 4096 /dev/gpt/Datastore-log1
#
# gnop create -S 4096 /dev/gpt/Datastore-cache0
# gnop create -S 4096 /dev/gpt/Datastore-cache1


kldload zfs

sysctl vfs.zfs.min_auto_ashift=12

zpool create -f \
-o altroot=/mnt \
-O canmount=off \
-m none \
zroot raidz1 \
/dev/gpt/zroot-zfs0 \
/dev/gpt/zroot-zfs1 \
/dev/gpt/zroot-zfs2 \
/dev/gpt/zroot-zfs3 \
/dev/gpt/zroot-zfs4

zpool add zroot log mirror gpt/zroot-log0 gpt/zroot-log1
zpool add zroot cache gpt/zroot-cache0 gpt/zroot-cache1

zfs set checksum=fletcher4 zroot
zfs set atime=off zroot

zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/tmp -o compression=lzjb -o setuid=off zroot/tmp
chmod 1777 /mnt/tmp

zfs create -o mountpoint=/usr zroot/usr
zfs create zroot/usr/local

zfs create -o mountpoint=/home -o setuid=off zroot/home
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages

zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
zfs create zroot/usr/obj

zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 /mnt/var/tmp
zpool set bootfs=zroot/ROOT/default zroot

cat << EOF > /tmp/bsdinstall_etc/fstab
/dev/gpt/swap0 none swap sw 0 0
/dev/gpt/swap1 none swap sw 0 0
EOF

exit

Then exited console.
Install proceeded till completion.
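As an aside, the per-disk dd/PID bookkeeping in the script above can be condensed into a couple of small helper functions. This is only a sketch under the same assumptions as the script (devices ada0-ada6, wipe for five minutes); run_parallel, stop_all, and wipe_disk are illustrative names of my own, not anything from the installer.

```sh
#!/bin/sh

# run_parallel CMD ITEM...  -- run "CMD ITEM" in the background once per
# item and record every PID in $PIDS.
run_parallel() {
    cmd="$1"; shift
    PIDS=""
    for item in "$@"; do
        "${cmd}" "${item}" &
        PIDS="${PIDS} $!"
    done
}

# stop_all -- kill every PID recorded by run_parallel.
stop_all() {
    for p in ${PIDS}; do
        kill "${p}" 2>/dev/null
    done
}

# wipe_disk DISK -- one dd writer, mirroring the script above.
wipe_disk() {
    dd bs=1m if=/dev/zero of="/dev/$1"
}

# Usage, equivalent to the original dd/sleep/kill sequence:
#   run_parallel wipe_disk ada0 ada1 ada2 ada3 ada4 ada5 ada6
#   sleep 300
#   stop_all
```

This avoids a numbered variable per disk, so adding or removing a drive only changes the argument list.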

Bug I had to work around:

Since I had enabled DHCP on NIC 1, any attempt to use that NIC during the remainder of the install crashed the installer.
So I used my second NIC to complete the install.
I am not sure how you would resolve the issue if you only had one NIC.
Something equivalent to the Linux ifdown nic1, such as ifconfig igb0 down, after running the script might help.
Then answer the remaining installer questions.
Dropped to shell for final tweaks and ran:
Code:
# mount -t devfs devfs /dev
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# echo 'zfs_load="YES"' >> /boot/loader.conf
# zfs set readonly=on zroot/var/empty

After this I rebooted, reconfigured my NICs, created my main storage pool, and ran a bonnie benchmark. The results looked pretty decent, despite my now very laggy ssh terminal.

I was getting read and write speeds of around 540 MB/s.

Here is my zpool config:

Code:
# zpool list && zpool status
NAME        SIZE  ALLOC   FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
Datastore     9T  1.12M  9.00T    0%         -   0%  1.00x  ONLINE  -
zroot      39.8G  4.54G  35.2G   26%         -  11%  1.00x  ONLINE  -

  pool: Datastore
 state: ONLINE
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        Datastore                 ONLINE       0     0     0
          raidz1-0                ONLINE       0     0     0
            gpt/Datastore-zfs0    ONLINE       0     0     0
            gpt/Datastore-zfs1    ONLINE       0     0     0
            gpt/Datastore-zfs2    ONLINE       0     0     0
            gpt/Datastore-zfs3    ONLINE       0     0     0
            gpt/Datastore-zfs4    ONLINE       0     0     0
        logs
          mirror-1                ONLINE       0     0     0
            gpt/Datastore-log0    ONLINE       0     0     0
            gpt/Datastore-log1    ONLINE       0     0     0
        cache
          gpt/Datastore-cache0    ONLINE       0     0     0
          gpt/Datastore-cache1    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        zroot                 ONLINE       0     0     0
          raidz1-0            ONLINE       0     0     0
            gpt/zroot-zfs0    ONLINE       0     0     0
            gpt/zroot-zfs1    ONLINE       0     0     0
            gpt/zroot-zfs2    ONLINE       0     0     0
            gpt/zroot-zfs3    ONLINE       0     0     0
            gpt/zroot-zfs4    ONLINE       0     0     0
        logs
          mirror-1            ONLINE       0     0     0
            gpt/zroot-log0    ONLINE       0     0     0
            gpt/zroot-log1    ONLINE       0     0     0
        cache
          gpt/zroot-cache0    ONLINE       0     0     0
          gpt/zroot-cache1    ONLINE       0     0     0

errors: No known data errors

One thing I am wondering after all this is whether I want 4k alignment for my disks.

camcontrol identify for each of my disks returns:

Code:
sector size  logical 512, physical 512, offset 0

When I read online about this, it sounded like the general recommendation was still to use 4k in spite of what the drive advertises.

Does anyone have clarification on a best practice in this regard?

Also, I am hoping I interpreted gkontos properly above, where he wrote that the gnop command is no longer required as long as this sysctl is set prior to pool creation. I also set the same value as my default sysctl at boot.

sysctl vfs.zfs.min_auto_ashift=12

I also noticed that following a new install the zroot was 35% fragmented, so I started a scrub.
 
> sysctl vfs.zfs.min_auto_ashift=12
>
> I also noticed that following a new install the zroot was 35% fragmented, so I started a scrub.

The fragmentation here is not the same kind of fragmentation you would expect if you are familiar with other filesystems such as FAT or NTFS. In ZFS, the FRAG column of zpool list describes how fragmented the pool's free space is, not how fragmented your file data is; scattered small free regions are more likely when the pool cannot allocate blocks smaller than 4096 bytes, as here where ashift was forced to 12.

The scrub operation is nothing like a defrag; it checks and repairs data integrity and filesystem metadata on the pool.
 
> One thing I am wondering after all this is whether I want 4k alignment for my disks.
>
> camcontrol identify for each of my disks returns:
>
> Code:
> sector size  logical 512, physical 512, offset 0
>
> When I read online about this, it sounded like the general recommendation was still to use 4k in spite of what the drive advertises.
>
> Does anyone have clarification on a best practice in this regard?

Some disks report the wrong sector size. Your best bet is to check the manufacturer's website for your drive's specifications. In any case, 4K alignment will not hurt performance on 512-byte-sector disks.
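If you want to double-check the result, a partition is 4K-aligned when its starting LBA (the first column of gpart show output, counted in 512-byte sectors) is divisible by 8. A tiny sketch of that check; is_4k_aligned is an illustrative name of my own:

```sh
#!/bin/sh

# is_4k_aligned LBA -- succeed when a starting LBA (in 512-byte sectors,
# as printed by "gpart show") falls on a 4096-byte boundary.
is_4k_aligned() {
    [ $(( $1 % 8 )) -eq 0 ]
}

# Usage against gpart output (device name assumed):
#   gpart show ada1     # read the start column, then e.g.
#   is_4k_aligned 40 && echo "partition is 4K aligned"
```

The gpart -a 4k flag used in the script above should already guarantee this; the check is only reassurance.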

> Also, I am hoping I interpreted gkontos properly above, where he wrote that the gnop command is no longer required as long as this sysctl is set prior to pool creation. I also set the same value as my default sysctl at boot.
>
> sysctl vfs.zfs.min_auto_ashift=12

Yes, that is the correct way.
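For reference, setting it as a default at boot, as described above, usually means a line in /etc/sysctl.conf (an assumption about how the poster did it; this is a runtime sysctl, so sysctl.conf is the natural place):

```
# /etc/sysctl.conf
vfs.zfs.min_auto_ashift=12
```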

> I also noticed that following a new install the zroot was 35% fragmented, so I started a scrub.

That is unrelated to fragmentation, but it is always a good idea to scrub so that you can verify the consistency of your pool.
 