Converting remote server from gmirror to ZFS

dvl@

About 9 months ago, I posted about going from gmirror to zfsroot. Ultimately, I did a fresh install. But this time, I do not have that choice. I'm converting an existing remote server, running FreeBSD 9.2-RELEASE, from its existing gmirror setup to zfsroot.

The idea for this came from Allan Jude during a talk we had at vBSDCon.

Given that this *is* a remote server and I have no chance of getting to the console, I want to plan this out for review.

Here is the existing setup:

Code:
$ cat /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/mirror/gm0s1b              none            swap    sw              0       0
/dev/mirror/gm0s1a              /               ufs     rw              1       1
/dev/mirror/gm0s1d              /tmp            ufs     rw              2       2
/dev/mirror/gm0s1f              /usr            ufs     rw              2       2
/dev/mirror/gm0s1e              /var            ufs     rw              2       2
/dev/acd0               /cdrom          cd9660  ro,noauto       0       0


$ gmirror status
      Name    Status  Components
mirror/gm0  COMPLETE  ada0 (ACTIVE)
                      ada1 (ACTIVE)


$ df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/mirror/gm0s1a       2G    296M    1.5G    16%    /
devfs                  1.0k    1.0k      0B   100%    /dev
/dev/mirror/gm0s1d       2G    528k    1.8G     0%    /tmp
/dev/mirror/gm0s1f     273G     44G    207G    17%    /usr
/dev/mirror/gm0s1e     9.7G    1.8G    7.1G    20%    /var
/usr/jails/basejail    273G     44G    207G    17%    /usr/jails/mailjail/basejail
devfs                  1.0k    1.0k      0B   100%    /usr/jails/mailjail/dev
fdescfs                1.0k    1.0k      0B   100%    /usr/jails/mailjail/dev/fd
procfs                 4.0k    4.0k      0B   100%    /usr/jails/mailjail/proc

There is about 50 GB of data on the system, on two 300 GB HDDs:

Code:
ada0 at ata2 bus 0 scbus2 target 0 lun 0
ada0: <ST3320620AS 3.AAK> ATA-7 SATA 1.x device
ada0: 150.000MB/s transfers (SATA 1.x, UDMA5, PIO 8192bytes)
ada0: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4

ada1 at ata3 bus 0 scbus3 target 0 lun 0
ada1: <ST3320613AS CC2F> ATA-8 SATA 1.x device
ada1: 150.000MB/s transfers (SATA 1.x, UDMA5, PIO 8192bytes)
ada1: 305245MB (625142448 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6

My plan:

  1. remove ada1 from the gmirror
  2. install ZFS on ada1
  3. create various filesystems on ada1
  4. shutdown most public facing services
  5. copy the existing system from the gmirror to ada1 via tar | tar
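
Step 1 should come down to a single gmirror command (a sketch, assuming the mirror name gm0 shown in the status output above; gmirror remove also clears the metadata on the detached component):

Code:
# detach ada1 from the mirror and clear the gmirror metadata on it
gmirror remove gm0 ada1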

Items I'm concerned about:

  1. getting the new fstab right to ensure the reboot happens
  2. getting the system to boot from ada1 next time (using nextboot(8))

I will post more detailed steps as they develop.
 
The 'upgrade' script

At the end of this post is a script based on something I have used for setting up my ZFS servers. In the past, it has been used after booting from a USB install drive.

The critical part of this script is the copying to the new ZFS disk:

Code:
export DESTDIR=/mnt
# Stream each UFS filesystem into the matching directory under ${DESTDIR}.
# --one-file-system keeps tar from descending into other mounts (devfs, nullfs,
# the other UFS partitions); the receiving tar's -p preserves permissions and ownership.
for file in / /tmp /usr /var
do
  tar --one-file-system -c -f - -C ${file} . | tar xpvf - -C ${DESTDIR:-/}${file}
done

Anyone see any issue with that?
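
One thing I plan to add as a sanity check afterwards (my own idea, not part of the script): spot-compare a directory tree on the source and the copy.

Code:
# quick spot check; -q only reports files that differ or are missing
diff -rq /usr/local/etc /mnt/usr/local/etc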

The full script is here:

Code:
# Based on http://www.aisecure.net/2012/01/16/rootzfs/ and
# @vermaden's guide on the forums

DISKS="ada1"

gmirror load
gmirror stop swap


NUM=-1
for I in ${DISKS}; do
#        NUM=$( echo ${I} | tr -c -d '0-9' )
        NUM=$(($NUM + 1))
        gpart destroy -F ${I}
        gpart create -s gpt ${I}
        gpart add -b 34 -s 1024 -t freebsd-boot -l bootcode${NUM} ${I}

        gpart add -s 8g -t freebsd-swap -l swap${I} ${I}

        #
        # note: not using all the disk, on purpose, adjust this size for your HDD
        #
        gpart add -t freebsd-zfs -s 285G -l disk${NUM} ${I}
        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
        gnop create -S 4096 /dev/gpt/disk${NUM}
done

gmirror label -F -h -b round-robin swap /dev/gpt/swap*

zpool create -f -O mountpoint=/mnt -o cachefile=/tmp/zpool.cache -O atime=off -O setuid=off -O canmount=off system /dev/gpt/disk*.nop
zpool export system

NUM=-1
for I in ${DISKS}; do
#        NUM=$( echo ${I} | tr -c -d '0-9' )
        NUM=$(($NUM + 1))
        gnop destroy /dev/gpt/disk${NUM}.nop
done

zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache system

zfs create -o mountpoint=legacy -o setuid=on system/rootfs

zpool set bootfs=system/rootfs system

# there is no sys

#zfs set atime=off sys
zfs set checksum=fletcher4 system

mount -t zfs system/rootfs /mnt

zfs create system/root
zfs create -o canmount=off system/usr
zfs create -o canmount=off system/usr/home
zfs create -o setuid=on    system/usr/local
zfs create -o compress=lz4 system/usr/src
zfs create -o compress=lz4 system/usr/obj
zfs create -o compress=lz4 system/usr/ports
zfs create -o compress=off system/usr/ports/distfiles
zfs create -o canmount=off system/var
zfs create -o compress=lz4 system/var/log
zfs create -o compress=lz4 system/var/audit
zfs create -o compress=lz4 system/var/tmp
#
# I was getting failure on these chmod so I did them after the system booted
#
#chmod 1777 /mnt/var/tmp
zfs create -o compress=lzjb system/tmp
#chmod 1777 /mnt/tmp
#chmod 1777 /mnt/var/tmp
zfs create system/usr/home/dan
zfs create system/usr/home/bacula
zfs create system/usr/home/dvl
zfs create system/usr/websites
zfs create system/usr/jails
zfs create system/usr/jails/basejail
zfs create system/usr/jails/mailjail_langille_org
zfs create system/usr/jails/bsdcan.org
zfs create -o recordsize=8k -o primarycache=metadata -o compress=lz4 system/usr/local/pgsql

cd /mnt ; ln -s usr/home home

# copy everything over
export DESTDIR=/mnt
for file in / /tmp /usr /var
do
  tar --one-file-system -c -f - -C ${file} . | tar xpvf - -C ${DESTDIR:-/}${file}
done

cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache

# overwrite the /etc/fstab file and direct everything to ZFS
cat << EOF > /mnt/etc/fstab
system/rootfs        /    zfs  rw,noatime 0 0
/dev/mirror/swap.eli none swap sw         0 0
EOF

# update the new ZFS loader.conf
cat << EOF >> /mnt/boot/loader.conf

geom_eli_load="YES"
geom_label_load="YES"
geom_mirror_load="YES"
geom_part_gpt_load="YES"

zfs_load=YES
vfs.root.mountfrom="zfs:system/rootfs"
EOF

# update the old ZFS loader.conf
# because we're going to be booting from that first
cat << EOF >> /boot/loader.conf

# in case something goes wrong
# delete everything from this file after this line

geom_eli_load="YES"
geom_label_load="YES"
geom_mirror_load="YES"
geom_part_gpt_load="YES"

zfs_load=YES
vfs.root.mountfrom="zfs:system/rootfs"
EOF

echo WRKDIRPREFIX=/usr/obj >> /mnt/etc/make.conf

zfs umount -a
umount /mnt
zfs set mountpoint=/ system
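
Not part of the script itself, but a check worth doing while the pool is imported: confirm the gnop trick really produced a 4K-aligned pool (ashift=12). Since the pool was created with a non-default cache file, point zdb at it:

Code:
zdb -U /tmp/zpool.cache -C system | grep ashift
# expect: ashift: 12 for a 4K-aligned vdev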
 
The disk partitioning section allocates only 48K for the bootcode. gptzfsboot is already 41K. As always, I suggest giving it 512K and starting the data partition at 1M.
 
wblock@ said:
The disk partitioning section allocates only 48K for the bootcode. gptzfsboot is already 41K. As always, I suggest giving it 512K and starting the data partition at 1M.

48K? On what line did you see that? I'm confused. Is that this line?

Code:
gpart add -b 34 -s 94 -t freebsd-boot -l bootcode${NUM} ${I}
 
dvl@ said:
48K? On what line did you see that? I'm confused. Is that this line?

Code:
gpart add -b 34 -s 94 -t freebsd-boot -l bootcode${NUM} ${I}

In this case '94' is 94 sectors of 512 bytes = 48,128 bytes
 
nearsourceit said:
In this case '94' is 94 sectors of 512 bytes = 48,128 bytes

Ahh, there's the math. I get it now. I wasn't seeing the 94 as 'number of blocks'. I've amended the script to:

Code:
gpart add -b 34 -s 1024 -t freebsd-boot -l bootcode${NUM} ${I}

For anyone checking later, 94 x 512-byte blocks is only 47K; 96 blocks would be 48K.
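
The arithmetic, if anyone wants to check it in a shell:

Code:
$ echo $((94 * 512)) $((96 * 512)) $((1024 * 512))
48128 49152 524288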
 
dvl@ said:
Items I'm concerned about:

  1. getting the new fstab right to ensure the reboot happens
  2. getting the system to boot from ada1 next time (using nextboot(8))

For item 1 above, I have amended the script to add the following:

Code:
# update the old ZFS loader.conf
# because we're going to be booting from that first
cat << EOF >> /boot/loader.conf

# in case something goes wrong
# delete everything from this file after this line

geom_eli_load="YES"
geom_label_load="YES"
geom_mirror_load="YES"
geom_part_gpt_load="YES"

zfs_load=YES
vfs.root.mountfrom="zfs:system/rootfs"
EOF

This is identical to what goes into /mnt/boot/loader.conf, and it will make the system mount its root from the ZFS drive, rather than the gmirror, after we reboot.

This is the only command in the script which changes the production drive. If the script has to be rerun, I see no downside in appending these directives to the file more than once.
 
After booting to the new ZFS drive, I need to run the script again, this time on ada0, stopping at the '# copy everything over' section. This will ensure that both HDDs have the same partition layout.

After partitioning, the other drive is ready to be mirrored with the ZFS drive we booted from. I issue this command:

Code:
zpool attach system disk0 disk1

Where disk0 is the GPT label on ada1 (the ZFS drive already in the pool, which we booted from) and disk1 is the label given to the freshly partitioned ada0; zpool attach takes the existing device first and the new device second.

Then I wait for resilvering to finish. Then I can reboot. And hope.
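
Watching the resilver is just a matter of polling zpool status; it reports progress and an estimated completion time once the resilver is underway:

Code:
zpool status system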
 
After starting the process, I got concerned when I saw this. Everything is mounted at /mnt/mnt, which makes me think the mount points will not be correct after reboot, not to mention getting things copied into the right place.

Will that be fixed after:

Code:
zfs umount -a
umount /mnt
zfs set mountpoint=/ system

This is what I see just before we start copying the data over:

Code:
# zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
system                                  5.10M   280G   144K  /mnt/mnt
system/root                              144K   280G   144K  /mnt/mnt/root
system/rootfs                            288K   280G   288K  legacy
system/tmp                               144K   280G   144K  /mnt/mnt/tmp
system/usr                              2.29M   280G   144K  /mnt/mnt/usr
system/usr/home                          576K   280G   144K  /mnt/mnt/usr/home
system/usr/home/bacula                   144K   280G   144K  /mnt/mnt/usr/home/bacula
system/usr/home/dan                      144K   280G   144K  /mnt/mnt/usr/home/dan
system/usr/home/dvl                      144K   280G   144K  /mnt/mnt/usr/home/dvl
system/usr/jails                         600K   280G   168K  /mnt/mnt/usr/jails
system/usr/jails/basejail                144K   280G   144K  /mnt/mnt/usr/jails/basejail
system/usr/jails/mailjail                144K   280G   144K  /mnt/mnt/usr/jails/mailjail
system/usr/local                         296K   280G   152K  /mnt/mnt/usr/local
system/usr/local/pgsql                   144K   280G   144K  /mnt/mnt/usr/local/pgsql
system/usr/obj                           144K   280G   144K  /mnt/mnt/usr/obj
system/usr/ports                         296K   280G   152K  /mnt/mnt/usr/ports
system/usr/ports/distfiles               144K   280G   144K  /mnt/mnt/usr/ports/distfiles
system/usr/src                           144K   280G   144K  /mnt/mnt/usr/src
system/usr/websites                      144K   280G   144K  /mnt/mnt/usr/websites
system/var                               576K   280G   144K  /mnt/mnt/var
system/var/audit                         144K   280G   144K  /mnt/mnt/var/audit
system/var/log                           144K   280G   144K  /mnt/mnt/var/log
system/var/tmp                           144K   280G   144K  /mnt/mnt/var/tmp
 
You may want to consider the BEADM layout for your datasets. Also, if you import the pool using the -R option, such as this: # zpool import -R /mnt system, all mount points for this zpool will be relative to /mnt. Don't forget to copy /boot/zfs/zpool.cache to /mnt/boot/zfs/zpool.cache.

Edit: I think the -R option may be bad if you're going to reboot directly. Consider setting the mountpoint of your 'root' dataset to /mnt (so that all inherited datasets are placed under /mnt), and then:
  • export & import the pool without the -R option (now, the 'root' dataset should be located under /mnt instead of /mnt/mnt)
  • do your thing with copying files
  • copy zpool.cache as described earlier in this post
  • # zfs umount -a
  • # zfs set mountpoint=/ path/to/root/dataset

I believe that should be accurate.... But no guarantees!
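
Put together, the sequence I'm describing would look roughly like this (a sketch only, using the pool name and cache file location from the script above):

Code:
zfs set mountpoint=/mnt system                       # children inherit the /mnt prefix
zpool export system
zpool import -o cachefile=/tmp/zpool.cache system    # no -R / altroot this time
# ... copy the data over ...
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
zfs umount -a
zfs set mountpoint=/ system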
 
dvl@ said:
Ahh, there's the math. I get it now. I wasn't seeing the 94 as 'number of blocks'. I've amended the script to:

Code:
gpart add -b 34 -s 1024 -t freebsd-boot -l bootcode${NUM} ${I}

For anyone checking later, 94 x 512-byte blocks is only 47K; 96 blocks would be 48K.

The bootcode can't handle partitions larger than 512K. So:
gpart add -a4k -s512k -t freebsd-boot -l bootcode${NUM} ${I}

And then start the first data partition at 1M:
gpart add -b1m -s 8g -t freebsd-swap -l swap${I} ${I}

However, for performance, I would put filesystems first, and then swap at the (slower) end of the disk.
 
Thanks. Before you posted, I had already amended the script to copy to the mountpoints as they currently stand. That copy is underway.

I will copy over the zpool.cache, but the script uses an explicit location: /tmp/zpool.cache. I think this is to avoid conflict if these commands are being run on a ZFS system.

BEADM is something I have read about, but I'm not sure I want to introduce that many moving parts into a running production server.
 
wblock@ said:
The bootcode can't handle partitions larger than 512K. So:
gpart add -a4k -s512k -t freebsd-boot -l bootcode${NUM} ${I}

And then start the first data partition at 1M:
gpart add -b1m -s 8g -t freebsd-swap -l swap${I} ${I}

However, for performance, I would put filesystems first, and then swap at the (slower) end of the disk.

This is my current plan. Based on what you said about swap, I'm going to interrupt the copy now underway, and redo this:

Code:
NUM=-1
for I in ${DISKS}; do
#        NUM=$( echo ${I} | tr -c -d '0-9' )
        NUM=$(($NUM + 1))
        gpart destroy -F ${I}
        gpart create -s gpt ${I}
        gpart add -a4k -s512k -t freebsd-boot -l bootcode${NUM} ${I}

        #
        # note: not using all the disk, on purpose, adjust this size for your HDD
        #
        gpart add -b1m -t freebsd-zfs -s 285G -l disk${NUM} ${I}
        gpart add -s 8g -t freebsd-swap -l swap${I} ${I}
        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
        gnop create -S 4096 /dev/gpt/disk${NUM}
done
 
wblock@ said:
The bootcode can't handle partitions larger than 512K. So:
gpart add -a4k -s512k -t freebsd-boot -l bootcode${NUM} ${I}

And then start the first data partition at 1M:
gpart add -b1m -s 8g -t freebsd-swap -l swap${I} ${I}

However, for performance, I would put filesystems first, and then swap at the (slower) end of the disk.

FYI, here is the partition layout now:

Code:
 $ gpart show ada1
=>       34  625142381  ada1  GPT  (298G)
         34          6        - free -  (3.0k)
         40       1024     1  freebsd-boot  (512k)
       1064        984        - free -  (492k)
       2048  597688320     2  freebsd-zfs  (285G)
  597690368   16777216     3  freebsd-swap  (8.0G)
  614467584   10674831        - free -  (5.1G)
 
For the record, after completing the script, this is what I have:

Code:
# zfs list                                                                                                                                                             
NAME                                     USED  AVAIL  REFER  MOUNTPOINT                                                                                                              
system                                  42.9G   237G   144K  /                                                                                                                       
system/root                              640K   237G   640K  /root                                                                                                                   
system/rootfs                           11.5G   237G  11.5G  legacy                                                                                                                  
system/tmp                               648K   237G   648K  /tmp                                                                                                                    
system/usr                              31.3G   237G   144K  /usr                                                                                                                    
system/usr/home                          500M   237G   144K  /usr/home                                                                                                               
system/usr/home/bacula                   164K   237G   164K  /usr/home/bacula                                                                                                        
system/usr/home/dan                      500M   237G   500M  /usr/home/dan                                                                                                           
system/usr/home/dvl                      192K   237G   192K  /usr/home/dvl                                                                                                           
system/usr/jails                        6.17G   237G   469M  /usr/jails                                                                                                              
system/usr/jails/basejail               1.20G   237G  1.20G  /usr/jails/basejail                                                                                                     
system/usr/jails/bsdcan.org              317M   237G   317M  /usr/jails/bsdcan.org                                                                                                   
system/usr/jails/mailjail               4.20G   237G  4.20G  /usr/jails/mailjail                                                                                        
system/usr/local                        11.8G   237G  2.45G  /usr/local                                                                                                              
system/usr/local/pgsql                  9.30G   237G  9.30G  /usr/local/pgsql                                                                                                        
system/usr/obj                           608M   237G   608M  /usr/obj                                                                                                                
system/usr/ports                        1.78G   237G   871M  /usr/ports                                                                                                              
system/usr/ports/distfiles               954M   237G   954M  /usr/ports/distfiles                                                                                                    
system/usr/src                           556M   237G   556M  /usr/src                                                                                                                
system/usr/websites                     10.0G   237G  10.0G  /usr/websites                                                                                                           
system/var                              9.39M   237G   144K  /var                                                                                                                    
system/var/audit                         160K   237G   160K  /var/audit                                                                                                              
system/var/log                          8.86M   237G  8.86M  /var/log                                                                                                                
system/var/tmp                           244K   237G   244K  /var/tmp

And I put this in the /boot/loader.conf of the UFS disk:

Code:
# cat /boot/loader.conf
geom_mirror_load="YES"
kern.ipc.semmni=40
kern.ipc.semmns=240

# from http://forums.freebsd.org/showthread.php?t=17786
# because of Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable
vm.pmap.pg_ps_enabled=1

# in case something goes wrong
# delete everything from this file after this line

geom_eli_load="YES"
geom_label_load="YES"
geom_mirror_load="YES"
geom_part_gpt_load="YES"

zfs_load=YES
vfs.root.mountfrom="zfs:system/rootfs"

That comment is in there in case remote support staff need to get involved.

Here goes the reboot.
 
dvl@ said:
Here goes the reboot.

The /etc/fstab on the ZFS side should have just your swap; everything else comes from ZFS automatically (make sure you have zfs_enable="YES" in /etc/rc.conf so it mounts everything, not just /).

Also, you can remove the vfs.root.mountfrom line from loader.conf once you've finished the conversion; it is only needed for the temporary situation where you are booting from ada0 but mounting root from the zpool on ada1.
 
nearsourceit said:
The /etc/fstab on the ZFS side should have just your swap; everything else comes from ZFS automatically (make sure you have zfs_enable="YES" in /etc/rc.conf so it mounts everything, not just /).

Also, you can remove the vfs.root.mountfrom line from loader.conf once you've finished the conversion; it is only needed for the temporary situation where you are booting from ada0 but mounting root from the zpool on ada1.

So I had:

Code:
system/rootfs        /    zfs  rw,noatime 0 0
/dev/mirror/swap.eli none swap sw         0 0

I'll remove the first line. I'll also add zfs_enable="YES" to /etc/rc.conf.
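
So the end state should be something like this (a sketch of the two files after those edits):

Code:
# /etc/fstab on the ZFS root: swap only
/dev/mirror/swap.eli none swap sw         0 0

# /etc/rc.conf addition, so rc mounts all the other datasets at boot
zfs_enable="YES"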
 
Beadm

Savagedlight said:
You may want to consider the BEADM layout for your datasets.

The reboot did not work, so I'm redoing the ZFS setup. What kind of layout is associated with BEADM? I'm using something like this now:

Code:
# zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
system                                  42.9G   237G   144K  /
system/root                              648K   237G   648K  /root
system/rootfs                           11.5G   237G  11.5G  legacy
system/tmp                               672K   237G   672K  /tmp
system/usr                              31.4G   237G   144K  /usr
system/usr/home                          501M   237G   144K  /usr/home
system/usr/home/bacula                   164K   237G   164K  /usr/home/bacula
system/usr/home/dan                      500M   237G   500M  /usr/home/dan
system/usr/home/dvl                      192K   237G   192K  /usr/home/dvl
system/usr/jails                        6.18G   237G   469M  /usr/jails
system/usr/jails/basejail               1.20G   237G  1.20G  /usr/jails/basejail
system/usr/jails/bsdcan.org              317M   237G   317M  /usr/jails/bsdcan.org
system/usr/jails/mailjail_langille_org  4.21G   237G  4.21G  /usr/jails/mailjail_langille_org
system/usr/local                        11.8G   237G  2.45G  /usr/local
system/usr/local/pgsql                  9.30G   237G  9.30G  /usr/local/pgsql
system/usr/obj                           608M   237G   608M  /usr/obj
system/usr/ports                        1.78G   237G   871M  /usr/ports
system/usr/ports/distfiles               954M   237G   954M  /usr/ports/distfiles
system/usr/src                           556M   237G   556M  /usr/src
system/usr/websites                     10.0G   237G  10.0G  /usr/websites
system/var                              9.46M   237G   144K  /var
system/var/audit                         160K   237G   160K  /var/audit
system/var/log                          8.93M   237G  8.93M  /var/log
system/var/tmp                           244K   237G   244K  /var/tmp
 
I'll copy/paste an example.
Code:
$ zfs list -r system
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
system                              6.78G  24.5G   160K  none
system/ROOT                         6.75G  24.5G   152K  none
system/ROOT/default                 6.75G  24.5G   320M  legacy
system/ROOT/default/usr             3.14G  24.5G   284M  /usr
system/ROOT/default/usr/home         460K  24.5G   336K  /usr/home
system/ROOT/default/usr/obj          823M  24.5G   823M  /usr/obj
system/ROOT/default/usr/ports       1.11G  24.5G   832M  /usr/ports
system/ROOT/default/usr/src          962M  24.5G   962M  /usr/src
system/ROOT/default/var             1.86G  24.5G  1.36G  /var
system/ROOT/default/var/empty        144K  24.5G   144K  /var/empty
system/ROOT/default/var/log         4.15M  24.5G  1.14M  /var/log
system/ROOT/default/var/tmp          204M  24.5G   160K  /var/tmp
system/ROOT/default/var/tmp/ccache   204M   896M   203M  /var/tmp/ccache

Essentially, everything which has to do with your boot environment (read: OS, applications... pretty much anything which isn't user data) goes under system/ROOT/default. You will then be able to create a new BE using the sysutils/beadm utility (example: # beadm create upgradeTo10). This lets you do all updates on the new BE while the old is still running, and to easily switch to the new BE using # beadm activate <beName> && reboot when ready. If things don't go too well, you can switch back to the old BE. Depending on circumstances, this might require live boot media. :)
 
Savagedlight said:
I'll copy/paste an example.
Code:
$ zfs list -r system
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
system                              6.78G  24.5G   160K  none
system/ROOT                         6.75G  24.5G   152K  none
system/ROOT/default                 6.75G  24.5G   320M  legacy
system/ROOT/default/usr             3.14G  24.5G   284M  /usr
system/ROOT/default/usr/home         460K  24.5G   336K  /usr/home
system/ROOT/default/usr/obj          823M  24.5G   823M  /usr/obj
system/ROOT/default/usr/ports       1.11G  24.5G   832M  /usr/ports
system/ROOT/default/usr/src          962M  24.5G   962M  /usr/src
system/ROOT/default/var             1.86G  24.5G  1.36G  /var
system/ROOT/default/var/empty        144K  24.5G   144K  /var/empty
system/ROOT/default/var/log         4.15M  24.5G  1.14M  /var/log
system/ROOT/default/var/tmp          204M  24.5G   160K  /var/tmp
system/ROOT/default/var/tmp/ccache   204M   896M   203M  /var/tmp/ccache

Essentially, everything which has to do with your boot environment (read: OS, applications... pretty much anything which isn't user data) goes under system/ROOT/default. You will then be able to create a new BE using the sysutils/beadm utility (example: # beadm create upgradeTo10). This lets you do all updates on the new BE while the old is still running, and to easily switch to the new BE using # beadm activate <beName> && reboot when ready. If things don't go too well, you can switch back to the old BE. Depending on circumstances, this might require live boot media. :)

It looks like everything is just down two more levels. That is, ROOT/default has been inserted into the paths.
 
@dvl@: That's the point. This way you can easily clone system/ROOT/default to system/ROOT/blah (using beadm) when you need to make larger changes and want an easy way out in case of errors. It's also easy to choose which one to boot from, using beadm. It also makes a distinction between system and user data. :)
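
For reference, creating the top of that layout looks roughly like this (a sketch; the dataset names follow my example above, and this is not something that was done in this thread):

Code:
zfs create -o canmount=off -o mountpoint=none system/ROOT
zfs create -o mountpoint=legacy system/ROOT/default
zpool set bootfs=system/ROOT/default system
# then create usr, var, etc. under system/ROOT/default as in the listing above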
 
This task is on hold. I'm waiting to get some test hardware in place before I proceed with this again.
 
After the test hardware was obtained, I realized the production hardware is unsuited to the task.

It's an i386 machine with 1.5 GB of RAM, which is not well suited to a ZFS root system. New hardware has been obtained; rather than convert this system, we'll move to a new one.
 

wblock@ said:
dvl@ said:
Ahh, there's the math. I get it now. I wasn't seeing the 94 as 'number of blocks'. I've amended the script to:

Code:
gpart add -b 34 -s 1024 -t freebsd-boot -l bootcode${NUM} ${I}

For anyone checking later, 94 x 512-byte blocks is only 47K; 96 blocks would be 48K.

The bootcode can't handle partitions larger than 512K. So:
gpart add -a4k -s512k -t freebsd-boot -l bootcode${NUM} ${I}

And then start the first data partition at 1M:
gpart add -b1m -s 8g -t freebsd-swap -l swap${I} ${I}

However, for performance, I would put filesystems first, and then swap at the (slower) end of the disk.

While working on that script, I remembered your post, and amended it accordingly. Here is what the partitions look like now: https://twitter.com/dlangille/status/40 ... 24/photo/1
 