Help: Not enough space to write bootcode after root zpool upgrade

I upgraded FreeBSD from 10.4 to 11.1. All went fine. I then proceeded to upgrade my ZFS pool.
Code:
~/> zpool status pool0
  pool: pool0
 state: ONLINE
  scan: none requested
config:

    NAME         STATE     READ WRITE CKSUM
    pool0        ONLINE       0     0     0
      gpt/disk4  ONLINE       0     0     0
      gpt/disk6  ONLINE       0     0     0

~/> zpool upgrade pool0

I then got a message indicating that I may need to update the boot code. After searching these forums, I realized I do.

Code:
~/> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432     2  freebsd-swap  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432     2  freebsd-swap  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

So based on this information, I figure I need to update the boot code on both ada0 and ada1 with these commands:
Code:
~/> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
~/> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

When I execute the first command, I get the following error:
Code:
~/> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart: /dev/ada0p1: not enough space

After further exploring the forums, I came across this post:
https://forums.freebsd.org/threads/...rite-bootcode-after-root-zpool-upgrade.57955/

Based on the comments by generic in that thread, it sounds like I should be able to fix the problem by reallocating some space from my swap partition to the boot partition. Is that correct? Any idea which commands to use to do this?
I do not have any free space left on the disks, so I am stuck.


Finally, I am assuming it is NOT safe to reboot this machine, as it will not boot back up. Is that correct?
 
The freebsd-boot partition is a bit on the small side, but fortunately you have the freebsd-swap partition right after it. What you could do is turn off swap (swapoff -a), then remove the freebsd-swap and freebsd-boot partitions. Create a new, larger freebsd-boot partition (I would recommend 512K) and turn the remaining space into a new freebsd-swap partition. That will decrease your swap just a little bit, but this shouldn't cause too many problems.
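
A minimal sketch of that procedure for one disk, assuming the layout shown above (freebsd-boot at index 1, freebsd-swap at index 2); resizing the boot partition in place is one way to get the same result as deleting and recreating it:
Code:
swapoff -a                             # stop swapping before touching the partitions
gpart delete -i 2 ada0                 # remove the old freebsd-swap
gpart resize -i 1 -s 512K ada0         # grow freebsd-boot into the freed space
gpart add -t freebsd-swap -i 2 ada0    # recreate swap in the remaining space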
 
Thanks SirDice. So here is what I have so far:
Code:
~/> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432     2  freebsd-swap  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432     2  freebsd-swap  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

Code:
# Shut off swap
swapoff -a

# Resize for ada0
gpart delete -i 2 ada0
gpart resize -i 1 -s 512K ada0
# Do I need a -b 126 (or whatever number) after the resize, instead of -i 2?
# Also, do I need both the -i and -b options?
# Is there a way to tell gpart to use all available space for index 2, i.e. 16G + 64K - 512K?
gpart add -i 2 -s 15G -t freebsd-swap ada0

# Resize for ada1
gpart delete -i 2 ada1
gpart resize -i 1 -s 512K ada1
gpart add -i 2 -s 15G -t freebsd-swap ada1

# Copy boot code
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# Restart swap
swapon -a


Before I pull the trigger on this, could someone help me with the gpart add command above? I think the rest will work as expected. I have put my concerns in the comments above as well as below:
1. Do I need a -b 126 (or whatever number) after the resize, instead of -i 2?
2. Also, do I need both the -i and -b options?
3. Is there a way to tell gpart to use all available space for index 2, i.e. 16G + 64K - 512K?

Thanks for all the help.
 
Remove the -s 15G when creating the swap partition; it will default to using all the space. I'd add -a 1M to it, though, to start it at the 1 MB boundary. That leaves a little bit of slack space between the two partitions (useful if you need to increase the boot partition again in the future), and it aligns the partition for optimal I/O.

Basically, you should never need to use -b for anything.
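
Concretely (a sketch using the same disk names), the add step then becomes:
Code:
gpart add -t freebsd-swap -a 1M -i 2 ada0
gpart add -t freebsd-swap -a 1M -i 2 ada1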
 
Thanks phoenix. Here is what I ran:
Code:
~> swapoff -a
swapoff: removing /dev/gpt/swap4 as swap device
swapoff: removing /dev/gpt/swap6 as swap device
~> gpart delete -i 2 ada0
ada0p2 deleted
~> gpart delete -i 2 ada1
ada1p2 deleted
~> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432        - free -  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34         128     1  freebsd-boot  (64K)
         162    33554432        - free -  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

~> gpart resize -i 1 -s 512K ada0
ada0p1 resized
~> gpart resize -i 1 -s 512K ada1
ada1p1 resized
~> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058    33553536        - free -  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058    33553536        - free -  (16G)
    33554594  1919970541     3  freebsd-zfs  (916G)

~> gpart add  -i 2 -a 1M -t freebsd-swap  ada0
ada0p2 added
~> gpart add -i 2 -a 1M -t freebsd-swap ada1
ada1p2 added
~> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058         990        - free -  (495K)
        2048    33552384     2  freebsd-swap  (16G)
    33554432         162        - free -  (81K)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058         990        - free -  (495K)
        2048    33552384     2  freebsd-swap  (16G)
    33554432         162        - free -  (81K)
    33554594  1919970541     3  freebsd-zfs  (916G)

~> 
~> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
partcode written to ada0p1
bootcode written to ada0
~> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
partcode written to ada1p1
bootcode written to ada1
~> swapon -a
swapon: /dev/gpt/swap4: No such file or directory
swapon: /dev/gpt/swap6: No such file or directory
~> gpart show
=>        34  1953525101  ada0  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058         990        - free -  (495K)
        2048    33552384     2  freebsd-swap  (16G)
    33554432         162        - free -  (81K)
    33554594  1919970541     3  freebsd-zfs  (916G)

=>        34  1953525101  ada1  GPT  (932G)
          34        1024     1  freebsd-boot  (512K)
        1058         990        - free -  (495K)
        2048    33552384     2  freebsd-swap  (16G)
    33554432         162        - free -  (81K)
    33554594  1919970541     3  freebsd-zfs  (916G)

~>

Note that reactivating swap did not work, since the recreated swap partitions no longer carry the GPT labels. I then proceeded to reboot. This took a while, and the boot seems to hang at the following point:
cd0: Attempt to query device size failed: NOT READY, Medium not present.


Now I am really confused. What is the connection between cd0 and my gpart commands?

Thanks
 
The cd0 message is just saying there's nothing in the CD drive, but it looks like something isn't quite right in the boot process.

What is in /boot/loader.conf?
Can you import your zpool? What is the output of zpool get all pool0?

To get the swap working again, you can either point it (in /etc/fstab) to /dev/ada0p2 and /dev/ada1p2, or you can add the labels back with gpart -l swap4 -i 2 ada0 && gpart -l swap6 -i 2 ada1
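
For the fstab route, the swap entries would change from the label-based names to the raw partition devices, something like this (hypothetical, matching the layout shown above):
Code:
/dev/ada0p2    none    swap    sw    0    0
/dev/ada1p2    none    swap    sw    0    0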
 
Code:
# cat /etc/fstab
#Device           MountPoint  FSType  Options    Dump  Pass#
/dev/gpt/swap4    none        swap    sw         0     0
/dev/gpt/swap6    none        swap    sw         0     0
/dev/cd0          /cdrom      cd9660  ro,noauto  0     0

# cat /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:/pool0/ROOT"
atapicam_load="YES"
hw.ata.atapi_dma="1"
kern.vty=vt

I am doing all this in single-user mode. I do not have access to vi, so I cannot edit these files. The gpart commands gave a usage error; I even tried gpart add.
Code:
gpart -l swap4 -i 2 ada0      # usage error
gpart add -l swap4 -i 2 ada0  # usage error

I am not sure how to copy the output of zpool get all pool0. It is too big for the screen, and I have a read-only file system without access to pagers like more/less. So I decided to attach images; see the attached files.
 

Attachments
  • IMG_zpool_all_1.JPG (100.8 KB)
  • IMG_zpool_all_2.JPG (109.7 KB)
I apologize; those labeling commands should have been gpart modify [flags...]. When in doubt, always check the man page.

You should have /rescue/vi available, right? Comment out the vfs.root.mountfrom line in /boot/loader.conf; the new bootloader shouldn't need that. What happens when you reboot with that setup? pool0/ROOT is the filesystem that you want mounted at '/', correct? (That is the bootfs property on pool0.)
 
Thanks. Here is what I have so far:
I executed gpart modify successfully.
Code:
gpart modify -l swap4 -i 2 ada0
gpart modify -l swap6 -i 2 ada1

However, I cannot comment out vfs.root.mountfrom in /boot/loader.conf. When I boot in single-user mode I cannot edit any files; it keeps saying I have a read-only file system. You are correct: I do have access to /rescue/vi, but I cannot save my edits.
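
One commonly suggested way out of the read-only trap in single-user mode (a sketch only; whether the remount works this way depends on the setup) is to remount the root read-write before editing:
Code:
mount -u -o rw /               # remount the root filesystem read-write
zfs mount -a                   # mount the remaining ZFS datasets
/rescue/vi /boot/loader.conf   # now edits can be saved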


If I boot normally I still end up getting stuck as before at:
Code:
cd0: Attempt to query device size failed: NOT READY, Medium not present.
 
Eric, thanks for all the help. Here is what I did so far:
I commented out vfs.root.mountfrom in /boot/loader.conf, rebooted the machine, and still got stuck with the cd0 message.
At that point I decided to try commenting out the cd0 entry in /etc/fstab. I then rebooted and everything worked as expected. Thanks again everyone for all the help.

I am still confused by the issue with the CD drive. On a successful boot, here's what dmesg gives me:
Code:
cd0: Serial Number Q9896GEZ20286700
cd0: 150.000MB/s transfers (SATA 1.x, UDMA5, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed

This is identical to what I was getting before, except that the boot does not hang. So it looks like the attempt to mount the CD-ROM somehow causes the boot to hang, and commenting out the line in /etc/fstab prevents the mount attempt.

This might need another thread ...

Thanks again
 
What does your fstab line for cd0 look like? You will see this message on every boot when there is no CD in the drive; it is normal.

It should look something like this:
Code:
/dev/cd0         /cdrom         cd9660  ro,noauto         0         0
 
pool0/ROOT is mounted as ZFS legacy and is also my boot filesystem. I wonder if this may cause issues when trying to use fstab.
Code:
# zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
pool0                            144G  1.59T    23K  none
pool0/DB                         345M  1.59T   345M  /db
pool0/HOME                       120G  1.59T   120G  /home
pool0/ROOT                      23.5G  1.59T   454M  legacy
pool0/ROOT/tmp                  13.5M  1.59T  13.5M  /tmp
pool0/ROOT/usr                  20.0G  1.59T   544M  /usr
pool0/ROOT/usr/local            10.4G  1.59T  10.4G  /usr/local
pool0/ROOT/usr/ports            8.21G  1.59T   343M  /usr/ports
pool0/ROOT/usr/ports/distfiles  4.20G  1.59T  4.20G  /usr/ports/distfiles
pool0/ROOT/usr/ports/packages   3.68G  1.59T  3.68G  /usr/ports/packages
pool0/ROOT/usr/src               910M  1.59T   910M  /usr/src
pool0/ROOT/var                  3.03G  1.59T  83.4M  /var
pool0/ROOT/var/crash            31.5K  1.59T  31.5K  /var/crash
pool0/ROOT/var/db               2.95G  1.59T  2.86G  /var/db
pool0/ROOT/var/db/pkg           91.9M  1.59T  91.9M  /var/db/pkg
pool0/ROOT/var/empty              31K  1.59T    31K  /var/empty
pool0/ROOT/var/log               361K  1.59T   361K  /var/log
pool0/ROOT/var/mail              941K  1.59T   941K  /var/mail
pool0/ROOT/var/run                72K  1.59T    72K  /var/run
pool0/ROOT/var/tmp               142K  1.59T   142K  /var/tmp


# zpool get bootfs
NAME   PROPERTY  VALUE       SOURCE
pool0  bootfs    pool0/ROOT  local
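
A side note on "legacy": it just means the dataset is mounted via mount(8)/fstab rather than by zfs mount. If such a dataset were listed in fstab, the entry would look like the hypothetical line below; the root filesystem itself, though, is found through the pool's bootfs property and the loader, so it needs no fstab entry:
Code:
# hypothetical entry mounting a legacy dataset at /mnt
pool0/ROOT    /mnt    zfs    rw    0    0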
 
Folks,
I may need some help, as it looks like my happiness was short-lived. I was able to boot into the system a few times and figured it was time to examine some other issues I was having with ssh. One of the recommendations I saw was to not have the line below in your /etc/rc.conf:
Code:
hostname="name.domainname"
and instead use:
Code:
hostname="name"


I made this change and I have now once again entered the dreaded boot hang:
Code:
cd0: Serial Number Q9896GEZ20286700
cd0: 150.000MB/s transfers (SATA 1.x, UDMA5, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed

This hang is a little different from the previous one: I am no longer able to boot even in single-user mode. In single-user mode the boot hangs at the line above; in multi-user mode the system reboots at that line.


What are my options? At boot time the loader menu gives me the following choices:
Code:
1. Boot Multi-User
2. Boot Single User
3. Escape to loader prompt
4. Reboot

Options:
5. Kernel: default/kernel (1 of 2)
6. Configure Boot Options
7. Select Boot Environment
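
For what it's worth, option 3 (escape to the loader prompt) allows overriding loader.conf settings for a single boot without editing any files; for example (a sketch), unset a variable and then boot verbose to see where the hang occurs:
Code:
OK unset vfs.root.mountfrom
OK boot -v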
 
So I am going to recommend this thread be closed. I woke up today and, strangely, the machine booted without the hang! I am not sure how the hanging behavior is connected to the initial zpool upgrade; note that I did not experience this issue after the upgrade from 10.4 to 11.1. The nondeterministic behavior is definitely strange, and maybe specific to my hardware. For completeness, I am including the output of zpool history:
Code:
# zpool history 
History for 'pool0':
2011-03-08.16:18:01 zpool create pool0 /dev/gpt/disk4 /dev/gpt/disk6
2011-03-08.16:30:33 zfs set checksum=fletcher4 pool0
2011-03-08.16:44:52 zfs set mountpoint=none pool0
2011-03-08.16:45:52 zfs create pool0/ROOT
2011-03-08.16:47:48 zfs destroy pool0/ROOT
2011-03-08.16:47:51 zfs create -o mountpoint=/mnt pool0/ROOT
2011-03-08.16:48:24 zfs create pool0/ROOT/var
2011-03-08.16:52:17 zfs create -o compression=on -o exec=on -o setuid=off pool0/ROOT/tmp
2011-03-08.16:53:38 zfs create pool0/ROOT/usr
2011-03-08.16:55:02 zfs create pool0/ROOT/usr/local
2011-03-08.16:56:06 zfs create -o compression=lzjb -o setuid=off pool0/ROOT/usr/ports
2011-03-08.16:56:41 zfs create -o compression=off -o exec=off -o setuid=off pool0/ROOT/usr/ports/distfiles
2011-03-08.16:56:47 zfs create -o compression=off -o exec=off -o setuid=off pool0/ROOT/usr/ports/packages
2011-03-08.16:57:57 zfs create -o compression=lzjb -o exec=off -o setuid=off pool0/ROOT/usr/src
2011-03-08.16:58:55 zfs create -o compression=lzjb -o exec=off -o setuid=off pool0/ROOT/var/crash
2011-03-08.16:59:52 zfs create -o exec=off -o setuid=off pool0/ROOT/var/db
2011-03-08.17:00:31 zfs create -o compression=lzjb -o exec=off -o setuid=off pool0/ROOT/var/db/pkg
2011-03-08.17:01:00 zfs destroy pool0/ROOT/var/db/pkg
2011-03-08.17:01:09 zfs create -o compression=lzjb -o exec=on -o setuid=off pool0/ROOT/var/db/pkg
2011-03-08.17:01:26 zfs create -o exec=off -o setuid=off pool0/ROOT/var/empty
2011-03-08.17:01:59 zfs create -o compression=lzjb -o exec=off -o setuid=off pool0/ROOT/var/log
2011-03-08.17:02:13 zfs create -o compression=gzip -o exec=off -o setuid=off pool0/ROOT/var/mail
2011-03-08.17:02:39 zfs create -o exec=off -o setuid=off pool0/ROOT/var/run
2011-03-08.17:03:09 zfs create -o compression=lzjb -o exec=on -o setuid=off pool0/ROOT/var/tmp
2011-03-08.17:04:58 zfs create -o mountpoint=/home pool0/HOME
2011-03-08.17:05:28 zfs create -o mountpoint=/db pool0/DB
2011-03-08.17:06:45 zpool set bootfs=pool0/ROOT pool0
2011-03-08.17:24:52 zfs set readonly=on pool0/ROOT/var/empty
2011-03-08.19:16:45 zfs set mountpoint=legacy pool0/ROOT
2011-03-08.19:16:55 zfs set mountpoint=/tmp pool0/ROOT/tmp
2011-03-08.19:17:02 zfs set mountpoint=/usr pool0/ROOT/usr
2011-03-08.19:17:12 zfs set mountpoint=/var pool0/ROOT/var
2011-03-08.19:17:27 zfs set mountpoint=/home pool0/HOME
2011-03-08.19:17:34 zfs set mountpoint=/db pool0/DB
2011-03-08.19:18:20 zpool set bootfs=pool0/ROOT pool0
2012-02-27.00:26:22 zpool upgrade -a
2016-01-10.17:47:50 zfs set readonly=off pool0/ROOT/var/empty
2016-01-10.21:42:27 zfs set readonly=on pool0/ROOT/var/empty
2018-03-01.10:17:59 zpool upgrade pool0
2018-03-01.23:00:58 zpool scrub pool0

I may take the hanging issue to another thread.

Thanks everyone for your input
 