FreeBSD 13 annoyances?

This is what I have in /boot/loader.conf:
Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
opensolaris_load="YES"
zfs_load="YES"

Since my root partition is on a ZFS pool, will the system still boot if I disable everything?
 
Since my root partition is on a ZFS pool, will the system still boot if I disable everything?
Probably not. Like I said, only disable non-essential things. These are all essential, so don't disable them. You can remove the opensolaris_load="YES" though; it was never required (if a module depends on another module, it will get loaded automatically).

By non-essential I mean things like graphics drivers, VirtualBox, etc. You don't need those to boot the system.
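A quick way to review what is loaded and decide what can be pruned from /boot/loader.conf; the module names shown are just examples, not anything specific to this box:

```
# List currently loaded kernel modules
kldstat
# Typical non-essential candidates: i915kms.ko (graphics), vboxdrv.ko (VirtualBox)
# Instead of loading them from /boot/loader.conf, load them later in the
# boot via /etc/rc.conf, e.g.:
#   kld_list="i915kms vboxdrv"
```

Moving such modules to kld_list in rc.conf keeps the loader stage minimal while still getting them loaded before login.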
 
I am planning to upgrade server with zfs root running 12.2 to 13.0
Any precautions I should take care of? Or should I follow the standard route upgrading with freebsd-update?
Just did it yesterday - the last one of my 3 desktop machines. ZFS had no problems. Today I have also upgraded the pool and everything is working. Built world, kernel, removed graphics/drm-fbsd12.0-kmod, installed graphics/drm-fbsd13-kmod after that and replaced all EFI bootloaders.

Everything is working. I have not even rebuilt all the ports yet, just the first one, ports-mgmt/pkg. After that I am able to use ports-mgmt/portupgrade. My desktop with all its applications is working.

Do not perform zpool upgrade -a before you have at least one new bootloader present.
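A sketch of replacing the EFI loader before the pool upgrade; the device name (ada0p1) and the target path are assumptions, as layouts vary (some systems use /efi/freebsd/loader.efi instead) - check yours with gpart show:

```
# Mount the EFI system partition (assuming it is ada0p1)
mount -t msdosfs /dev/ada0p1 /mnt
# Copy the new loader over the old one
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
umount /mnt
# Repeat for every bootable disk, and only then:
zpool upgrade -a
```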
 
I am planning to upgrade server

Besides what SirDice suggested in post # 25, also check /etc/fstab. Are there base system or third-party filesystem mounts (e.g. network, clustered, or FUSE file systems)? Those can interrupt the boot process if a necessary kernel module is missing (disabled in /boot/loader.conf or /etc/rc.conf), or the network connection is lost.

Those mounts should be set up so that, if they can't be mounted, the system ignores them and continues to boot (failok - fstab(5)).
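For example, a hypothetical NFS mount in /etc/fstab with failok set, so a failed mount no longer stops the boot (server and paths are made up for illustration):

```
# Device                  Mountpoint   FStype  Options     Dump  Pass
nfsserver:/export/data    /mnt/data    nfs     rw,failok   0     0
```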
 
If I'd read SirDice's advice before upgrading, I could have saved myself some trouble, especially the part about disabling modules. Plus upgrading packages before the first reboot, which would have also solved the problem. You ought to put something like this in the FAQs and howtos, if you have the time and inclination.
 
I am planning to upgrade server with zfs root running 12.2 to 13.0
Servers are usually easier to maintain than desktops. If you have kernel-bound loadable modules, you should rebuild/reinstall them after the upgrade. Update the UEFI loader in the boot partitions before you run zpool upgrade -a. Hold off on the pool upgrade until everything is working and you are sure you are not going to move back to 12.2.
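A sketch of rebuilding a kernel-bound module after booting the new kernel, using graphics/drm-fbsd13-kmod from earlier in the thread as the example (assuming it was installed from ports):

```
# Rebuild kernel-bound modules against the new kernel sources
cd /usr/ports/graphics/drm-fbsd13-kmod
make deinstall reinstall clean
# Or, if you use binary packages built for 13.0:
pkg install -f drm-fbsd13-kmod
```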
 
I could have saved myself some trouble, especially the part about disabling modules. Plus upgrading packages before the first reboot, which would have also solved the problem. You ought to put something like this in the faqs and howtos, if you have the time and inclination.
I've upgraded so many systems over the years that it more or less became standard practice for me. If I have some time I'll try to whip something up, with some additional notes. I see quite a few people getting caught by the renaming of fusefs(5), for example, too.
 
Hmm, I remember reading something about deleting opensolaris_load="YES" when you use ZFS - it works on my box anyway without removing it. But what about zfs_load="YES" - is that still needed?
 
Hmm, I remember reading something about deleting opensolaris_load="YES" when you use ZFS - it works on my box anyway without removing it. But what about zfs_load="YES" - is that still needed?
I can see from /usr/src/sys/amd64/conf/NOTES that there is a section

Code:
#####################################################################
# ZFS support

# NB: This depends on crypto, cryptodev and ZSTDIO
options         ZFS

#####################################################################

I assume that ZFS can be compiled into the kernel, but this option is not included in GENERIC. I have no idea if or how this option works.
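If you wanted to try it, a custom kernel config might look like this - a sketch only, which I have not verified boots; whether GENERIC already satisfies the crypto, cryptodev and ZSTDIO dependencies mentioned in NOTES is an assumption:

```
# /usr/src/sys/amd64/conf/MYKERNEL
include GENERIC
ident   MYKERNEL

# Build ZFS into the kernel instead of loading zfs.ko at boot
options ZFS
```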
 
it works on my box anyway without removing it.
Before 13.0 (9.x, 10.x, 11.x, 12.x) it was a dependency of zfs.ko, so it was normally loaded automatically when you loaded zfs.ko. That's why it doesn't matter whether you add it or not; it gets pulled in anyway. On 13.0 zfs.ko doesn't depend on opensolaris.ko at all anymore, so it's useless to load it (for ZFS at least; it has some other uses too).
 
13.0 zfs.ko doesn't depend on opensolaris.ko anymore at all, so it's useless to load it (for ZFS at least, it has some other uses too).
Just asking out of my curiosity - what are the other use cases of opensolaris? Did not find much documentation about that.
 
what are the other use cases of opensolaris?
It used to be an ABI layer, just like linux(4), but for Solaris executables. There was a time we had System V R4 and iBCS2 ABI compatibility. That SVR4 layer kind of morphed into opensolaris.ko, and its primary use was for ZFS. To be honest, I'm not sure it supports much else nowadays.
 
It was more of a surprise than an annoyance when blacklisted certificates were listed during the upgrade to 13.0-RELEASE. I went back to the man pages and found a recently added command,
certctl(8). This is really a welcome surprise, since management of certificates seems pretty basic for system security. I also briefly read the blacklistd man page to improve my understanding of its functionality.
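The basic certctl(8) subcommands look roughly like this; this is a sketch of common usage, not an exhaustive list:

```
# Rebuild the trust store from bundled and locally added CAs
certctl rehash
# List the certificates currently trusted
certctl list
# List certificates marked as untrusted (blacklisted)
certctl untrusted
```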
 
I don't use UFS, so does this even apply to me?

If so, are there any instructions on how to do this?
Whatever you use, you should do that.
There's a link here for EFI booting.
And for legacy BIOS booting, look at gpart(8), specifically the bootcode command.

But it's tricky for a beginner. Someone should write a howto...
If you want instructions suited to your setup, post the output of gpart show.
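For a GPT disk with a freebsd-boot partition at index 1, the legacy-BIOS variant is roughly this; ada0 and the index are assumptions that depend on your gpart show output:

```
# Write the protective MBR and the UFS boot stage to the freebsd-boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
# A ZFS root would use /boot/gptzfsboot instead of /boot/gptboot
```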
 
I meant to say that I do use UFS.

I don't think I have EFI .. it's been so long since I created the partitions. Any way to check?

Home backup server :

Code:
$ gpart show
=>       34  976773101  ada0  GPT  (466G)
         34       1024     1  freebsd-boot  (512K)
       1058  968883200     2  freebsd-ufs  (462G)
  968884258    7888876     3  freebsd-swap  (3.8G)
  976773134          1        - free -  (512B)


Live server :
Code:
$ gpart show
=>       63  500118128  mirror/gm0  MBR  (238G)
         63          1              - free -  (512B)
         64  500118120           1  freebsd  [active]  (238G)
  500118184          7              - free -  (3.5K)

=>        0  500118120  mirror/gm0s1  BSD  (238G)
          0  490733568             1  freebsd-ufs  (234G)
  490733568    8388608             2  freebsd-swap  (4.0G)
  499122176     995944                - free -  (486M)


$ gmirror list
Geom name: gm0
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 2
ID: 2236105550
Type: AUTOMATIC
Providers:
1. Name: mirror/gm0
   Mediasize: 256060513792 (238G)
   Sectorsize: 512
   Mode: r2w2e5
Consumers:
1. Name: ada0
   Mediasize: 256060514304 (238G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: DIRTY
   GenID: 0
   SyncID: 2
   ID: 4267686746
2. Name: ada1
   Mediasize: 256060514304 (238G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: DIRTY
   GenID: 0
   SyncID: 2
   ID: 2938659242
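To tell whether the running system was booted via UEFI or legacy BIOS, this sysctl reports it (present on recent FreeBSD releases; availability on very old ones is an assumption):

```
# Prints "UEFI" or "BIOS"
sysctl -n machdep.bootmethod
```

Given the freebsd-boot partition on the first disk and the MBR layout on the second, both of these look like legacy BIOS setups.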
 