What's happening to my system? I see some ugly errors of various types.

Hello. What's happening to my system (FreeBSD 14.2-RELEASE)? It does not look good at all:

Code:
pid 75587 (bhyve-win), jid 0, uid 0: exited on signal 6 (no core dump - other error)
tap14: link state changed to DOWN
tap14: link state changed to UP
pid 75663 (bhyve-win), jid 0, uid 0: exited on signal 6 (no core dump - other error)
tap14: link state changed to DOWN
tap14: link state changed to UP
pid 75742 (bhyve-win), jid 0, uid 0: exited on signal 6 (no core dump - other error)
tap14: link state changed to DOWN
tap14: link state changed to UP
tap14: link state changed to DOWN
tap14: link state changed to UP
tap14: link state changed to DOWN
tap17: link state changed to UP
pid 75981 (bhyve-win), jid 0, uid 0: exited on signal 6 (no core dump - other error)
tap17: link state changed to DOWN
tap14: link state changed to UP
tap14: link state changed to DOWN
tap14: link state changed to UP
pid 5558 (firefox), jid 0, uid 1001: exited on signal 10 (core dumped)
linux: jid 0 pid 88595 (ping): unsupported setsockopt level 255 optname 1
tap14: link state changed to DOWN
pid 5357 (qemu-system-x86_64-), jid 0, uid 1001, was killed: a thread waited too long to allocate a page
tap20: link state changed to DOWN
vm_fault: pager read error, pid 5391 (Xorg)
pid 5391 (Xorg), jid 0, uid 0: exited on signal 6 (no core dump - bad address)
tap20: link state changed to UP
pid 13591 (firefox), jid 0, uid 1001: exited on signal 10 (core dumped)
pid 14106 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)
pid 14572 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)
pid 14028 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)
tap14: link state changed to UP
pid 17346 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)
drmn1: [drm] Resetting rcs0 for preemption time out
drmn1: [drm] GPU HANG: ecode 9:1:e757fefe, in CanvasRenderer [190747]
{... the two lines above repeat dozens of times ...}
drmn1: [drm] Resetting rcs0 for CS error
drmn1: [drm] GPU HANG: ecode 9:1:7bd4efff, in CanvasRenderer [190747]
{... interleaved repeats of the "preemption time out" and "CS error" resets ...}
Fence expiration time out i915-drmn1:CanvasRenderer<190747>:2!
{... many more of the same GPU hang/reset lines ...}
drmn1: [drm] Resetting rcs0 for CS error
drmn1: [drm] GPU HANG: ecode 9:1:7bd4efff, in CanvasRenderer [190747]
pid 18204 (chrome), jid 0, uid 1001, was killed: a thread waited too long to allocate a page
pid 18272 (chrome), jid 0, uid 1001, was killed: a thread waited too long to allocate a page

I suspect that it happens when I chroot into Linux with the Linuxulator, but I'm not sure...
 
Code:
vm_fault: pager read error, pid 5391 (Xorg)
{...}
pid 18204 (chrome), jid 0, uid 1001, was killed: a thread waited too long to allocate a page
I suspect you're running out of memory. Which then causes several processes to get killed, including a couple of bhyve(8) VMs, in a desperate attempt to free up some memory. The VMs appear to be automatically restarted (by the VM management tool you're using?). But then get killed again, restarted, killed, etc.
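
If it is memory pressure, one knob often suggested for the "was killed: a thread waited too long to allocate a page" message is vm.pageout_oom_seq, which controls how many back-to-back passes the page daemon makes before the OOM killer steps in. Raising it only buys time and is not a fix for whatever is eating the memory; a sketch, worth testing against your workload:

```shell
# Give the page daemon more passes before OOM-killing processes
# (the default is 12; this is a mitigation, not a cure).
sysctl vm.pageout_oom_seq=120

# Persist across reboots:
echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf
```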
 
Yes, it seems like this. But I haven't understood why it runs out of memory so often... I did some tests and I didn't find errors... when I create some VMs I don't use a lot of CPUs or memory. Do you see something strange?

Code:
marietto# dmesg | grep memory

real memory  = 34359738368 (32768 MB)
avail memory = 33141432320 (31606 MB)
pci0: <memory, RAM> at device 20.2 (no driver attached)
[drm] Got stolen memory base 0x3b800000, size 0x4000000

marietto# sysctl -h hw.physmem hw.realmem hw.availmem

hw.physmem: 34156097536
hw.realmem: 34359738368
sysctl: unknown oid 'hw.availmem'
 
You're probably running out of memory.

You should be using drm-61 with >= 14.1. Use graphics/drm-kmod. It will install the correct drm-*-kmod for your system.

If you need to use drm-515, use FreeBSD 13.4.
 
I can't use the drm-61 shipped with 14.2-RELEASE; my monitor does not turn on. It does not work.
I don't want to use 13.4. I would like you to fix the problem with drm-61 on 14.2. Thanks.
 
If you use any official packages of kernel modules, they are still built 'only' against 14.1 until it goes EOL; only then are 14.2-specific versions built and packaged. Upgrading during this window may require building your own copy from the ports tree until the 14.2 packages are out. This often affects users of graphics/drm-*-kmod, although drm-515 users seemed to have an easier run in this last upgrade, where rebuilding may not be necessary. Non-kernel packages are rarely affected by such minor version changes and will keep working through this window, even with 14.1 packages on a 14.2 machine. If you are concerned about kernel module incompatibilities, there is work underway for the package repos to finally and properly address this overlap...

Is the work to make the drm-61 module compatible with 14.2-RELEASE finished and ready to go? Can I try to reinstall it again without having issues?
 
Is the work to make the drm-61 module compatible with 14.2-RELEASE finished and ready to go?
14.1 is still supported, so packages are still being built for 14.1. Just build it from ports and be done. At the end of March, 14.1 will be EOL and the package builders will start building for 14.2.
 
I've compiled the drm-61 module from ports: it worked, but now I see a lot of errors like these:

Code:
pid 5884 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)
pid 6548 (firefox), jid 0, uid 1001: exited on signal 11 (core dumped)

I would like to see how much memory is available, but the command to get this information does not work:

Code:
marietto# sysctl -h hw.availmem

sysctl: unknown oid 'hw.availmem'

Anyway, it does not seem that I have a low level of available memory:

Code:
marietto# sysctl hw | egrep 'hw.(phys|user|real)'

egrep: warning: egrep is obsolescent; using /usr/local/bin/ggrep -E

hw.physmem: 34156097536
hw.usermem: 16944857088
hw.realmem: 34359738368
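
There is no hw.availmem OID; free memory is exposed through the vm.stats tree instead. A rough estimate (a sketch using FreeBSD's standard page counters; "free + inactive" approximates reclaimable memory, and top(1) shows the same counters interactively):

```shell
# Estimate available memory from page counters (pages * page size).
pagesize=$(sysctl -n hw.pagesize)
free=$(sysctl -n vm.stats.vm.v_free_count)
inactive=$(sysctl -n vm.stats.vm.v_inactive_count)
echo "roughly available: $(( (free + inactive) * pagesize / 1048576 )) MB"
```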
 
You might have a combination of problems: running out of memory, and perhaps some of that memory is bad (which causes things to crash and core dump). Have you tried running a memory test?

I once had a server that had some bad memory; errors would only appear when I pushed it hard and used up a lot of it (by compiling a big port, for example). When not using a lot of memory, the server ran just fine. It turned out there were some bad bits somewhere at the top of the memory range. I replaced the memory modules and things went back to normal again.
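
For a quick userland check before a full boot-time memtest86+ run, sysutils/memtester from ports can lock a region of RAM and run pattern tests over it. It only reaches memory the kernel will hand a process, so it is less thorough than a boot-time test, but it is easy to try:

```shell
pkg install memtester
# Lock and pattern-test 8 GB of RAM, three passes (adjust the size;
# testing close to the full 32 GB may itself trigger the OOM kills).
memtester 8192 3
```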
 
(screenshot attached: Screenshot_2025-01-08_11-31-17.png)
 
I will add some swap space. If I'm not mistaken, it should be twice the memory in the system, so for me 32 GB.
 
I will add some swap space. If I'm not mistaken, it should be twice the memory in the system, so for me 32 GB.

That rule is very obsolete, if it was ever valid.

On FreeBSD a good rule is just RAM + a little extra so that you can save any kernel core dump.

Otherwise it depends on what you are doing with your system. A good reason to have at least a little bit of swap is that you can use it to figure out when you are running out of RAM.
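
On FreeBSD, swap can be added after install without repartitioning by using an md(4)-backed swap file, along the lines of the Handbook's recipe (path and size here are illustrative):

```shell
# Create a 4 GB swap file (bs=1m, count in MB) and restrict permissions.
dd if=/dev/zero of=/usr/swap0 bs=1m count=4096
chmod 0600 /usr/swap0

# /etc/fstab entry so it is activated at boot:
# md99  none  swap  sw,file=/usr/swap0,late  0  0

swapon -aL   # activate "late" swap entries now
swapinfo -h  # verify the new swap device
```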
 