Disable delay on ctrl-alt-delete in vt

For a custom minimal live FreeBSD system, I'm looking for a way to do an instant reboot
when Ctrl-Alt-Delete is pressed on the vt/bash command line. In X.org this works without problems by setting
Code:
  sysctl kern.panic_reboot_wait_time=0
  sysctl debug.kdb.panic=1
to simulate a panic followed by an instant reboot.

However, in the vt CLI there's a delay of something like 10 seconds before rebooting. Is there any possibility to skip this?

Setting debug.kdb.panic to 1 works in the vt CLI, but how do I have a command or script triggered by ctrl-alt-delete?
 
This is an inherently flawed design. 😰 A kernel panic(9) should not be part of regular operations. ❗ Having said that, it says right in the manual page of vt(4):
kern.vt.kbd_panic
Enable panic key combination.
The phrase panic key combination refers to the panic action in a keymap(5) file, so you need an appropriate keyboard layout loaded. 🪛 For further information see Thread 81273.
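For illustration, a keymap(5) fragment could bind the panic action to Ctrl+Alt+Delete. This is a sketch: the column order (scancode, base, shift, cntrl, shift-cntrl, alt, shift-alt, cntrl-alt, shift-cntrl-alt, lock state) and scancode 083 for keypad Delete are assumed from the stock US layout and may differ for your keyboard:
Code:
  # change the cntrl-alt action on keypad Delete from "boot" to "panic"
  083   del    '.'    '.'    '.'    '.'    '.'    panic  boot   N
Load the edited keymap with kbdcontrol -l and set kern.vt.kbd_panic=1; together with kern.panic_reboot_wait_time=0 from the first post, that should make the combination reboot without the wait.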
 
Flawed or not, the delay is an (apparently) empty loop or delay call that's activated somewhere on Ctrl-Alt-Delete. The reboot itself works, and no keymap files are present on this system. I only want it to happen immediately instead of after a delay.
From an X graphical session with Openbox, I just added my hard-reset command to the Openbox configuration file, so Ctrl-Alt-Delete inside X means instant reset. This is what I would like in vt too. It's a minor detail, though. A complicated workaround would be to start X invisibly with a virtual framebuffer driver and send Ctrl-Alt-Delete to its root window. I haven't tried that yet.
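For reference, the Openbox binding mentioned above looks roughly like this in ~/.config/openbox/rc.xml (hardreset is a placeholder for whatever reset script you use):
Code:
  <!-- inside the <keyboard> section of rc.xml -->
  <keybind key="C-A-Delete">
    <action name="Execute">
      <!-- "hardreset" stands in for your own reset command -->
      <command>hardreset</command>
    </action>
  </keybind>
Reload Openbox (openbox --reconfigure) after editing for the binding to take effect.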

In which C file in src is this implemented? Something in vt_core.c? It's pretty long...
 
The delay is NOT an empty loop. It is an orderly shutdown of the system.

Why not hit the power switch? From a software point of view, a kernel panic is equivalent to that. Now, you might argue that your system survived a power failure (= kernel panic) once, or a few times, so you are good to go. If that's your belief, you need to learn something about how OSes and storage systems work. Let me be honest: in your situation, I would suggest using a sledgehammer to shut down the computer. Because after that, at least you won't complain that it didn't reboot.
 
I could imagine a specific scenario where this is safe: The system itself is always read-only, and all runtime data (including possible "user files") is always considered entirely "scratch" (never needs to be saved) and lives in RAM (e.g. on tmpfs). This could be true, e.g., for some "browser kiosk system" (do they still exist?). Even then, you should consider running some shutdown scripts (e.g. saving the system clock to the hardware RTC).
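A minimal sketch of such an "instant but not reckless" reset for a RAM-only box, using the sysctls from the first post (any clock-saving or similar housekeeping would go before the panic):
Code:
  #!/bin/sh
  # hypothetical pre-reset hook for a RAM-only kiosk system:
  # flush any dirty buffers, then trigger the instant panic-reboot
  sync
  sysctl kern.panic_reboot_wait_time=0
  sysctl debug.kdb.panic=1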

In all other cases, I'm with ralphbsz ... this is just a crazy dangerous idea.
 
The system itself is always read-only, and all runtime data (including possible "user files") is always considered entirely "scratch" (never needs to be saved) and lives in RAM (e.g. on tmpfs).
Such systems exist. They used to be called "stateless computers". In a nutshell, they boot from read-only storage (over the network), they only use their RAM for internal storage, and any output is only ever sent over network protocols to other nodes (which have their own consistency mechanisms, for example databases) or written to shared file systems with very strict consistency guarantees. I saw this being used in the late '90s or early 2000s when the first blade servers showed up. It might have been when I worked at HP around then.

The idea is that you set up a computer on "disposable" hardware, such as a very low-cost blade, and make it run a task (like a web server or database server or business logic front-end). If you don't need it right now (for example for load balancing), you simply crash it and boot again and start a different workload. I remember even thinking through schemes where we had a larger set of blades (1000s) in a data center, with a few of them set up with hardware accelerators (for example for RAID parity calculations, checksum calculations, or cryptography) and others with storage (every 5th blade had 2 disks attached), and we worked out how to do optimal placement of workloads onto these partly inhomogeneous blade server farms.

I think the use of VMs has made that somewhat obsolete. Then came Docker, Kubernetes and all that stuff. Today, large data centers have very complex load balancing and workload placement mechanisms, which underneath rely on a combination of VMs and stateless computers.

But this is not something an amateur with 1 computer running 1 OS should mess with. Do not meddle in the affairs of dragons, because you are crunchy and taste good with ketchup.
 