uptime

I think I have discovered the root cause of FreeBSD unpopularity:
Code:
~> uptime
 8:30PM  up 247 days,  3:38, 6 users, load averages: 0.90, 0.96, 0.93

Nothing really happens...

Code:
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 2635 root         17  20    0  2082M  1922M kqread   6 5213.0  56.71% bhyve
 1707 root         17  20    0  2082M  1908M kqread   2 245.5H   1.37% bhyve
19103 root         13  20    0  2073M  1781M kqread   2 201:57   0.53% bhyve
 2409 root         13  20    0  2073M  1791M kqread   6  46.1H   0.44% bhyve

This is not any kind of record, this is just an average...
 
So, your kernel has unpatched security issues? nice :p
I know, but I cannot reboot it right now.

Fortunately it has a custom-built kernel with most unnecessary stuff removed, and the bhyve kernels are tuned even further...

And there is no urgent need to reconfigure:

Code:
# zpool status; zpool iostat -v
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 03:44:47 with 0 errors on Fri Oct  1 06:44:48 2021
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0
            ada3p3  ONLINE       0     0     0

errors: No known data errors
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       1.82T  34.4T     10    200   608K  13.4M
  raidz1    1.82T  34.4T     10    200   608K  13.4M
    ada0p3      -      -      2     36   209K  4.66M
    ada1p3      -      -      2     34   204K  4.63M
    ada2p3      -      -      2     36   209K  4.66M
    ada3p3      -      -      2     34   204K  4.63M
----------  -----  -----  -----  -----  -----  -----
 
I'm not the only one, I see.
"Windows": the number refers to how many days pass between reboots.

As for unpatched security stuff:
I always look at what the CVE says. If it is something that does not apply to me (say "inbound web PHP malformed traffic" when I don't have any webserver) I simply ignore it until one pops up I care about.
So yes, if whatever Argentum is running does not have the attack vector exposed, it should be a non-issue.
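That triage step can be partly automated on FreeBSD; a minimal sketch using pkg-audit(8), assuming the host can fetch the vulnerability database:

```shell
# Fetch the latest vulnerability database and audit all installed packages;
# the exit status is non-zero if any vulnerable package is found.
pkg audit -F

# Check a single package instead of everything installed:
pkg audit thefuck
```

pkg audit only covers ports/packages, of course; for base-system advisories you still have to read the SAs themselves.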
 
I always look at what the CVE says. If it is something that does not apply to me (say "inbound web PHP malformed traffic" when I don't have any webserver) I simply ignore it until one pops up I care about.
And I do not have to reboot to upgrade the webserver...
And the webservers are on the bhyve guests, not on the hypervisor.

And after all of this painstaking agony and boredom I type on the terminal:
Code:
~# fuck
No fucks given

~# pkg info thefuck
thefuck-3.31
Name           : thefuck
Version        : 3.31
Installed on   : Fri Oct  8 21:24:26 2021 EEST
Origin         : misc/thefuck
Architecture   : FreeBSD:12:*
Prefix         : /usr/local
Categories     : python misc
Licenses       : MIT
Maintainer     : ygy@FreeBSD.org
WWW            : https://github.com/nvbn/thefuck
Comment        : App that corrects your previous console command
Annotations    :
Flat size      : 659KiB
Description    :
Thefuck is a magnificent app which corrects your previous console command.
It tries to match a rule for the previous command, creates a new command
using the matched rule and runs it. Thefuck comes with a lot of predefined
rules, but you can create your own rules as well.

You should place this command in your shell config file:

eval $(thefuck --alias)

WWW: https://github.com/nvbn/thefuck
 
I think I have discovered the root cause of FreeBSD unpopularity:
Nothing really happens...
No, it just keeps running and running. That's the way I like my .mp3 players.

306 days of uptime is my best screenshot record for a desktop, on my X61 .mp3 player.
The W520 that took over that job when the X61 fan died is currently at 172 days.
 
But yeah, nothing really happens. Even with the amount of tinkering I do. It borders on boring. And I mean 'boring' in a positive sense, it "Just Works™"
I was trying to make a joke: people like systems that are not so boring and keep them busy...
 
As for unpatched security stuff:
I always look at what the CVE says. If it is something that does not apply to me (say "inbound web PHP malformed traffic" when I don't have any webserver) I simply ignore it until one pops up I care about.
That's what you should do (and what I do as well). Still, a patch-level release with none of the SAs affecting my system doesn't happen that often.

It happens a bit more often that the kernel isn't affected, so, in theory, a reboot is not required. But then, you'd have to make sure any service that links to, e.g., some lib from base is restarted. In practice, it's more reliable to just find a timeslot for a reboot.
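Whether the running kernel already matches the installed patch level can be checked with freebsd-version(1); a quick sketch:

```shell
# -k: installed kernel version, -r: running kernel version,
# -u: installed userland version
freebsd-version -kru
# If the running (-r) kernel version is older than the installed (-k) one,
# a reboot is still pending.
```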

Getting an uptime of several hundred days without having any unpatched security issues would IMHO, if at all possible, require an extremely stripped-down system (many WITHOUT_* knobs in /etc/src.conf, many nodevice lines et al. in your kernel config).
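For illustration only, such a stripped-down build might start from a few src.conf(5) knobs like these; which ones are safe to set depends entirely on what the machine actually does:

```shell
# /etc/src.conf -- an example handful of WITHOUT_* knobs (a sketch,
# not a recommendation); see src.conf(5) for the full list
WITHOUT_SENDMAIL=yes
WITHOUT_LPR=yes
WITHOUT_TESTS=yes
WITHOUT_DEBUG_FILES=yes
WITHOUT_LIB32=yes
```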
 
That's your Microsoft training kicking in.
Nah, uptimes are overrated. When I make drastic changes I like to reboot to make sure everything comes up correctly in case of a power failure or some other event that causes the system to restart. Some people just seem to have something against rebooting a system; I don't. I have no problem whatsoever rebooting a system.
 
uptimes are overrated.
Very much! If you think about it, there's never a way to ensure you'll never need a reboot – sooner or later, there will be some issue (e.g. security) forcing you to do it. So, the only way to have something like a "zero downtime service" is to operate that service with redundant instances. But then, reboots don't hurt anyways.
 
We had to move to another service provider.
Well, all the different providers here put the power onto the same set of wires, when you get an ice storm that brings down trees and snaps wires, it doesn't matter who is supplying the power :)
 
So, the only way to have something like a "zero downtime service" is to operate that service with redundant instances.
Key difference here is that you can guarantee uptime of your service, not the server. The service you provide is important, the server that runs that service isn't.
 
I don't. I have no problem whatsoever rebooting a system.
Agreed. On some installs (especially those that use X11 and the GPU and are public-facing) I have them reboot overnight via a cronjob.

It may be excessive, but forcing a cold boot means that when something has gone wrong, I know about it the next morning and can debug the faulty change then, rather than a month (or more!) later, when so many things have changed that any of them could be the cause.
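A nightly reboot like that can be a single system crontab line; a sketch (the 04:30 time and the warning message are arbitrary choices):

```shell
# /etc/crontab entry: reboot every night at 04:30, run as root
30  4  *  *  *  root  /sbin/shutdown -r now "nightly maintenance reboot"
```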

That said, this thread reminded me that I set up a tunnel into work re-using an old server just as COVID was hitting:

Code:
Last login: Mon Oct  4 15:12:02 2021 from x.x.x.x
OpenBSD 6.5 (GENERIC.MP) #3: Sat Apr 13 14:48:43 MDT 2019

Welcome to OpenBSD: The proactively secure Unix-like operating system.

path$ uptime
12:27PM  up 783 days, 20:55, 1 user, load averages: 0.01, 0.06, 0.07
path$

I wish I used a Raspberry Pi to be honest. Would have saved a heap of electricity haha.
 
But sometimes the clients are using the service non-stop and there is no need to reboot...
Ahh, the "service" vs "server" argument :) Telephony with the "5 nines" requirements: that's for the service, not an individual server. So you have redundancy to give you high availability on the service, which lets you upgrade/fix/replace individual servers.

I like seeing a long uptime, but reboot when it's needed. Upgrades (including security patches) and power outages are really the only reasons I typically reboot. As SirDice points out, rebooting after major changes that could affect a reboot after a power failure is a good thing. Make sure it's correct when you control it, not scream into the darkness that it doesn't work when you can't control it.
 
Ahh, the "service" vs "server" argument :) Telephony with the "5 nines" requirements: that's for the service, not an individual server. So you have redundancy to give you high availability on the service, which lets you upgrade/fix/replace individual servers.

I like seeing a long uptime, but reboot when it's needed. Upgrades (including security patches) and power outages are really the only reasons I typically reboot. As SirDice points out, rebooting after major changes that could affect a reboot after a power failure is a good thing. Make sure it's correct when you control it, not scream into the darkness that it doesn't work when you can't control it.
No need to criticize. This is just what happened. I did not have another server back then, and because physical access to the data center was also difficult I just did not reboot for some time:
Screenshot from 2021-10-13 18-50-58.png


After that I moved it...
 
Argentum Sorry, no criticism was intended. I was simply using your post to point out that there is a distinction between a service and the server(s) it runs on. If you only have a single, non-redundant server, of course not rebooting is often the correct thing.

CVEs, security patches: a good sysadmin will actually read, research, and evaluate "does this actually affect me?" instead of simply going "OMG MUST PATCH NOW!!!".
 