What are the FreeBSD OS things you don't understand?

In fact, this question is meant to reveal a bit of weakness.
I don't understand:
- the Porter's Handbook
- PAM: Pluggable Authentication Modules
- /etc/ttys

Maybe you find other things difficult? Difficulty being relative, just like space-time.
 
I have
Code:
kern.sched.preempt_thresh=120            #80
I think it's not a bad idea. But what is it really doing ... ?
I have the feeling (not the science) that it's better for desktop operations, but worse for server operations.
I think it gives more "task switching".
 
That's not related to the VFS but obviously to the scheduler. Yes, the description is a bit terse:
Code:
$ sysctl -d kern.sched.preempt_thresh
kern.sched.preempt_thresh: Maximal (lowest) priority for preemption

A higher number means lower priority, so I interpret it as the lowest priority (number!) a process must at least have to cause preemption. Not sure whether this is fully correct, but one thing is for sure: increasing that number will cause more preemption. This might be beneficial for a desktop workload (more preemption reduces latency), but would be bad for most server workloads (it also reduces throughput).
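If you want to experiment with it, something like this works (using the 120 from above as an example value, not a recommendation):
Code:
# description and current value
sysctl -d kern.sched.preempt_thresh
sysctl kern.sched.preempt_thresh

# change it at runtime, takes effect immediately
sysctl kern.sched.preempt_thresh=120

# make it persistent across reboots
echo 'kern.sched.preempt_thresh=120' >> /etc/sysctl.conf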
 
Zirias, for completeness, here are my vfs sysctls:
Code:
vfs.zfs.min_auto_ashift=12               #9
vfs.usermount=1                          #0, unprivileged users may mount and unmount file systems
vfs.zfs.arc_min=1500000000               #0
vfs.zfs.arc_max=2500000000               #0
vfs.zfs.txg.timeout=5
If I'm correct, a lower txg.timeout means less data lost in case of a power outage. At a price: fewer bulk writes?
 
- PAM: Pluggable Authentication Modules

Never really got that, or LDAP auth (although I only really tried briefly once or twice).
It often amazes me that some things, such as centralised auth for a group of systems, have always been so massively confusing and over-complicated. Central auth is basically assumed in commercial Windows environments and has been for ~30 years.

I'd love to be able to easily create a central authentication server and use it on all my servers with a couple of rc.conf settings; +1 to also support enabling 2FA per server/server-group for SSH.
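From the little I did manage to understand: PAM config is just per-service text files under /etc/pam.d/, one line per module in the stack. If I read the port docs right, per-server OTP 2FA for ssh would look something like the sketch below, with the module from security/pam_google_authenticator in ports (not a drop-in config, just the shape of it):
Code:
# /etc/pam.d/sshd -- sketch only
# type      control      module / options
auth        required     pam_unix.so        no_warn try_first_pass
# extra factor, module installed by the security/pam_google_authenticator port
auth        required     /usr/local/lib/pam_google_authenticator.so
account     required     pam_nologin.so
account      required     pam_unix.so
session     required     pam_permit.so
password    required     pam_unix.so        no_warn try_first_pass
Plus UsePAM and keyboard-interactive (challenge-response) authentication enabled in sshd_config, as far as I understand it.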
 
Alain De Vos, my understanding:

txg timeout is related to "how often ZFS looks to write data to a device (device is being used loosely here)". There are a couple of variables that relate to writing a txg to the device, basically "how much data is outstanding" and the timeouts. If you are doing something that causes lots of data needing to be written you may never hit the timeout. The timeout is basically a failsafe: "if we have not written any data in that time go check to see if there is data to be written and write it out".

Could setting it lower prevent some data loss? Sure, but like everything else with tuning, you have to weigh it against everything else. I/O bandwidth to the devices plays a part here too; if your timeout is too long or your memory limit too high, you can saturate your I/O to the devices, which will stall the overall system until the data is flushed.

I think the default value of txg.timeout is 5 secs; at least it is on my 13.1-RELEASE (I don't have a value set in sysctl.conf for it).
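If you want to poke at both sides of that trade-off, these are the knobs on my 13.x system (names may differ a bit between OpenZFS versions, so treat this as a sketch):
Code:
# the timeout side
sysctl -d vfs.zfs.txg.timeout
sysctl vfs.zfs.txg.timeout

# the "how much data is outstanding" side
sysctl vfs.zfs.dirty_data_max
sysctl vfs.zfs.dirty_data_sync_pct

# lower the timeout at runtime to try it out
sysctl vfs.zfs.txg.timeout=3
Also worth remembering: synchronous writes (fsync and friends) go through the ZIL and don't wait for the next txg, so the timeout mainly affects buffered asynchronous data.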
 
Some sysctl settings. I occasionally run into problems where the solution is to set certain kernel parameters differently using sysctl. If you don't know about it, it often looks like the solution appeared out of thin air, and you're just thankful that you got it working.

I really would like to see a centralised wiki page or handbook chapter which explains the most important sysctl variables and can be consulted all the time.
 
hardworkingnewbie, sysctl -d gives the description, but it often isn't enough. I'd like it to at least print out the default value, but I agree that something in (or alongside) the handbook, along with "what are the implications of me changing it", would help.
I'm sure the documentation team would love to have a patch for it.
 
I don't understand ZFS because I have never used it. I have used Btrfs on Linux, but I don't have any experience with ZFS at all, and at first glance I find it a lot more confusing than Btrfs, which has simpler commands.
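For comparison, the handful of commands the handbook's quick start keeps showing look like this (a sketch; the device and dataset names are made up), though it's the concepts behind them that I get lost in:
Code:
# create a pool on one disk, then a dataset with its own properties
zpool create tank /dev/ada1
zfs create -o compression=lz4 tank/home

# snapshots and rollback
zfs snapshot tank/home@before-upgrade
zfs rollback tank/home@before-upgrade

# overview of pools and datasets
zpool status
zfs list -t all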
 
- /etc/ttys
I understand this control file, but the device assignments in /dev/cua* and /dev/tty* can be confusing.
When should I use which? Why do we have a call-in interface?
Much of this seems to relate to the old days of teletypes.
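As far as I can tell (please correct me): the /dev/cua* nodes are the call-out side you open yourself, e.g. with cu(1), while the /dev/tty* nodes are the call-in side a getty waits on via /etc/ttys. Assuming a USB serial adapter on unit 0:
Code:
# call-out: you initiate the connection yourself
cu -l /dev/cuaU0 -s 115200

# call-in: getty answers on the tty node, configured as a line in /etc/ttys
ttyU0   "/usr/libexec/getty 3wire.115200"   vt100   on  secure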

How about sysctl settings in loader.conf versus sysctl.conf?
Why? Do they take effect earlier in bootup?
 
Setting up NAT in ipfw. Or rather: I did set up a router with NAT and so on. Authoritative DNS, a mail server, and nginx in jails also run on this router. But I still don't fully understand it; the NAT settings for ipfw were arrived at somewhat experimentally (((
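What I ended up with looks roughly like this (a sketch from memory; em0 as the external interface and the rule numbers are just placeholders):
Code:
# /etc/rc.conf
gateway_enable="YES"
firewall_enable="YES"
firewall_nat_enable="YES"
firewall_script="/etc/ipfw.rules"

# /etc/ipfw.rules
#!/bin/sh
fwcmd="/sbin/ipfw -q"
${fwcmd} -f flush
${fwcmd} nat 1 config if em0 same_ports unreg_only
${fwcmd} add 100 nat 1 ip from any to any via em0
${fwcmd} add 65000 allow ip from any to any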
 
How about sysctl settings in loader.conf versus sysctl.conf?
Why? Do they take effect earlier in bootup?
Some of those make sense only at startup and become read-only after the system is booted, while the ones in sysctl.conf are read/write all the time.

Behavior depends (among other things) on the following defines used when creating an OID:
Code:
     CTLFLAG_RD       This is a read-only sysctl.

     CTLFLAG_RDTUN    This is a read-only sysctl and tunable which is tried
                      fetched once from the system environment early during
                      module load or system boot.

     CTLFLAG_WR       This is a writable sysctl.

     CTLFLAG_RW       This sysctl is readable and writable.

     CTLFLAG_RWTUN    This is a readable and writeable sysctl and tunable
                      which is tried fetched once from the system environment
                      early during module load or system boot.
 
Setting up NAT in ipfw. Or rather: I did set up a router with NAT and so on. Authoritative DNS, a mail server, and nginx in jails also run on this router. But I still don't fully understand it; the NAT settings for ipfw were arrived at somewhat experimentally (((
It might be smart to segregate those networking services to jails on another host rather than cram them all on the gateway.
 
The chain of software layers that keyboard and mouse go through until they reach X11. Somebody posted the list here a couple months back, but I lost the reference. I think it is 7 layers :eek:
 
I never managed to write my own device driver. I'm sure I could do it if I tried hard enough, but I've been pushing it along for almost 30 years now.

In the early days I actually wanted to do it, because we had adapter cards we'd built ourselves and talked to them under MS-DOS. So I got the Egan/Teixeira book, and somebody said that should do for BSD, but there were still many strange things in there, and I didn't get around to it. And then PCI arrived, and things didn't get simpler.
 
1. The Makefile to build everything. I’m slowly coming to grips with it some more (and reading make(1) a ton), but holy moly, it seems like whacking through a thick jungle with a machete. I wonder how the devs would approach it if they could start over.

2. How to identify compatible hardware, specifically integrated GPUs. “What laptop works well?” is a super common question, and answers are all over the place. A related question - of prime interest to me and anyone else using FreeBSD as a development platform - is “what is the most powerful laptop I can buy today that works well?” Nobody knows!

3. The development process. I see commits, and discussion on bugs.freebsd.org and reviews.freebsd.org, and on the hackers@ mailing list. It seems to me that there’s not much clear discussion of the things people will be working on - not much direction, or roadmap. My understanding is that there are devs-only mailing lists (is that true?) and I wonder if some of that discussion takes place there. Are there people with a general sense of the direction FreeBSD is headed? If so, who are they communicating that with, and how? I would love to get better insight into the development process. I follow bugs and reviews a bit, but there’s a lot of action, it’s overwhelming, and I’m not sure where to focus.
 
1. The Makefile to build everything. I’m slowly coming to grips with it some more (and reading make(1) a ton), but holy moly, it seems like whacking through a thick jungle with a machete. I wonder how the devs would approach it if they could start over.
Make is a very old, well-tested and well-understood, and nearly universally despised and maligned technology. It is hard to use. Following common patterns makes it easier, but by no means trivial.

There are efforts out there for better build systems. I've used bazel a little bit (and personally don't like it either).
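For the specific "Makefile to build everything" case, the handbook at least gives a well-trodden path through the jungle (a sketch, run from /usr/src; see build(7) for the details):
Code:
cd /usr/src
make -j$(sysctl -n hw.ncpu) buildworld
make -j$(sysctl -n hw.ncpu) buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# reboot into the new kernel, then:
make installworld
# and merge /etc changes with etcupdate(8)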

2. How to identify compatible hardware, specifically integrated GPUs.
FreeBSD consists of two parts. First comes the base system, which only covers a CLI-based system. It doesn't know or care what a GPU is, and is compatible with nearly all hardware. On top of the base come packages. One particularly complex set of packages is graphics drivers, GUIs/DEs, and graphical applications. Those are maintained by volunteers, and that is where hardware compatibility becomes difficult.
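To make that concrete, the usual route for a recent Intel/AMD iGPU is roughly this (module name depends on the GPU; "youruser" is a placeholder):
Code:
# DRM kernel modules come from ports/packages, not base
pkg install drm-kmod

# load the matching module at boot: i915kms (Intel) or amdgpu (newer AMD)
sysrc kld_list+="i915kms"

# accelerated graphics needs membership in the video group
pw groupmod video -m youruser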

“What laptop works well?” is a super common question, and answers are all over the place. A related question - of prime interest to me and anyone else using FreeBSD as a development platform - is “what is the most powerful laptop I can buy today that works well?” Nobody knows!
Define "well". This is a multi-faceted question. Do you want high graphics performance? Use all features of the built-in WiFi? Great ACPI support with suspend/hibernate/resume/...? Or just raw CPU power? Depending on what question you ask, you will get different answers. And in reality, the answers are often partially unknown or unclear.

3. The development process. ... It seems to me that there’s not much clear discussion of the things people will be working on - not much direction, or roadmap. My understanding is that there are devs-only mailing lists (is that true?) ...
Yes, developer mailing lists exist. Also, the core set of developers communicates directly among each other. And the foundation people participate in these discussions, to decide what to fund. I've never seen any of this, and I don't think it's public.
 
(Warning: Topic shift, to make systems and Bazel)

Wondering what you don't like about it. Not to argue but to try to understand why.
If you are trying to build a user-space application (which requires compiling and linking from many source files), and you want to end up with one or more binaries, it is quite easy to use. It can build Android and iOS apps, and Docker and Kubernetes "packages" (none of which I happen to be interested in). The language used to describe the sources and targets is very simple, the syntax is clean and Python-like. So far, so great.

Problem 1: The location of the sources will be a single hierarchical directory tree. The output (for objects and executables) will be in a location determined by Bazel. If you want to put executables in interesting places (for example for a complex system test harness that you have written and that lives in a source directory, or for tools needed for further stages of the compilation), then either you adapt your existing workflow to the location where Bazel wants to put them, or you go into the barely documented corners of Bazel.

Problem 2: Using the same source base, I perform builds on 4 machines (one FreeBSD, two RPis running Debian, and one Mac), and installs on 3 machines (all but the Mac). On different machines, different build and install rules need to be used. I have to be able to modify compiler flags in a platform-specific way. I need to select different targets. For example, on the FreeBSD machine I need to build and install executables A, B and C, and install one /usr/local/etc/rc.d/A script and the corresponding /usr/local/etc/A.conf file. On the two RPis, I need to build executables U, V and C, and on each of them a different version of a systemd configuration file for U or V. On the Mac, everything gets built, nothing gets installed, but there the test harness needs to be built and executed (my Mac laptop is the fastest machine I have, and the test harness uses about 2-3 hours of CPU time). In addition to supporting different targets per machine/architecture, I also have different external dependencies. For example, on the Mac I don't depend on GRPC; instead I use a local RPC-stub library. My personal Python libraries are in different locations on FreeBSD versus Debian. Putting "conditionals" (if statements) into the build file is at the edge of what Bazel is comfortable with, and the documentation is not to my liking.

Note: We're not talking about cross-compiling here. Most of the code is actually in Python (so it needs little compilation). We're talking about having different targets and build rules, and those selections are platform- and node-specific.

Problem 3: Bazel is good at building artifacts that reside within the "environment", a set of source- and target directories. But I need the assembled artifacts to be installed in many places (see above). What I can't find is support for "make install", in particular with dependency tracking.

I'm sure any of these problems can be worked around within Bazel. But I happen to have a functioning (barely functioning, overly complex, and brittle) system of makefiles with if statements that call other makefiles, and maintaining that is currently easier than finishing switching.

YMMV. As a matter of fact, if you are using a build cluster, and you are building large artifacts (systems that rely on thousands or tens of thousands of source files), Bazel is going to be much better than make.
 
Define "well". This is a multi-faceted question. Do you want high graphics performance? Use all features of the built-in WiFi? Great ACPI support with suspend/hibernate/resume/...? Or just raw CPU power?

I want raw CPU performance for programming, on a machine that is quiet and has no problem feeling responsive when browsing websites, and suspends/resumes without issue. Faster wifi is better, of course, but I understand the general driver limitations today and if I need faster internet I can plug in via ethernet. The graphics card just needs to handle xfce, firefox, emacs, etc.

fwiw I am doing exactly that on a Thinkpad E495 at this very moment. It is a very pleasant little machine. The screen is a bit too dim for my liking (250 nits), and the CPU is quite a bit weaker than I'd like (Ryzen 3500U) - but I get to run FreeBSD on it. So far wifi, suspend/resume, and sound have all been smooth.

I have an X1 Carbon Gen 9 (i7-1185G7) on the way. It should have a brighter screen and a more powerful CPU. If I can get my work done on it, great. If not, I'll send it back and keep using the E495 until FreeBSD 14 is released and re-evaluate hardware options then.

in reality, the answers are often partially unknown or unclear.

Which is exactly what I said: nobody knows. I don't think there's a cabal of FreeBSD users running super-powerful laptops that are keeping that information to themselves. There's a spectrum ranging from "works well" (old) to "powerful" (new, prob doesn't work). I am trying to maximize power while staying in the "works well" space - which is pretty much the default for any professional programmer, and is why so many have been using Intel Macs for 15+ years (assuming they want a unix-like OS).
 