32 bit tier 2?

So, I inherited a Dell Inspiron N4110 that has a Pentium 4 2 GHz processor and 3 GB RAM. I'm trying out OSes on it and FreeBSD's turn came up. But then I noticed 32-bit is now tier 2, and I'm not sure what that means, effectively. Does it mean that stuff's already breaking, or that stuff is not being maintained, or what? I don't wanna put it on there only to have packages not working two weeks from now (or six months from now, for that matter), and I'm OK looking elsewhere, but I love FreeBSD and would like to use it if it'll keep working for a while.
 
32-bit hasn't been deprecated yet. But as tier-2, kernel build errors aren't fixed nearly as quickly.

15-CURRENT fails to build on i386 at the moment, due to a patch that added some assembler that assumes amd64 (it uses 64-bit registers that obviously don't exist on i386).
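Not the actual patch, but as a made-up illustration of the kind of thing that breaks: inline assembly that names a 64-bit register will only assemble when guarded for amd64, e.g.:

Code:
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical illustration only, not the patch discussed above.
 * %rsp and the movq instruction on it exist only in 64-bit mode, so
 * without the __amd64__ guard this file fails to assemble on i386.
 */
#if defined(__amd64__)
static uint64_t
current_stack_pointer(void)
{
	uint64_t sp;

	__asm__ volatile("movq %%rsp, %0" : "=r"(sp));
	return (sp);
}
#endif

int
main(void)
{
#if defined(__amd64__)
	printf("stack pointer: %#jx\n", (uintmax_t)current_stack_pointer());
#else
	puts("i386 build: the 64-bit assembler is compiled out");
#endif
	return (0);
}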

i386 was dropped by Red Hat over ten years ago and by Fedora at least five years ago. My reason for pushing for its deprecation was twofold. First, it takes a significant amount of developer effort to remain cognizant of 32-bit issues when writing code on a 64-bit system; most src build failures are due to 32-bit issues. Second, all the Linux distros I was aware of at the time had already deprecated their 32-bit support many moons earlier. It only made sense to open a discussion about deorbiting 32-bit. People didn't really want to talk about it until they saw that almost all Linux distros had already deprecated support.

BTW, i386 is the only FreeBSD platform that has a 32-bit time_t. All others, including all the other 32-bit platforms, use a 64-bit time_t. OpenBSD moved their i386 to a 64-bit time_t; FreeBSD did not, because doing so would have significantly broken backward compatibility.

I'd suggest people who absolutely need long-term 32-bit support migrate to NetBSD. They pride themselves on supporting over 50 different hardware platforms. Even then, the 2038 rollover of a 32-bit time_t will certainly cause a fair bit of pain, much like Y2K did. Y2K only turned out to be a non-issue because people put in a lot of work to make sure it didn't cause any problems. I was there. It was a lot of hard work. I don't know what NetBSD's 2038 plan for 32-bit platforms is.

The 2038 problem is not an issue for 64-bit platforms, or for any 32-bit platform that uses a 64-bit time_t, because a 64-bit time_t does not overflow.
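A tiny sketch of where the 2038 limit comes from (plain C, nothing FreeBSD-specific; on a 64-bit time_t the same value is just another ordinary timestamp):

Code:
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/*
 * A signed 32-bit time_t runs out INT32_MAX seconds after the Unix
 * epoch.  This prints 2038-01-19 03:14:07 UTC; one second later a
 * 32-bit time_t wraps to a negative value, while a 64-bit time_t
 * keeps counting.
 */
int
main(void)
{
	time_t last = (time_t)INT32_MAX;	/* 2147483647 */
	char buf[64];

	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", gmtime(&last));
	printf("a 32-bit time_t overflows one second after %s\n", buf);
	return (0);
}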

A person would be well advised to migrate off 32-bit platforms before 2038. This is probably why most Linux distros have dropped 32-bit support. The time investment will not pay for itself.
 
I think the biggest effect of i386 being tier 2 on version 12 is: no more pre-built packages. You either move to version 13 (which will supposedly have pre-built packages in 32-bit mode until it is EoL), or build software from ports, or move to 64-bit if possible.

On my server at home, I grudgingly reinstalled into 64 bit mode. Don't like it, but I respect the decision the maintainers made.
 
I think the biggest effect of i386 being tier 2 on version 12 is: no more pre-built packages. You either move to version 13 (which will supposedly have pre-built packages in 32-bit mode until it is EoL), or build software from ports, or move to 64-bit if possible.

On my server at home, I grudgingly reinstalled into 64 bit mode. Don't like it, but I respect the decision the maintainers made.
OK, I'm honestly curious. Why do you dislike 64-bit?
 
I have a 32-bit system running 14. But I also use OpenBSD on some very old 32-bit systems and on systems that require 32-bit EFI.

FreeBSD is great, but I find OpenBSD to be a good solution for some systems as well.

If 32-bit support ends I will just be using OpenBSD for my 32-bit systems. That would be my recommendation for others who enjoy using BSD on their 32-bit systems.
 
Well, y'all's posts got me thinking - is this machine strictly 32-bit? As it turns out, later Pentiums (this one included) are 64-bit. So, false alarm. It runs the 64-bit stuff just fine (it's low on mem, but that's manageable).
 
OK, I'm honestly curious. Why do you dislike 64-bit?
I know, as a scientist I should be basing my opinions on data. Meaning CURRENT and ACCURATE data.

My dislike of 64-bit mode comes from its early days, the 1990s and early 2000s. We all used 32-bit machines on our desks and in our data centers until about 199x. CPUs with 64-bit instruction sets showed up in servers around the mid 90s (PowerPC and PA-RISC), and then Intel came out with the Itanium (my neighbor Nick, a famous CPU architect, actually coined the name "Itanic" for it). One of the first things we noticed was that executables became significantly larger. That sadly makes some sense (and the Itanic is an especially bad case, as its instruction set was qualitatively different, neither CISC nor RISC). Then we started getting the first good machines for benchmarking, and we noticed that the CPI (cycles per instruction) was actually somewhat worse. Our educated guess (later confirmed by experts; I'm not a CPU person) was that the machine used twice as much memory bandwidth for simple integer operations that went to memory, and for certain workloads the CPU was memory-starved.

Add to that: I grew up in the era of 8-bit micros; I learned how to do productive work on a Z80 with 64K under CP/M, and on a VAX-11/780 with a whole megabyte. And I shared an IBM 370/168 with 500 other users on 8 MB of memory (it did get pretty slow when all 500 people were logged in). So my instincts were honed to be as conservative with memory as possible, which is why it emotionally hurts me to waste 32 extra bits in a pointer or integer on something that will never be used.

BUT: those negative experiences and emotional fears date from 25 years ago, and from RISC machines plus that architectural abomination, the Itanic. In those days, performance-critical code was written in C, usually by people who looked at the compiler's assembly listings. Today, the architecture I run on is the AMD 64-bit adaptation of the Intel x86 instruction set (a different animal). And much code today is not written to be CPU-efficient anyway (a lot of it runs in a JVM or a Python interpreter). Yes, I know shops where the biggest consumers of CPU cycles are Python programs ... so worrying about a 10% or 20% effect on CPI is completely irrational. My dislike of 64-bit mode is based on EMOTIONS and FEAR.

In the particular case of my server at home (a physically small machine, the size of a shoebox, with a 4-core Atom CPU running at 1.8 GHz), what really worried me was that it was memory-starved to begin with: it physically has only 4 GB of RAM (and I can't install more), and when running FreeBSD 12.X in 32-bit mode, I could only use 3 GB of that. So before I installed FreeBSD 13.1 in 64-bit mode, I was worrying that I would run out of memory (and have to swap more). In reality, that was DUMB: in 64-bit mode, the OS can actually use all 4 GB of the hardware, so I got 33% more memory for free.

You don't have to bother telling me that I need to upgrade (the hardware of) my home server, since I already know that. I'm busy with other projects right now, and it is working fine.
 
I personally dislike dropping 32bit support for two reasons.

The first is really personal: I have well-working 32bit hardware (i386 and armv7). The i386 one is an old laptop. Not really usable with "modern" desktop software of course, but I keep it as an emergency fallback ... why throw it in the bin when nothing is broken? And I actually used it recently when my main laptop temporarily didn't boot and I needed to access the serial console of my server.

The second concerns (older) Windows software. There's still a LOT of i386-only stuff out there you might want to run. Right now, to have a WoW64-capable Wine, you need to build i386 packages in a jail running an i386 version of FreeBSD. But I kind of expect (hope?) there will be a better solution by the time i386 support is indeed gone.
 
The second concerns (older) Windows software. There's still a LOT of i386-only stuff out there you might want to run. Right now, to have a WoW64-capable Wine, you need to build i386 packages in a jail running an i386 version of FreeBSD. But I kind of expect (hope?) there will be a better solution by the time i386 support is indeed gone.
Isn't the whole purpose of Wine converting all its libraries to PE format that, once that's done, you don't need i386 packages for WoW64 support anymore?
 
I personally dislike dropping 32bit support for two reasons.
And the third reason is exploits. While 64-bit was quite new, it provided a certain amount of protection for a while. Now it is the other way round: 64-bit exploit code does not run on 32-bit archs.
 
Isn't the whole purpose of Wine converting all its libraries to PE format that, once that's done, you don't need i386 packages for WoW64 support anymore?

As far as I know it's still not working correctly even on Linux. So for now we're stuck with a native i386 build.
 
The second is concerning (older) Windows software. There's still a LOT of i386-only stuff out there you might want to run. Still, to have a WoW64-capable wine, you currently need to build i386 packages, in a jail running an i386 version of FreeBSD. But I kind of expect (hope?) there will be a better solution until i386 support is indeed gone.

Windows is an absolute shitshow when it comes to 64/32-bit binaries. That toy OS still drags around cruft that has been fudged to run on 32-bit, having originally been written back in 16-bit MS-DOS times...
It's absolutely common on that platform to have programs that use partly 32-bit and partly 64-bit binaries (e.g. the installers, helpers or background services are old 32-bit binaries but the main program is a 64-bit application), so in essence you need Wine to handle both architectures. Or just completely leave that dumpster fire of an ecosystem behind and, if nothing else works, use the Linux layer and Linux binaries - most software nowadays is available at least for that, if not for FreeBSD.
 
The first is really personal: I have well-working 32bit hardware (i386 and armv7)
Isn't that the weird part? So many Linux distros are dropping i386 because "it is 32-bit and thus old", and yet they are keeping armv7. Strangely, I would even suggest that an i386 box capable of running a Linux distro is far more common in the wild than an armv7 box that can. Particularly in some of our poorer countries.

FreeBSD moving both i386 and armv7 to tier 2 makes sense, rather than dropping them outright like the Linux world has.

I could only use 3 GB of that. So before I installed FreeBSD 13.1 in 64-bit mode, I was worrying that I would run out of memory (and have to swap more). In reality, that was DUMB: in 64-bit mode, the OS can actually use all 4 GB of the hardware, so I got 33% more memory for free.
Surely the PAE kernel would have that issue covered. Typically it is quite rare for a *single* process (on a home server) to consume the full 4 gigs!
 
… pre-built packages …

[attached image: 1704399687756.png]
 
Isn't the whole purpose of Wine converting all its libraries to PE format that, once that's done, you don't need i386 packages for WoW64 support anymore?
It's a bit more complex than that: https://gitlab.winehq.org/wine/wine/-/releases/wine-9.0#wow64. The new wow64 mode involves loading i386 code into the first 4 GB of the address space, allocating a stack for each thread there in addition to the normal stack the system's C runtime already provides, and switching between the 64- and 32-bit parts with far jump / iret instructions (changing the cs, rsp and rip registers; fs/fsbase are also important on Linux/FreeBSD, so Wine swaps those as well). In a way this resembles the kernel/userspace boundary.

I keep an appropriately patched version of emulators/wine-devel there if you want to give it a try. I'll probably send the patches upstream once I'm fully content with them. The official FreeBSD packages are likely not going to switch until the Wine devs themselves enable the new wow64 mode by default; I'd expect that to be next year.
 
Surely the PAE kernel would have that issue covered. Typically it is quite rare for a *single* process (on a home server) to consume the full 4 gigs!
It is not unheard of, however. In the past I could do that easily with FEM codes, and once I got the message that the core dump of the Solaris cc was too big for my quota, because it had tried to allocate more than the address space would give it.
But the split is usually 1:3 or 2:2. macOS does a 64K/rest split, IIRC. You don't get the full 4 GB.
 
It is not unheard of, however. In the past I could do that easily with FEM codes, and once I got the message that the core dump of the Solaris cc was too big for my quota, because it had tried to allocate more than the address space would give it.
That's fair, especially if the machine is functioning as a "build" server. When embedding binary resources as encoded .c files, I have run into similar. That said, typically my user limits get hit well before the PAE ~4 gig limit.
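For illustration, a hypothetical generated resource file looks roughly like this (along the lines of what xxd -i emits; real ones can run to tens of megabytes of C source):

Code:
#include <stdio.h>

/*
 * Illustration only: a hypothetical generated "embedded resource"
 * file.  The bytes shown are just the PNG signature; a real generated
 * file carries the whole resource in one enormous initializer.
 */
static const unsigned char logo_png[] = {
	0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a,
	/* ... many thousands more lines of bytes ... */
};

int
main(void)
{
	printf("embedded resource: %zu bytes\n", sizeof(logo_png));
	return (0);
}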
 
On my server at home, I grudgingly reinstalled into 64 bit mode. Don't like it, but I respect the decision the maintainers made.

Well, y'all's posts got me thinking - is this machine strictly 32-bit? As it turns out, later Pentiums (this one included) are 64-bit. So, false alarm. It runs the 64-bit stuff just fine (it's low on mem, but that's manageable).

I had a Pentium D that I assumed was 32 bit only but - to my amazement - it booted the 64 bit version of FreeBSD 12.x and 13.x.

If you run old hardware - take a hard look at the processor. For example, a "Core Duo" is 32 bits but a "Core 2 Duo" is 64 bits. Intel produced the Pentium D from 2005 to 2008. That's pushing twenty years old.
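If in doubt, you can also ask the CPU directly. Here's a rough sketch using the <cpuid.h> header that GCC and Clang provide - it checks the "long mode" bit, which is what separates 64-bit-capable x86 chips from 32-bit-only ones, and it works fine even when compiled and run from a 32-bit userland:

Code:
#include <cpuid.h>
#include <stdio.h>

/*
 * Sketch: query CPUID leaf 0x80000001 and test EDX bit 29 ("LM",
 * long mode).  If the bit is set, the CPU can run an amd64 kernel
 * even if the currently installed OS is 32-bit.
 */
int
main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) == 0) {
		puts("extended CPUID leaf not available: 32-bit only");
		return (1);
	}
	if (edx & (1u << 29))
		puts("long mode supported: this CPU can run amd64");
	else
		puts("no long mode: this CPU is 32-bit only");
	return (0);
}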
 
When embedding binary resources as encoded .c files, I have run into similar.
When you compile a source file with pretty complex logic and a length of 8 MB in one(!) function, this is gonna happen.
 
I had a Pentium D that I assumed was 32 bit only but - to my amazement - it booted the 64 bit version of FreeBSD 12.x and 13.x.

If you run old hardware - take a hard look at the processor. For example, a "Core Duo" is 32 bits but a "Core 2 Duo" is 64 bits. Intel produced the Pentium D from 2005 to 2008. That's pushing twenty years old.
I have some Core 2 Duo and Core 2 Quad systems and can confirm they are 64-bit.
 