Does 32-bit have any limitations?

I've run FreeBSD on several systems since 2010 or so, but recently ended up with an Intel Atom based system that I'd like to use as a low power server. I've never run 32-bit FreeBSD before, and I was wondering whether at this point in 2017, it has any limitations.

One obvious one I suppose is RAM and the 4GB addressing limit. The system has 6GB but I'm not sure how much is being used. Also, does the 4GB limit affect swap at all? For example can the system use more than 4GB of virtual RAM?

I'm more concerned about software limitations though - are there any important apps that just won't run on 32-bit? Or device drivers that won't be available?
 
No limitations on software that I have found. There is the bonus of 32-bit not supporting EFI.
That's just a personal opinion of mine; I don't care for EFI and its mechanisms.
can the system use more than 4GB of virtual RAM?
I doubt it. I do think there was something about PAE on 32-bit. I see no need for more than 4 GB on anything I run.

"A kernel with the PAE feature enabled will detect memory above 4 GB and allow it to be used by the system. However, using PAE places constraints on device drivers and other features of FreeBSD"
https://www.freebsd.org/doc/handbook/bsdinstall-hardware.html
https://forums.freebsd.org/threads/44824/
Looks like the kernel would need to be compiled with the PAE option.
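
If you want to double-check how much RAM the kernel actually detected on that box, a quick sysctl query does it. This is just my own sketch (made-up file name and formatting); hw.physmem and hw.realmem are the relevant OIDs:

Code:
/* Query how much physical memory the kernel detected, via the
 * hw.physmem / hw.realmem sysctls. Compile with: cc -o physmem physmem.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t
get_mem(const char *oid)
{
	uint8_t buf[8] = { 0 };
	size_t len = sizeof(buf);
	uint64_t val = 0;

	if (sysctlbyname(oid, buf, &len, NULL, 0) != 0)
		return (0);
	/* The value is 4 bytes on i386 and 8 on amd64; x86 is
	 * little-endian, so copying the low bytes into a zeroed
	 * 64-bit integer gives the right number either way. */
	memcpy(&val, buf, len < sizeof(val) ? len : sizeof(val));
	return (val);
}

int
main(void)
{
	printf("hw.physmem: %llu MiB\n",
	    (unsigned long long)(get_mem("hw.physmem") >> 20));
	printf("hw.realmem: %llu MiB\n",
	    (unsigned long long)(get_mem("hw.realmem") >> 20));
	return (0);
}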
 
The virtual address space of any single process is limited to 4 GiB. However, the total virtual memory available to user-space processes depends on the amount of swap space and the usable RAM. If your x86 CPU supports PAE, it can utilize up to 64 GiB of RAM.
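
If you want to see that limit from user space, here is a toy probe (nothing official, just a sketch): it keeps reserving anonymous memory until malloc() gives up, which on 32-bit happens somewhere below 4 GiB no matter how much RAM or swap the machine has, while on amd64 it just keeps going until the cap in the loop.

Code:
/* Toy probe: keep reserving address space until malloc() fails.
 * The memory is only reserved, never touched, and intentionally
 * leaked; the point is to see where the per-process address space
 * runs out, not to use the RAM.
 */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	const size_t chunk = 256UL << 20;	/* 256 MiB steps */
	unsigned long long total = 0;

	/* Cap at 64 GiB so the loop also terminates on amd64. */
	while (total < (64ULL << 30) && malloc(chunk) != NULL)
		total += chunk;

	printf("reserved about %llu MiB of address space\n", total >> 20);
	return (0);
}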

Another hardware limitation of 32-bit x86 CPUs is that they are slower. They have fewer and narrower registers and don't have some of the optimizations and extensions that newer x86-64 CPUs have.

Your "low power server" sounds like the memory and performance limitations won't be much of a problem, unless you are doing something very computationally demanding stuff. I think open-source applications should generally work fine, unless they do some platform-dependent low-level stuff. I've had problems trying to make the Chromium web browser work on an old 32-bit PC running FreeBSD, as the CPU doesn't support SSE2 but the browser is configured to require it. If you want to run closed-source software, you will need a version that is compiled for 32-bit x86. You can try to run 64-bit programs in an emulator, but that will be slow.
 
Big files (larger than 2 GiB, and even larger than 4 GiB) work fine; that has been true for about a decade now.
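
The reason is that off_t is 64 bits on FreeBSD even on 32-bit platforms, so file offsets don't depend on the pointer size. A quick self-check, if you're curious (the file name is made up, and the file is sparse, so it doesn't actually eat 5 GiB of disk):

Code:
/* Quick check that large-file offsets work on 32-bit FreeBSD:
 * off_t is 64-bit on all FreeBSD platforms, so seeking past 4 GiB
 * is fine even on i386.
 */
#include <stdio.h>
#include <sys/types.h>

int
main(void)
{
	FILE *fp = fopen("/tmp/bigfile.test", "w");
	if (fp == NULL)
		return (1);

	printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));

	/* Seek 5 GiB into the file and write one byte there. */
	if (fseeko(fp, (off_t)5 << 30, SEEK_SET) != 0) {
		perror("fseeko");
		return (1);
	}
	fputc('x', fp);
	printf("offset after write: %lld\n", (long long)ftello(fp));
	fclose(fp);
	remove("/tmp/bigfile.test");
	return (0);
}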

As geek said: Any process can have at most 4GiB address space; in practice it's a bit less, because some is reserved for libraries, mmap, and such. For most people, this makes no difference, since very few people (outside HPC and crazy scientific computing) run programs that use more memory than that.

The real hardware limit is, as said above, a difficult question. Without PAE, it is actually not even 4 GiB; on my Atom system, only 3 GiB of the 4 installed in hardware are recognized. And PAE used to be a gamble, with restrictions and frequent crashes. I think it is more stable now, but I haven't started using it (my system works so well with 3 GiB, why put any effort into messing with it?).

ZFS works fine on these systems; I've been using it for years, and have not encountered any problems. Follow the instructions in the handbook for low-memory configuration of ZFS. There are rumours around that beginning with 11.1, ZFS has become even more memory hungry; I've not seen any issue from that. Performance of ZFS is appropriate for a home server (depending on workload, it ranges from 50 MB/sec to being disk limited).

My home server is a 1.8 GHz 32-bit Atom (an Intel D525 on a tiny JetWay motherboard), and it does home service just fine: firewall (PF), routing, network services (DNS/DHCP/SMPD/...), a handful of file systems (5 disks total), NFS, and similar things. Neither lack of memory nor lack of CPU speed is a problem for it at all. This is not a giant cluster for HPC, but a system the size of a shoebox.

EDITed to add an anecdote: Two weeks ago, there was a large wildland fire near our house. So near that the fire department ordered us evacuated, since they had given up on defending our neighborhood. One of the first things that went into the trunk of the car was the external backup disk (that went at the same time as passports, credit cards, and other such documents). Since we had another 5 minutes to spare, I ended up putting the whole home server in: if our house burns down, that would save me many hours of having to buy a new server, reinstall, and restore the backup. Unfortunately, the next thing I threw into the car was a box of cookies and a case of soda (in case we get stuck in a hotel parking lot overnight, we'd have something to eat and drink). That's unfortunate because servers have corners and sharp edges, which caused one of the cans of lemonade to explode, and spray sticky lemonade all over the trunk, including the server. BIG OOPS! DON'T DO THAT!

So after the evacuation was lifted (fortunately, none of the houses in our neighborhood were damaged, except for a heavy smoke smell for a week), I took the home server apart and carefully wiped it clean. Fortunately, none of the sticky sugary lemonade got on the motherboard, fans, or disks; the external connectors I cleaned with Q-tips, water, and isopropyl alcohol. It works fine again: uptime is now over 14 days, so it has not gone down since we got back home.
 
Another hardware limitation of 32-bit x86 CPUs is that they are slower. They have fewer and narrower registers and don't have some of the optimizations and extensions that newer x86-64 CPUs have.
By definition, 64-bit instructions execute in the same amount of time as their 32-bit counterparts. So an ADD instruction, for example, takes the same number of clock cycles to execute on 32-bit as it does on 64-bit. You are correct about the registers and extensions, but those are only beneficial in certain situations. So in short, 64-bit isn't always faster than 32-bit. In a lot of cases both execute at the same speed.
 
Actually, in some cases 64-bit is slower. Why? People are always hung up on clock speeds and instruction counts, as if executing many instructions per unit time were the bottleneck. Traditionally that used to be true, when CPUs were much slower: even a VAX-11/780 (a very fast and pretty expensive machine) only executed one million instructions per second, and for most applications the only bottleneck was the CPU.

Today, for many applications the bottleneck is either the memory interface or the IO interfaces (disk or network). And using 64-bit mode stresses those more. To begin with, 64-bit code tends to use slightly longer instructions (instructions are not word-aligned on Intel, so it is not an automatic factor of two like on RISC-style machines). But more importantly, nearly all data structures get larger: every pointer and every long integer that needs to be stored goes from 4 bytes to 8 bytes. Now this is not a uniform factor of two: a lot of data in memory are strings and similar byte-aligned things, and smart programmers use the smallest integer type that has the required range; but not-so-smart programmers just use the default, or the fastest data type. That creates more memory pressure, effectively slowing the system down. And these larger data structures also sometimes end up on disk, when applications (such as databases) store and load binary data, so there is more disk traffic.
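
To put a number on the "data structures get larger" point, here is a toy linked-list node (purely hypothetical, not from any real program). Compiled for i386 it comes out at 16 bytes; the identical source compiled for amd64 comes out at 32 bytes, because the pointers and the long each grow from 4 to 8 bytes and alignment padding is added.

Code:
#include <stdio.h>

struct node {
	struct node *next;	/* 4 bytes on i386, 8 on amd64 */
	struct node *prev;	/* 4 bytes on i386, 8 on amd64 */
	long         key;	/* long is 4 bytes on i386, 8 on amd64 */
	int          flags;	/* 4 bytes on both */
};

int
main(void)
{
	/* Typically prints 16 on i386 and 32 on amd64 (with padding). */
	printf("sizeof(struct node) = %zu\n", sizeof(struct node));
	return (0);
}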
 
So in short, 64-bit isn't always faster than 32-bit. In a lot of cases both execute at the same speed.
A lot of data in memory are strings and similar byte-aligned things, and smart programmers use the smallest integer type that has the required range; but not-so-smart programmers just use the default, or the fastest data type. That creates more memory pressure, effectively slowing the system down.
Haven't compared recently, but several years ago video encoding used to be slower on 64-bit systems for the reasons mentioned above.
I remember that AMD used to offer special math libraries to take advantage of the 64-bit architecture.
 
Wasn't there also an issue with compilers? A 64-bit CPU may have more registers than a 32-bit CPU, but compilers, in general, only use a couple of registers? So all those extra registers aren't even used most of the time?
 
Wasn't there also an issue with compilers? A 64-bit CPU may have more registers than a 32-bit CPU, but compilers, in general, only use a couple of registers? So all those extra registers aren't even used most of the time?
Early on in the history of x86-64 (= amd64), that might have been true. But I think today compilers are really good at analyzing this and keeping needed data in registers, because the benefits are so large: the speed difference between registers and memory (even L1 cache) is huge. And the technology with many registers has existed for decades, long before it became "mainstream" (= commodity), with most RISC machines having dozens. So compilers have long known how to handle this.
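
An easy way to convince yourself is to compile a small function to assembly for both targets (e.g. cc -O2 -S, with -m32 or -m64 on a machine that has both compilers installed). On i386 the arguments below arrive on the stack; on amd64 the SysV ABI already hands them over in rdi, rsi, rdx, and rcx, and an optimizing compiler keeps the loop state in registers too. The function itself is just a made-up example:

Code:
/* Made-up example function; look at the generated assembly to see
 * how many registers the compiler actually uses on each target. */
long
dot(const long *a, const long *b, long n, long scale)
{
	long sum = 0;

	for (long i = 0; i < n; i++)
		sum += a[i] * b[i];
	return (sum * scale);
}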

Regarding the i386 architecture: new exploits for it are decreasing, due to the shrinking number of installations. :)
And for this reason, we should all use VAXes and IBM mainframe instruction set machines as our personal computers, because there are few viruses for them. It's unfortunate that there is no port of Chrome for the Burroughs B5000.
 
And for this reason, we should all use VAXes and IBM mainframe instruction set machines as our personal computers, because there are few viruses for them.
That wouldn't take long if they became popular again. Back then there wasn't the incentive to exploit weaknesses that we have today, so it would just be a matter of time ;)
 