Currently 6G is swapped out on my desktop PC, but there is something newer: zswap.

Did this,
Code:
zfs create -V 52G -b 8K -o compression=lz4 -o logbias=throughput -o sync=always -o primarycache=metadata  -o secondarycache=none -o com.sun:auto-snapshot=false -o volmode=full MySwap/Swap
No idea what all the options mean.
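For reference, here is my rough reading of what those options do; check zfsprops(7) before trusting any of this:

```shell
# Same command, reformatted with one property per line (see zfsprops(7)):
zfs create \
    -V 52G \
    -b 8K \
    -o compression=lz4 \
    -o logbias=throughput \
    -o sync=always \
    -o primarycache=metadata \
    -o secondarycache=none \
    -o com.sun:auto-snapshot=false \
    -o volmode=full \
    MySwap/Swap
# -V 52G                       create a 52 GiB zvol
# -b 8K                        volblocksize; small blocks to match swap I/O
# compression=lz4              compress blocks with lz4 on the way to disk
# logbias=throughput           tune ZIL behaviour for throughput over latency
# sync=always                  treat every write as synchronous
# primarycache=metadata        ARC caches only metadata, not the swap data itself
# secondarycache=none          never spill this volume into L2ARC
# com.sun:auto-snapshot=false  keep auto-snapshot tooling away from swap
# volmode=full                 expose the zvol as a regular GEOM disk device
```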
 
I don't think you need sync. Let it compress swapouts and let the compressed data sit there in RAM. If a crash occurs, your swap content is worthless anyway. Now all we need to do is find the sysctl that increases the swapout of inactive memory. But here comes the downside of the memory compression. You can't really cache that or share it between processes. Only your private memory can be compressed.
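For anyone who wants to experiment: FreeBSD does have sysctls that make the pagedaemon swap out idle process memory more eagerly. A sketch, with the stock default values shown; tune to taste:

```shell
# Encourage swapout of idle process memory (FreeBSD; see tuning(7)).
# The values below are the defaults, shown as a starting point.
sysctl vm.swap_idle_enabled=1      # enable idle-process swapout
sysctl vm.swap_idle_threshold1=2   # seconds idle before pages become candidates
sysctl vm.swap_idle_threshold2=10  # seconds idle before swapout is considered necessary
```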
 
Reading that article brought back some memories; I had one of the original Macs, so I had the limited physical memory.

Below is my opinion, based on pretty much nothing :)

CPU performance has increased to the point where the cost to compress/decompress is negligible. Of course that depends on the algorithms used and how things are tuned, but most people never notice.

The usefulness of compressing data in RAM really shows as RAM fills up. There is likely a lot of tuning magic going on for you; think about how Java garbage collection runs to reclaim memory for you. Compressed memory needs to be decompressed before it can actually be used, which has a theoretical lag for the user (yes, likely hidden by other things such as keystrokes and mouse movements). I think it's a more useful feature on systems with limited RAM (a typical smartphone) than on servers with tons of RAM (of course the specific workload on the server matters).

I wonder how it affects overall memory fragmentation in the system. Simplistically, all allocations are multiples of the page size, so if an allocation gets compressed, does it get compressed in place or compressed to a new location (a smaller overall allocation)? If compressed to a new location and the original freed, is there a system-level hit on reclaiming that? Then, when you go to use it, you need to decompress it, which winds up needing more blocks to decompress into.

ZFS storing the data compressed on the device uses less bandwidth on the I/O system, so it has a theoretical advantage reading/writing to the device.
Compressed ARC means that for a given ARC size it can hold more items, so the system can have a greater hit rate instead of going back to the device. Decompressing something pulled from ARC carries the same penalty as decompressing what is read from the device, so the ability to hold more in ARC, and the greater hit rate that brings, gives better overall system performance.
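As a rough illustration of why compression pays off for cache and swap alike: how much you gain depends entirely on the data. Here gzip stands in for lz4 since it's universally available; actual ratios are workload-dependent:

```shell
# Compare how two kinds of data compress: zero-filled pages (the best case,
# common in freshly allocated memory) vs. random data (the worst case).
head -c 1048576 /dev/zero    > /tmp/zeros.bin
head -c 1048576 /dev/urandom > /tmp/random.bin
gzip -kf /tmp/zeros.bin /tmp/random.bin
wc -c /tmp/zeros.bin.gz /tmp/random.bin.gz
# The zeros shrink to roughly a kilobyte; the random data barely shrinks at all.
```

Swapped-out desktop memory tends to sit somewhere between these extremes, which is why "more items per gigabyte of ARC" is a real effect and not just a theoretical one.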
 
DragonFly BSD is still the king of RAM efficiency in my experience. It was already very competitive with ZFS in 2009: https://oda.oslomet.no/oda-xmlui/bi..._MartinChristoffer.pdf?sequence=2&isAllowed=y

In terms of performance, I think HAMMER2 is much faster than HAMMER in some situations, and in some common situations faster than all Linux filesystems.

On a standard desktop with XFCE installed, DragonFly BSD seems to me to use about 300MB less RAM than FreeBSD, at least according to neofetch. On the other hand, FreeBSD has fewer bugs in apps and ZFS can be faster than HAMMER2 in some situations. I also think FreeBSD's CPU performance is slightly higher than DragonFly BSD.
 
This is my current RAM usage on DragonFly BSD:

(screenshot: 2023-01-08-153655_1920x1080_scrot.png)

If I understand it correctly, this is less than 9MB of active RAM usage. A truly spectacular result and better than 99% of Linux/BSD systems. The only system I can think of that does even better is Alpine Linux, but as far as I know BSD can manage memory better than Linux systems when RAM usage increases.
 
I think Matt has done some excellent work with DragonFly BSD; it shows the beauty of Open Source (I think his website has the history).
Hammer/Hammer2 has its design goals, some overlapping with ZFS, some not. Like a lot of others, I'd love to see OpenZFS ported to DragonFly (yes, I know it's a lot of work and may not even be possible), but as Hammer2 gets more miles on it, it will likely wind up as stable as UFS/ZFS.

Memory management when approaching "End Of RAM" is always interesting, and responses typically fall into one of two camps (yes, it's not binary and there are many points in between):

- Fail spectacularly and reboot quickly.
- Run severely degraded until a human can intervene.

I hate reboots for unknown reasons, but you need to be able to log in to fix a degraded system. Neither is 100% correct, neither is 100% wrong. Reality seems to be a mix of the two.
 
It's 80 MB of active RAM. However, if DragonFly is like FreeBSD, it's likely a meaningless figure, because a high value of Free+Cache+Inact creates no incentive to trim memory from the active queue.
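On FreeBSD you can compute the size of the active queue directly from the VM counters instead of eyeballing a screenshot; DragonFly's sysctl names may differ, so treat this as a FreeBSD-only sketch:

```shell
# Active queue size in MiB, computed from the VM statistics counters (FreeBSD).
pages=$(sysctl -n vm.stats.vm.v_active_count)
pagesize=$(sysctl -n hw.pagesize)
echo "$((pages * pagesize / 1048576)) MiB active"
```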
You're right.

I have about 89MB of active RAM usage right after I boot FreeBSD after logging into dwm. On both FreeBSD and DragonFly I have a very limited number of services started, eg dbus and hald on DragonFly, and about three other services. DragonFly and FreeBSD are ultimately quite similar in terms of active RAM usage right after login. I thought DragonFly scored better here, because in neofetch I see that DragonFly with XFCE is using over 200MB less RAM than my particular FreeBSD setup for some reason.

I had a crash with swap on zfs-zvol with compression.

FreeBSD's default layout works for me. I never had a full system crash on FreeBSD, it's the only system I can say that about.
 
I heard OpenBSD is also good when it comes to not wasting memory.

Note: I tend to push my system to the limit, with the rule of thumb that the CPU should not sleep.
 
A very normal, traditional crash: a sudden reboot, without any further warning or notice.
It happened when I was using poudriere aggressively, so it was eating swap space.
I have 50G of swap space, but the crash happened at around 6G of swap usage.
I did not analyse the crash; I just reverted to traditional swap space to be certain.
Note: a crash is not exceptional when the swap is compressed.
 
I have a "feeling", not "proof", that there might be a relation.
I have read that other people had similar issues...
But there is another parameter I had not thought about: "zvolblocksize"...
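The ZFS property itself is called volblocksize; you can check what the zvol was created with (dataset name taken from the earlier post):

```shell
# Inspect the block size the swap zvol was created with (see zfsprops(7)).
zfs get volblocksize MySwap/Swap
# A mismatch between volblocksize and the page/swap I/O size can cause
# read-modify-write amplification on every swapout.
```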
 
There are a few things I don't quite understand after reading the Lifewire article:
- Won't this require more energy? RAM requires virtually no energy; doesn't letting the CPU compress very frequently require more?
- Haven't we gotten to the point where this is largely redundant for desktop systems? I have very little trouble even with 4GB of system RAM at the moment. I use dwm, which uses extremely little RAM. Many desktops now have 16GB of RAM. When do you ever use that on a desktop system? Only with (heavy) flight sims have I ever seen such high RAM usage on desktops, but I hardly know anyone who frequently plays a flight simulator, so that seems more of a niche to me.
- What are the numbers behind this story? The article talks about 'performance improvements', but no hard numbers for common scenarios are mentioned.

Seems like something that was invented to make use of the large number of cores that CPUs now have, not because desktop users really need RAM compression.

This kind of optimization seems more useful to me: https://www.phoronix.com/news/OpenZFS-Uncached-Prefetch
 
Compiling vscode, iridium, or chromium takes very long when not done in memory.
That's true, but why would a desktop user, more specifically a macOS user, ever compile one of these apps themselves?
I recently compiled ZFS myself on Linux and it actually went pretty fast on my primitive system.
Most apps require little RAM for compilation.
 
Voltaire my opinion: it's moderately useful on low-memory systems as RAM gets "full". A lot depends on the workload and intent of the system; something building ports is effectively "kick off a batch job and walk away". How long it takes is really the only metric you care about, so use all the memory.
User-based graphical workstations? I think the load is a lot more variable: opening/closing apps, tabs in a browser, a quick editor over here, so subjective responsiveness is the driving metric (is it fast enough for the user). Would RAM ever get full enough to start desiring compression? I don't know.

A lot of the time it's not really about how many resources you have; it becomes how efficiently you use them. The original Macs with a 9-inch B&W screen, 128K of memory, and a 3.5-inch floppy? They were pretty responsive for the day, and the compression utilities helped because the system didn't need to go out to the floppy :)
 
I assumed that systems like the MacBook Pro would always have a minimum of 16GB of RAM as a base configuration at their price. But I checked, and Apple asks €1,619.00 for 8GB of central memory and 256GB of SSD storage. It's amazing how people waste their money on that. It also largely explains the article; I translate it in my head as: Apple is so stingy that they put 8GB of RAM in their most premium products.

Even then, I don't think you often go over that 8GB. It is mainly recent games that need more than 8GB, and none of the best games of 2022 work on macOS. Think Elden Ring, Ghostwire: Tokyo, A Plague Tale: Requiem, and Stray. None of those games were developed for macOS.
 
I just need to compile one port, chromium, in poudriere with the TMPFS option set to all, and my system uses more than 16G of memory ...
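For anyone reproducing this, the relevant poudriere.conf settings are below; TMPFS_LIMIT is the knob I'd suggest looking at if tmpfs builds are eating into swap (values here are only illustrative):

```shell
# /usr/local/etc/poudriere.conf
USE_TMPFS="all"   # build entirely in RAM-backed tmpfs: fast, but memory-hungry
TMPFS_LIMIT=12    # optional: cap each builder's tmpfs (in GiB) to limit swap pressure
```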
 