ZFS vs HAMMER

This thread spiralled into crap. :( I was hoping to read further about this comparison. However, I don't use storage the same way the use cases here indicate.

I'm a firm believer in disaggregation of the SVC and storage, unless you're willing to shell out serious dollars for enterprise DS.

I guess I would prefer a comparison that would use the same hardware as say an IBM V7000 because essentially it is using FreeBSD w/UFS.
 

If you're looking for an in-depth CompSci analysis of each file system, I'd read the whitepapers on HAMMER's implementation, then watch this (and parts 2 and 3), then draw your own conclusions based on use cases.

Jeff Bonwick and Bill Moore (storage gods) give a very thorough, detailed explanation of ZFS. It's a good watch.
 

I agree about the thread. The main issue I'm seeing with ZFS is that grub2 doesn't play well with it, and I haven't seen anybody with a working solution to it. Or at least not one that works under 10.2. Actually, it's one of the big reasons why I'm coming back to FreeBSD. Having all my stuff on ZFS with self-healing solves a lot of my personal data corruption concerns.

To the person wondering about using this at home: the big reason to use ZFS at home is self-healing. I've got two 1 TB disks, which were probably $150 between the two of them, and I can do my install to those. I'd love to have ECC, but this is for home use and the data gets backed up anyway.

HAMMER is something that I'm curious about, but ZFS is quite good and is already available on FreeBSD. Even early on, ZFS was working pretty well.
 
Were you using HAMMER previously? Can you comment on whether the historical access functionality offers some protection against corruption? For example, if a file gets corrupted on disk, is it possible to roll back to a version from before the corruption happened?
 
I am resurrecting this infamous thread in order to preserve some of my personal findings as a consumer of the ZFS and HAMMER file systems. I will try to stick to technical details as much as possible. Just to be clear: HAMMER here means HAMMER1, which exists and is fully functional. I am not going to speculate about HAMMER2, which is in the works.

The purpose of a file system is to keep your data. In this post I will try to address the following points typically encountered in production:

1. Protection against data corruption
2. Journaling
3. Backup and recovery
4. Inquiry
5. Monitoring
6. Alerting

ZFS is a combined file system and logical volume manager originally designed by Sun Microsystems. HAMMER is a file system written for DragonFly which provides instant crash recovery (no fsck needed!).

ZFS is designed for large data centers where people live by high availability and redundancy. Redundancy means that the data is typically stored on a volume consisting of multiple physical HDDs, in such a fashion that the malfunction of a single drive, or even of several drives, doesn't affect data consistency and availability. The classical approach to this problem is hardware or software RAID, and in that respect one can think of ZFS as a software RAID. The following RAID disciplines are available for ZFS: mirror, RAID-Z, RAID-Z2, and RAID-Z3. In layman's terms, one typically picks 6, 7, or more drives and combines them into a single volume, which in ZFS lingo is known as a ZFS pool. Those drives are physically attached to the computer with a Host Bus Adapter and exposed to ZFS as JBOD (Just a Bunch Of Disks). In typical deployments, file servers with multiple ZFS pools as large as 40-50 TB are common. Hardware RAID cards should not be used with ZFS, even if they support JBOD mode.

ZFS pools are pretty trivial to monitor, and FreeBSD has excellent integration with the S.M.A.R.T. daemon. ZFS on FreeBSD is a hands-down enterprise-grade product. ZFS pools are portable and easy to import from computer to computer and even across OSes, although one has to be mindful of the version of ZFS: the Linux version is older than the FreeBSD one, so a ZFS pool created on FreeBSD may not be importable into Linux. It is possible to use a ZFS volume as an iSCSI target, and FreeBSD does support growing ZFS volumes.
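
For illustration, creating and checking a RAID-Z2 pool out of six JBOD drives might look something like this (device names are hypothetical):

# create a RAID-Z2 pool named "tank" from six whole disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# verify the layout and health of the pool
zpool status tank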

On the other hand, HAMMER is just a file system. That means that if one wants a large logical volume, one should be using HAMMER in combination with hardware RAID. Two brands of hardware RAID cards come to mind: Areca and LSI MegaRAID. Areca cards are supported on FreeBSD by the arcmsr(4) driver, while newer LSI MegaRAID cards are supported by the mfi(4) driver. I have not tested either of these drivers on DragonFly BSD, and that is one of the things on my TODO list (I have a high-end/$700 LSI MegaRAID card in my lab). The immediate questions are how one monitors those cards and whether it is possible to pass the status of the HDDs to the SMART daemon. I am aware of two sets of tools for monitoring LSI cards: mfiutil(8) and the proprietary sysutils/storcli. Areca cards should be supported even better than LSI, as they are open hardware; there is a proprietary tool, sysutils/areca-cli, for inquiry/monitoring of Areca cards, but I am not sure if there is an open-source version. One would have to be very mindful of the support by the DragonFly BSD community before using hardware RAID cards. I am not going to speculate about how much testing is done with hardware RAID cards, but all DragonFly RAID drivers come from FreeBSD, and in my experience those drivers sometimes work and sometimes not quite. DragonFly BSD has spotty support for various monitoring tools, simply because of the size of the community. I am not aware of any special tool that can monitor the HAMMER file system itself.

DragonFly uses /dev/serno for drives, which enables volumes to be moved from one machine to another; I have not played with that feature enough. DragonFly BSD has support for the Linux Volume Manager, but I am not sure if there is any integration between HAMMER and LVM. Theoretically one should be able to use LVM to grow a HAMMER file system; however, I have not seen any evidence on the DragonFly mailing lists to support this statement. On the contrary, I have seen some of the main project contributors stating that HAMMER can't be grown.

Once you have a ZFS pool, or a HAMMER file system on top of hardware RAID, you will need to create ZFS datasets or HAMMER pseudo file systems (PFS for short). In that respect both systems are similar. A single ZFS pool might contain multiple ZFS datasets with different properties. A really cool feature of ZFS is data compression; I personally like lz4. A HAMMER volume can also contain multiple PFSs with different properties (master/slave), but no nested PFSs. I think support for compression on HAMMER was in the works.
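
As a rough sketch (all names invented), creating a compressed ZFS dataset versus a HAMMER master PFS looks like this:

# ZFS: a dataset with lz4 compression enabled
zfs create -o compression=lz4 tank/projects
# HAMMER: a master PFS, made reachable through a symlink
hammer pfs-master /data/pfs/projects
ln -s /data/pfs/projects /projects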

Data Protection

ZFS has copy-on-write, checksums, and consistency. Depending on the type of the pool, multiple HDD failures are permitted; the RAID-Z3 discipline allows a pool to remain fully functional even when 3 HDDs are dead. Depending on the HBA, one could theoretically swap an HDD on a server which is up and running. ZFS has the ability to self-heal. In the past, IIRC, the FreeBSD version of ZFS did not support hot-spare HDDs; I am not sure if things have changed. I personally have the luxury of taking my server down to replace a failed HDD. That is also safer, because if you remove the wrong HDD you can shut the server down again and put the HDD back; nothing bad will happen to the ZFS pool (unlike Linux software RAID, which would not survive such surgery). ZFS performs continuous integrity checking and automatic repair.
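
The day-to-day commands for integrity checking and disk replacement are simple; something like this (device name hypothetical):

# walk the whole pool and verify every checksum
zpool scrub tank
# report only pools that have problems
zpool status -x
# replace a failed drive in place
zpool replace tank da3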

We should also talk about encrypting data. FreeBSD supports GELI full disk encryption when creating ZFS volumes. Using GELI is beyond the scope of this document.

I should also write something about the log (ZIL) and the L2ARC, and in particular address the use of ZFS on SSDs.
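
For completeness, adding a dedicated log device and an L2ARC cache device to an existing pool is a one-liner each; a sketch with hypothetical SSD partitions:

# mirrored SLOG for the ZFS intent log
zpool add tank log mirror ada1p1 ada2p1
# single SSD as an L2ARC cache device
zpool add tank cache ada3p1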

HAMMER is supposed to sit on top of hardware RAID. Theoretically, with a fully supported Areca or LSI card, DragonFly BSD should be able to tolerate a 2-HDD failure. We should be able to have a hot-spare drive and to replace a failed HDD while the server is running; the hardware RAID is supposed to heal after it. I am not sure how that will work with HAMMER. I have lots of experience with Linux XFS on top of hardware RAID cards, and things work as advertised. Another interesting question is the ability of HAMMER to self-heal in the case of a damaged file system. One could think of a hardware RAID with a dead drive as a partially degraded volume: what happens once the RAID is healed? Will HAMMER self-heal and expand onto the replaced HDD? I am not aware of such a capability. On the other hand, I think continuous integrity checking in HAMMER is on par with ZFS.

DragonFly has a device-mapper target called dm_target_crypt (compatible with Linux dm-crypt) that provides transparent disk encryption. It makes the best use of available cryptographic hardware, as well as multi-processor software crypto. DragonFly fully supports LUKS (cryptsetup) and TrueCrypt as disk-encryption methods. tcplay is a free (BSD-licensed), 100% compatible TrueCrypt implementation built on dm_target_crypt.

DragonFly features swapcache(8), managed SSD support. This DragonFly feature allows SSD-configured swap to also be used to cache clean filesystem data and metadata. The feature is carefully managed to maximize the write endurance of the SSD. Swapcache is typically used to reduce or remove the seek overhead related to managing filesystems with a large number of discrete inodes. DragonFly's swap subsystem also supports much larger than normal swap partitions: 32-bit systems support 32 GB of swap by default, while 64-bit systems support up to 512 GB of swap by default.
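
If I remember correctly, swapcache is driven by a handful of sysctls; enabling caching of both data and metadata should look roughly like this (check swapcache(8) before copying):

# /etc/sysctl.conf on DragonFly
vm.swapcache.data_enable=1
vm.swapcache.meta_enable=1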


Journaling

Contrary to popular opinion in the Linux community, ZFS and HAMMER are the only existing file systems which support journaling. What is journaling? You accidentally delete a file or a whole directory; you want to be able to pull that file/directory back from a journal. Even better: suppose you alter a file in an undesirable fashion. It would be nice to revert the file to its original state. One can think of journaling as a version control system built into the file system.

ZFS supports journaling via periodic snapshots, typically done as cron jobs. There is a multitude of tools in FreeBSD ports which can be used to take snapshots; I personally like sysutils/zfsnap, but people might like others. If you delete a file/dir before a snapshot is taken, too bad: you will not recover your file. In my lab we take snapshots every 3 hours during work days.
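
A crontab entry for that kind of 3-hour workday schedule could look roughly like this, using plain zfs snapshot instead of sysutils/zfsnap (note the escaped % signs that crontab requires):

# /etc/crontab: recursive snapshot every 3 hours, Monday through Friday
0 */3 * * 1-5 root zfs snapshot -r tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)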

HAMMER also supports snapshots. The default installation takes snapshots via the daily periodic scripts and keeps them in /var/pfs for sixty days. On top of that, HAMMER supports fine-grained journaling via its history. That is absolutely the killer feature of HAMMER. HAMMER history is a fully functional version control system built into the file system. One can use the Slider port on DragonFly as a front end to the history. You have to see it to believe it: nothing is ever lost.
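
To give a flavor of it, inspecting and recovering an old version of a file from HAMMER history goes something like this (the file name and transaction ID are made up; see hammer(8) and undo(1) for the details):

# list the recorded versions of a file
hammer history /data/notes.txt
# diff the current content against the last good version
undo -d /data/notes.txt
# extract a specific historical version to another file
undo -o /tmp/notes.old -t 0x00000001061a8ba0 /data/notes.txt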

I should mention that ZFS and HAMMER journaling are both NFS- and Samba-aware, which in practical terms means that you can continue to use your Windblows or OpenBSD desktop (as in my case) and still have journaling. One should mention, though, that the DragonFly people have given up on NFSv4; on the other hand, their implementation of NFSv3 seems very robust and is the fastest I am aware of.


Backup and recovery

One typically uses ZFS replication to back up ZFS pools. Replication is of course network aware. It is done in deltas and is extremely efficient. One can deduplicate blocks on the fly during ZFS replication, and one can also use additional file-system-level compression when sending the deltas. Remote replicas of the file system are fully writable. Note, however, that snapshots are needed before you can replicate your system. Multiple targets are allowed. Remote replicas are fully functional remote clones.
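
The delta-based replication described above boils down to zfs send/receive piped over the network; a minimal sketch (host and dataset names invented):

# initial full copy
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backup1 zfs receive backup/data
# later: send only the delta between two snapshots
zfs snapshot tank/data@now
zfs send -i tank/data@base tank/data@now | ssh backup1 zfs receive backup/data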

HAMMER uses hammer mirror-stream for backups. It is network aware. One can have multiple targets (PFS slaves); those are not writable. Note that a slave PFS can be promoted to master. However, one has to be aware of a problem with the time: only t-time is preserved. PFS slaves are clones, but they are not fully functional until promoted into masters.
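
A sketch of that workflow (paths and host name are hypothetical): mirror a master PFS to a remote slave, and promote the slave only if the master is lost:

# one-shot incremental copy to a remote slave PFS
hammer mirror-copy /data/pfs/projects backup1:/backup/pfs/projects
# or keep mirroring continuously
hammer mirror-stream /data/pfs/projects backup1:/backup/pfs/projects
# promote the slave to a master when disaster strikes
hammer pfs-upgrade /backup/pfs/projects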

Inquiry

Making an inquiry about a ZFS pool or a HAMMER file system is very easy, so both systems are enterprise level in this respect. For example:

zpool status

or

hammer pfs-status /data


Monitoring


FreeBSD has enterprise-level ZFS monitoring. IPMI, SNMP, and S.M.A.R.T. all work as expected. Tools like Nagios or collectd have plugins for ZFS monitoring, even for things like the L2ARC.
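
Even the built-in bits go a long way; for example, periodic(8) and smartd can be told to watch the pool and the disks with something like this:

# /etc/periodic.conf: include pool health in the daily status mail
daily_status_zfs_enable="YES"
# /usr/local/etc/smartd.conf: monitor a disk and mail root on trouble
/dev/da0 -a -m root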

Monitoring on DragonFly BSD is challenging, to say the least. I was appalled that net-mgmt/collectd5 fails to compile on DragonFly.


Alerting


(to be written)


Miscellaneous remarks

It is possible to use ZFS boot environments. One could use a ZFS mirror for the root partition. sysutils/beadm is a killer feature of FreeBSD and ZFS: it allows one to roll back to the pre-update/pre-upgrade, fully functional version of the OS in case something goes wrong.
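
The beadm workflow is roughly this (the boot environment name is invented):

# snapshot the running system into a new boot environment
beadm create pre-upgrade
# ... perform the upgrade; if it goes wrong, roll back:
beadm activate pre-upgrade
shutdown -r now
# list all boot environments and see which one is active
beadm list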

DragonFly BSD uses UFS for /boot, which is typically less than 1 GB; the rest of the system is HAMMER. The DragonFly installer doesn't support installation onto PFSs (master/slave) or, for that matter, onto a pair of disks. Personally, I hold the view that a large file server running DragonFly will have the OS installed on a small SSD and use hardware RAID or plain physical drives for data.

It is possible to use a ZFS volume as an iSCSI target. I have no clue what the state of iSCSI support on DragonFly BSD is; I have not seen any evidence of iSCSI support in the HAMMER man pages.

One of my favorite FreeBSD features is jails. Tools such as sysutils/iocage enable great integration of jails and ZFS pools. Taking a hot ZFS snapshot of a jail and cloning it remotely is really cool. A similar tool is in the works for bhyve, which will be really cool.
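
Because every jail lives in its own dataset, the snapshot-and-clone trick reduces to plain ZFS commands; a hypothetical sketch (the dataset layout depends on how iocage is set up):

# snapshot a jail's dataset before touching it
zfs snapshot zroot/iocage/jails/www@pre-update
# replicate the jail to another host
zfs send zroot/iocage/jails/www@pre-update | ssh host2 zfs receive zroot/iocage/jails/www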

The DragonFly jail infrastructure has not been touched for a long time. I am not aware of DragonFly jails being able to take advantage of HAMMER.


Final Remarks

ZFS deduplication seems better and more stable than HAMMER deduplication; on the other hand, HAMMER boasts offline deduplication. ZFS deduplication requires tons of ECC RAM.

HAMMER also likes ECC, like anything else intended to run 24/7/365. However, it is a great choice for cash-strapped people like me.

HAMMER is a 64-bit file system while ZFS is a 128-bit file system. In practical terms, both systems should be used on 64-bit machines only.

ZFS is encumbered by the CDDL license and by the fact that Oracle is the ultimate gatekeeper of the technology. For example, native ZFS encryption is possible only on the Oracle versions of ZFS. HAMMER is a BSD-licensed file system.

ZFS is a no-brainer for large data centers (a couple hundred servers). Actually, everything considered, DragonFly and HAMMER are not usable even in a small shop like mine (300-400 TB of data on a handful of file servers).

For home users, in particular those who have no more than 2-3 TB of data, having the data on a pair of PFS mirrors is very tempting and probably much more cost-effective than a similar FreeBSD setup. One should be mindful of the fact that ZFS, regardless of the number of HDDs, requires (or at least many people recommend) at least 16 GB of RAM. 16 GB is not that much, and most people will be OK with 8 GB or even less, but a similar DragonFly rig with 2 GB of RAM will probably outperform a FreeBSD file server.

I would like to see a project like FreeNAS focusing on DragonFly and HAMMER, though I think such a project is unlikely before the HAMMER2 release and full code stabilization of the DragonFly base. HAMMER2 looks like a radically new, advanced file system. It will be the first fully distributed file system: in practical terms, data will be spread over multiple master/slave PFSs in different physical locations connected over the network. The system will be able to self-heal even if one of those physical locations completely disappears from the face of the Earth.
 
Contrary to popular opinion in the Linux community, ZFS and HAMMER are the only existing file systems which support journaling. What is journaling? You accidentally delete a file or a whole directory; you want to be able to pull that file/directory back from a journal.
I think your description of a journal is completely different from the rest of the computer industry's. A journal is purely a log of in-flight changes being made to a file system so it can be quickly made consistent after an unclean mount. It has nothing to do with maintaining versions or history. This is more akin to the ZIL in ZFS, although ZFS should never be inconsistent on disk, even without replaying the ZIL. I'm intrigued how HAMMER handles consistency after a crash; apparently it's available immediately without fsck, but as far as I'm aware it (HAMMER1) isn't CoW?

Of course there's also Btrfs, which has ZFS-style snapshots. I'm no fan of it, but I believe it's fairly commonly used in Linux distributions now. It may even be the default in one or two now?

ZFS replication actually isn't network aware (at least it doesn't seem to be to me). Fortunately it ties in nicely with the UNIX 'many simple tools' ideology and allows you to chain its output into nc/ssh/whatever in order to get the data stream to another system.

I should mention that ZFS and HAMMER journaling are both NFS- and Samba-aware, which in practical terms means that you can continue to use your Windblows or OpenBSD desktop (as in my case) and still have journaling.
I'm not sure what you mean here by the journaling being NFS/Samba aware? I thought you were going to mention the ability to tie ZFS snapshots in with the Windows/SMB "Previous Versions" feature (which is a bit finicky but pretty cool nonetheless).

Lastly, I'm no file system expert, especially with distributed file systems. In fact I've never used a distributed file system, so I'm not in any way trying to say you're wrong, but I'm genuinely intrigued: what will make HAMMER2 the first fully distributed file system, compared to stuff that already sells itself as fully distributed, like Ceph or GlusterFS?
 
HAMMER is pretty fascinating and I wish it were available in FreeBSD as an option alongside ZFS. I also wish it were available across all the BSDs. I'm not sure I understand why NetBSD, for example, is working on a ZFS port when they are really in the embedded space... where HAMMER would excel.

The two file systems shouldn't be seen as competing against one another, but as very much complementary. They are different. HAMMER1 does employ an interesting set of features to avoid corruption altogether: it uses CRCs, REDO, and UNDO. When you mount a disk and it reports a CRC failure, chances are your hardware is faulty, rather than that a crash or bad shutdown borked your data.

For a home user (and even SMEs) with perhaps tens of terabytes of data, RAID isn't all that it's cracked up to be. It makes your setup more complex, introduces additional points of failure, uses more electricity, and generates excess heat. Enterprise-quality large disks (6 TB+) with network mirroring and backups are plenty secure for the average shop.

HAMMER2 will completely change the way we think about data redundancy by way of a type of networked RAID: live rebuilding of data on a failed disk using one or more networked mirrors from anywhere in the world! Or the continued functioning of a file server with a dead disk in one location, because there are two more master mirrors in different locations that kick in to keep serving data... this is light-years ahead of ZFS, which only really does single-system, single-machine redundancy and is not network aware.

I like both ZFS and HAMMER and think that the two should be seen as complementary tools in the tool kit.
 
I think your description of a journal is completely different from the rest of the computer industry's. A journal is purely a log of in-flight changes being made to a file system so it can be quickly made consistent after an unclean mount. It has nothing to do with maintaining versions or history.
You are right. According to that definition, WAPBL is a journaling file system, even though both of us know you can't revert to an older file version on NetBSD. However, I am using the term "journal" in the context of different backup strategies. Perhaps the correct way to describe it is "built-in version control". Even that is an understatement, because HAMMER history is also used to protect you from file corruption: if a file gets corrupted, you just pull an older version from the history and that one will be OK.

Of course there's also Btrfs, which has ZFS-style snapshots. I'm no fan of it, but I believe it's fairly commonly used in Linux distributions now. It may even be the default in one or two now?
I am tired of hearing about Hurd and Btrfs. I work with Linux every day, and RHEL ships the old SGI XFS. Everything else is a crap-shoot. There are people in another building swearing by Ubuntu; I think they are stuck with Ext4. Frankly, based on my personal experience, HAMMER2 is far more complete than Btrfs. Get a snapshot of DragonFly and try it for yourself.

ZFS replication actually isn't network aware (at least it doesn't seem to be to me). Fortunately it ties in nicely with the UNIX 'many simple tools' ideology and allows you to chain its output into nc/ssh/whatever in order to get the data stream to another system.
You just send the stream through SSH and unpack it on the other side.

I'm not sure what you mean here by the journaling being NFS/Samba aware? I thought you were going to mention the ability to tie ZFS snapshots in with the Windows/SMB "Previous Versions" feature (which is a bit finicky but pretty cool nonetheless).
In my lab, the home directories of my users are physically located on a file server running FreeBSD with ZFS. Those home directories are mounted on the computing nodes via NFS. When one of our members accidentally deletes a file or does something stupid with it, they send me an e-mail and I pull the older version of their file from .zfs/snapshot. I am not using Samba, but I know people who have, and the same is true if you mount a home directory on a Windows machine.
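
Recovery in that setup is just a copy out of the hidden snapshot directory, e.g. (the snapshot and file names are invented):

# every snapshot is browsable read-only under .zfs/snapshot
ls /tank/home/.zfs/snapshot/
cp /tank/home/.zfs/snapshot/auto-20160510-1200/alice/thesis.tex /tank/home/alice/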

In the case of HAMMER, you have the whole history, not just snapshots. Check out this picture

http://leaf.dragonflybsd.org/~sgeorge/PICs/Samba-hammer-snapshot-bkp-1.png

and the description of what is actually happening:

https://www.dragonflybsd.org/docs/r...s__44___linux__44___bsd_and_mac_os_x_clients/


Lastly, I'm no file system expert, especially with distributed file systems. In fact I've never used a distributed file system, so I'm not in any way trying to say you're wrong, but I'm genuinely intrigued: what will make HAMMER2 the first fully distributed file system, compared to stuff that already sells itself as fully distributed, like Ceph or GlusterFS?
I probably should have left that sentence out, as I knew it was going to play wrong with the PR offices of people who are interested more in sales than in innovation.
 
HAMMER is pretty fascinating and I wish it were available in FreeBSD as an option alongside ZFS. I also wish it were available across all the BSDs. I'm not sure I understand why NetBSD, for example, is working on a ZFS port when they are really in the embedded space... where HAMMER would excel.
NetBSD guys don't work on anything. They started "porting" ZFS in 2007. They also threw some hot air about HAMMER. Nobody in the NetBSD camp is interested in those technologies. For embedded they have WAPBL, which is really cool shit, and I wish it were available on vanilla OpenBSD (you have it on Bitrig). This is what I think about NetBSD.

http://daemonforums.org/showthread.php?t=8810



For a home user (and even SMEs) with perhaps tens of terabytes of data, RAID isn't all that it's cracked up to be. It makes your setup more complex, introduces additional points of failure, uses more electricity, and generates excess heat. Enterprise-quality large disks (6 TB+) with network mirroring and backups are plenty secure for the average shop.
+1

I looked up and down at both the ZFS RAID-Z2 discipline and hardware RAID 6 for my new home file server, and I just said to myself: do you really need 6x3 TB of HDDs and all that mumbo jumbo to store 100 GB worth of kids' pictures and videos? Let's be real: everything else on my computer is replaceable.

I like both ZFS and HAMMER and think that the two should be seen as complementary tools in the tool kit.
+1
 
Meh, I'd rather see UFS extended with such features, or through another storage protocol for distributed computing, than fuss with another re-invention. That's just me though.

Like XFS/CXFS, for example.
 
For home users, in particular those who have no more than 2-3 TB of data, having the data on a pair of PFS mirrors is very tempting and probably much more cost-effective than a similar FreeBSD setup. One should be mindful of the fact that ZFS, regardless of the number of HDDs, requires (or at least many people recommend) at least 16 GB of RAM. 16 GB is not that much, and most people will be OK with 8 GB or even less, but a similar DragonFly rig with 2 GB of RAM will probably outperform a FreeBSD file server.
16 GB of RAM is not needed for ZFS; I have used 512 MB on FreeBSD for a 2 TB ZFS mirror pool for about two years... but if you want to use deduplication, then you can still keep that 512 MB, but you have to add at least one L2ARC device (preferably an SSD) for keeping the hashes in RAM and/or the L2ARC; of course, you may increase RAM instead.
 
16 GB of RAM is not needed for ZFS; I have used 512 MB on FreeBSD for a 2 TB ZFS mirror pool for about two years... but if you want to use deduplication, then you can still keep that 512 MB, but you have to add at least one L2ARC device (preferably an SSD) for keeping the hashes in RAM and/or the L2ARC; of course, you may increase RAM instead.
I don't know about the prices of SSDs and ECC RAM in Poland, but over here the difference between 16 GB of ECC RAM and a 32 or 64 GB SSD is almost negligible. But that is beside the point. I respect your knowledge and contribution to the FreeBSD community, but I hold the view that telling people that ZFS can be run on old hardware with mediocre specs is disingenuous at best and borderline a lie. Who cares, one might ask?

Recently, a gentleman who is doing humanitarian work in Tanzania, Africa, wrote an e-mail to the DragonFly mailing list after being directed there by a few OpenBSD developers. He is trying to put together a network of electronic libraries in a part of Tanzania where the power grid is barely existing and not very reliable. Could you guess what OS his computers are running now and what file system they use?
 
When to use ZFS is a subjective matter of preference, needs, and environment. It can work just fine on lower-tier consumer hardware, to a point. I wouldn't use it on a netbook with 1 GB of RAM, but you could, and with some tuning it could work just fine, depending on what the user needs or wants. Again, whether to use it is a matter of personal preference and the context of your environment.
 
I don't know about the prices of SSDs and ECC RAM in Poland, but over here the difference between 16 GB of ECC RAM and a 32 or 64 GB SSD is almost negligible. But that is beside the point. I respect your knowledge and contribution to the FreeBSD community, but I hold the view that telling people that ZFS can be run on old hardware with mediocre specs is disingenuous at best and borderline a lie. Who cares, one might ask?
Telling people that ZFS requires 16 GB of RAM just to work is a lie, as simple as that.

Recently, a gentleman who is doing humanitarian work in Tanzania, Africa, wrote an e-mail to the DragonFly mailing list after being directed there by a few OpenBSD developers. He is trying to put together a network of electronic libraries in a part of Tanzania where the power grid is barely existing and not very reliable. Could you guess what OS his computers are running now and what file system they use?
Could you elaborate more on that? It's quite interesting.
 
hammer-time.jpg

Damn skippy!
 
…The main issue I'm seeing with ZFS is that grub2 doesn't play well with it. …

Amongst recent improvements: http://web.archive.org/web/20160423...tes/CURRENT/relnotes/article.html#boot-loader – the boot loader has been updated to support entering the GELI passphrase before loading the kernel.

… FreeBSD supports GELI full disk encryption when creating ZFS volumes. Using GELI is beyond the scope of this document. …

Towards multiplatform ZFS encryption: https://github.com/zfsonlinux/zfs/pull/4329
 
I remember choosing ZFS in the Antergos installer and then it would crash because 2 GB of RAM wasn't enough.

I wonder if this is an Antergos issue or something else? I've got a number of FreeBSD VMs running ZFS with no more than 2 GB of RAM for testing. One of them has a few hundred GB of storage using ZFS mirrors, which I've used for some stress testing; it's not performant, but it's stable...
 
I love ZFS, but to be honest I always manually limit the ARC size to guarantee a reasonable amount of memory left over for the system. I haven't kept up with whether any work has been done to help the memory issues, but I've had far too many crashes in the past due to memory starvation. (I have been using ZFS since v15, so I was a pretty early adopter.)
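
For reference, the usual way to do that is a one-line cap in /boot/loader.conf; the value is whatever your workload can spare:

# /boot/loader.conf: cap the ARC at 1 GB, leaving the rest for the system
vfs.zfs.arc_max="1G"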
 
I remember choosing ZFS in the Antergos installer and then it would crash because 2 GB of RAM wasn't enough.
I've run ZFS on 2 GB and 4 GB machines (of which only 3 GB were used, because they are 32-bit), and it works just fine with very minor tuning (one or two variables). Right now I have 4+3+3 TB disks in my server at home, with only 3 GB of memory.

The stories of ZFS needing oodles of memory might be true for high-performance production systems (I can't verify that; the performance of my hardware doesn't allow it), but basic functioning of ZFS doesn't need lots of memory.
 
I've run ZFS on 2 GB and 4 GB machines (of which only 3 GB were used, because they are 32-bit), and it works just fine with very minor tuning (one or two variables). Right now I have 4+3+3 TB disks in my server at home, with only 3 GB of memory.

The stories of ZFS needing oodles of memory might be true for high-performance production systems (I can't verify that; the performance of my hardware doesn't allow it), but basic functioning of ZFS doesn't need lots of memory.
ZFS is a 128-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we should all refrain from posting bad advice.
 
ZFS (just like HAMMER1 and HAMMER2) is a 64-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we should all refrain from posting bad advice.
ZFS is 128-bit filesystem ...
 
ZFS is a 128-bit file system and it should not be used on a 32-bit machine/OS. Your personal experience is irrelevant. This is a public forum and we should all refrain from posting bad advice.
ZFS seems to run excellently on my 32-bit machine. Matter of fact, I've run lots of 64-bit file systems and a different 128-bit file system on 32-bit machines. YMMV.
 