ZFS vs HAMMER

Doesn't seem like it. HAMMER1 was developed for a while and then effectively dropped; I'm not sure exactly why (I don't know if it's still maintained as the current stable version?). After doing a bit more reading, HAMMER2 seems to now be part of recent DragonFly releases, but is not ready to be used.

So apart from HAMMER having some interesting features and being more lightweight, we may as well be comparing ZFS to the next big file system: one with a big list of features that doesn't actually exist yet.

The only thing that really goes in HAMMER's favour for me is if it's lightweight enough to be a true drop-in replacement for UFS as a general-purpose file system. ZFS definitely still struggles with databases, and UFS is still a better choice in a lot of cases unless you really want ZFS features. Of course, if it is only ever available on DragonFly, I'll probably never get round to using it.
 
The point about ECC memory is, in my humble opinion, a second-order concern. When you host several terabytes of data, you will need a good deal of memory, no matter what file system or operating system you use. Since memory is likely to develop flipped bits, as was already cited, you need ECC memory when you have a lot of memory holding important data. When the data is only short-lived, for example if you are mainly doing image manipulation in a batch job, then you may go without it. Having cached file system metadata develop bitrot is bad, no matter what file system it is. That may be the reason why ZFS is so often mentioned together with ECC memory. The potentially long uptimes of such a machine also come into play here.
 
Doesn't seem like it. HAMMER1 was developed for a while and then effectively dropped; I'm not sure exactly why (I don't know if it's still maintained as the current stable version?). After doing a bit more reading, HAMMER2 seems to now be part of recent DragonFly releases, but is not ready to be used.
That is FUD! HAMMER1 has been feature complete and stable since 2008. It is the default file system of DF OS and is used for the root partition. No new features are planned/possible due to the original design limitations. Now, one may argue that DF is a research OS which is pretty volatile for enterprise environments, and that very few people are using it. That is true, but there is nothing unstable about HAMMER1. HAMMER1 is as stable a product as ZFS is (most likely not as well tested, due to the smaller user base). HAMMER2 has been in the works for 2 years and is not even close to being ready for testing, let alone for anything else.
 
What are you using as an alternative to ZFS?
Don't get me wrong, I never had any problems with ZFS (running OpenSXCE), apart from it never letting go of cache RAM.

I have been relying on a cheap Buffalo NAS since the last near-disaster 3 years ago.
A recent near-disaster with the NAS has kicked me in the rear to build a better solution for long-term storage.
I'm currently searching the internet, checking what the alternatives are; I just chipped in with my experience of ZFS.
 
Sorry, I took that mainly from your own words:
HAMMER 1 is DEAD END.

Is v2 supposed to replace the original, or is v1 being kept as a complete, stable file system? How much effort is being put into ongoing support for v1 now that the devs have moved their attention onto a completely new version?

I may have taken this thread off topic slightly. My original concern was just that there were a lot of unfounded plus points for HAMMER ("amazing performance", "beats the pants off everything", etc.). Half the ZFS down points were also unfounded or irrelevant, and people attempting to rebut them were either asked for solid proof* or outright called liars.

*Fair enough but I don't see much proof for a lot of the HAMMER claims.
 
Well, as it's already off topic,

Every time I see this thread
Got that song stuck in my head
My my Dragonfly
Apple of Matt Dillon's Eye
As for me must confess
I use a lot more ZFS,
But I think it ain't no crime
if you say

STOP!
Hammer Time!

(OK, I'm done now--but I can't believe I'm the only one who keeps seeing the subject and thinking of You Can't Touch This).
 
Sorry, I took that mainly from your own words:
I got it. My bad. The fact that English is not my native tongue clearly shows. What I meant by "dead end" is that no new features are being planned for or added to HAMMER1, or are even possible, due to its B-tree design limitations. But this is also true, for example, of FFS on OpenBSD. HAMMER1 bugs are fixed regularly when found. Check it out for yourself:

http://gitweb.dragonflybsd.org/dragonfly.git

Is v2 supposed to replace the original, or is v1 being kept as a complete, stable file system? How much effort is being put into ongoing support for v1 now that the devs have moved their attention onto a completely new version?
I am not sure that anybody knows what the future holds for HAMMER1 once HAMMER2 gets released. HAMMER2 is meant to be a complete, stable, separate file system. It has a well-defined list of objectives.

http://leaf.dragonflybsd.org/mailarchive/users/2012-02/msg00020.html

The project is significantly behind its original schedule, so I don't want to think about HAMMER2 yet.
I recall vividly that Pawel Jakub Dawidek publicly expressed his doubts that Matt Dillon could pull off something like HAMMER1 and write a file system on his own, without corporate backing. He did, and I hope he will do it one more time with HAMMER2. Matt and I are about the same age, and definitely closer to the end of our lives than to the beginning. I learned C programming using his C compiler on the Amiga. He was 16 and I was 16. I really want him to pull this one off :)

I may have taken this thread off topic slightly. My original concern was just that there were a lot of unfounded plus points for HAMMER ("amazing performance", "beats the pants off everything", etc.). Half the ZFS down points were also unfounded or irrelevant, and people attempting to rebut them were either asked for solid proof* or outright called liars.

*Fair enough but I don't see much proof for a lot of the HAMMER claims.

I am by no stretch of the imagination a serious expert on file systems, and I doubt you will find such people hanging around here. I started the thread out of my personal frustration at the lack of any comparison between these two file systems, and in the hope that other users like myself who have been exposed to both file systems would fill in the missing pieces. This forum is probably the only place where you can find people who have been exposed to both file systems.

I have no idea why people took, for example, my statement that ZFS needs lots of RAM so hard. I have a hard time seeing why people have to defend ZFS so vigorously, or defend their choice to use ZFS. The coolest thing about HAMMER and DF is that it is a labor of love. Matt got rich during the dot-com boom. He and a handful of other like-minded guys are hacking on it in their spare time. As a curious person, I was always fascinated by what they were doing and tried using their labor of love at work. It didn't go quite the way I wanted the first time, but I am sure I will try again. Since my kids have to eat every day regardless of whether Monit works on DF or not, I use enterprise-tested technology, ZFS+FreeBSD, at my workplace. ZFS is pretty good, you know. It is better than playing with XFS and mdadm.

The DF guys are not selling anything; they are not trying to compete with Oracle, or with FreeBSD for that matter. People who think that FreeBSD and ZFS are the best thing since sliced bread should ignore this thread. People who work on large server farms probably should ignore this thread as well. People who take forum posts too seriously should probably ignore it too.

That leaves me with a target audience of like-minded geeks who have suffered cabin fever like myself, hopefully for different reasons than mine (I have had terrible bronchitis over the past 10 days, which prevented me from skiing with my children over the holidays).
 
You also need a really good Intel processor.

Absolutely, completely, and totally false, FUD, etc. ZFS runs reliably on any x86 processor, whether that be Intel, AMD, Via, or anybody else.
  • Linux implementation relies on FUSE.
No, it doesn't, and it hasn't in a long time. ZFS is available as a kernel module for Linux, and runs virtually the same on Linux as any other filesystem/volume manager. No FUSE required.
  • Well-known database degradation (not suitable for keeping SQL databases).
Not true. It requires some tuning, but ZFS runs SQL databases just fine.
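
To give one concrete example of that tuning: the usual starting point is matching the dataset's recordsize to the database page size. A minimal sketch with made-up pool and dataset names (PostgreSQL uses 8 KB pages; MySQL/InnoDB uses 16 KB), the optional properties being judgment calls rather than requirements:

# zfs create tank/pgdata
# zfs set recordsize=8K tank/pgdata          (match PostgreSQL's 8 KB page size)
# zfs set logbias=throughput tank/pgdata     (optional: favour throughput over ZIL latency for bulk writes)
# zfs set primarycache=metadata tank/pgdata  (optional: avoid double-caching data the DB already buffers)
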
  • No volume growing, at least in the FreeBSD version.
You can't add drives to an existing raidz vdev. But you can add storage space to a raidz vdev by replacing each drive in turn with a larger one, and then running # zpool online -e for each drive in the vdev, as sketched below. You can also add more raidz vdevs to a pool to increase the total amount of storage in the pool.
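
A rough sketch of the replace-in-turn approach (pool and device names are hypothetical; wait for each resilver to complete before touching the next disk):

# zpool replace tank ada1 ada5   (swap an old disk for a larger one, then wait for the resilver)
# zpool status tank              (confirm the resilver finished before replacing the next disk)
  ...repeat for each remaining disk in the vdev...
# zpool online -e tank ada5      (expand each device into the new space once all disks are replaced)

Setting # zpool set autoexpand=on tank beforehand achieves the final expansion step automatically.
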
  • Upstream is dead. I don't know about you, but OpenZFS doesn't inspire a lot of confidence in me.
Not even close. Upstream is very much alive, and patches and features are flowing bi-directionally between FreeBSD and OpenZFS. The mailing lists are quite active, and development is happening as we speak.
 
If OpenZFS is the future, what is their tier-one development platform? illumos, FreeBSD, or, God forbid, Linux, on which ZFS feels very awkward?

Illumos is the primary development platform for OpenZFS, and is the ultimate gatekeeper of "what is OpenZFS".

However, all stakeholders that support ZFS (Solaris derivatives, FreeBSD, Linux, Mac OS X, etc.) are free to develop and implement features. Then, once those features are stable, they submit them upstream, which makes them available to all downstream systems. It's not a perfect system, but it's working quite nicely. For example, there are a handful of features that were originally developed on Linux that are now part of OpenZFS upstream and have been imported into FreeBSD.

Solaris ZFS and OpenZFS are not related in any way other than that they both support the old-school ZFSv28 feature set. Once you enable feature flags (ZFSv5000), you are running OpenZFS and can ignore anything and everything Solaris/Oracle ZFS related.
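
As a quick way to see which side of that line a given pool is on (the pool name is hypothetical):

# zpool get version tank               (prints "-" once the pool uses feature flags instead of a legacy version)
# zpool upgrade -v                     (lists the legacy versions and feature flags this system supports)
# zpool get all tank | grep feature@   (shows which features are enabled or active on the pool)
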
 
Linux implementation relies on FUSE.
No, it doesn't, and it hasn't in a long time. ZFS is available as a kernel module for Linux, and runs virtually the same on Linux as any other filesystem/volume manager. No FUSE required.

No FUSE required, but ZFS on Linux lags behind 'Upstream' OpenZFS in features (in ZFS feature flags to be precise):
http://blog.vx.sk/archives/44-OpenZFS-Feature-Flags-Compatibility-Matrix.html

Also from https://twitter.com/ahl/status/543064559301300225:
"teaser: OpenZFS device removal has just landed in our repository; looking forward to seeing it upstreamed!"
 
Here is a short blog post about the 2014 OpenZFS Developer Summit. More specifically, it's about one speaker's thoughts on OpenZFS on Illumos versus Linux. It's a little off-topic for this thread, but I post it here because it shows (and links to information that shows) that OpenZFS is alive, well, and growing.
 
It looks like Solaris is dead and not marketed at all.
You post one flippant comment about OpenZFS being dead upstream, then go on and make another bizarre statement about Solaris being dead. The most distant galaxy, z8_GND_5296, is probably closer than these two absurdly incorrect comments.
 
The most distant known galaxy, z8_GND_5296, is probably closer than these two absurdly incorrect comments.
But nitpicking aside, I would like this thread to stay civilized (ahh, astronomy, I like it!). It contains valuable information and will likely continue to do so. So, in order to reduce the chance of this thread being closed for being closer to bar bragging than to what the forum rules allow, I would like all participants to calm down and carry on.

Okay?
 
After running DragonflyBSD/HAMMER1 for the past two months (and FreeBSD/ZFS for far longer) I have a couple of items to add to the list that might be beneficial to the discussion.

HAMMER1 (+)
  1. Native encryption using dm_target_crypt, tcplay and libdm
  2. No filesystem performance penalty when "pools"/file systems are >80% full
  3. ZFS's ZIL/L2ARC equivalent (swap-cache) does not need to be integrated into storage pools and can be removed, upgraded or resized at any time as your system needs change. Under ZFS you'll need to destroy your pool and restore from backup in order to accomplish this.
  4. Surprised no one mentioned this before: offline and online data block deduplication with very spartan RAM requirements (see the sketch after this list)
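
On point 4, a quick sketch of how dedup is driven from the hammer(8) utility, with a hypothetical /home HAMMER filesystem; the sysctl on the last line is, as I understand it from the documentation, how live/online dedup is switched on:

# hammer dedup-simulate /home     (estimate the space offline dedup would reclaim, without changing anything)
# hammer dedup /home              (run the actual offline deduplication pass)
# sysctl vfs.hammer.live_dedup=1  (enable live dedup for new writes; my assumption from the docs)
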
 
ZFS's ZIL/L2ARC equivalent (swap-cache) does not need to be integrated into storage pools and can be removed, upgraded or resized at any time as your system needs change. Under ZFS you'll need to destroy your pool and restore from backup in order to accomplish this.
The ZIL has been removable since zpool v28, which has been in FreeBSD for some years now.
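
For reference, attaching and later detaching a dedicated log device is a two-command affair these days (pool and device names hypothetical):

# zpool add tank log ada3     (attach a dedicated ZIL/SLOG device to an existing pool)
# zpool remove tank ada3      (detach it again; supported for log and cache devices since v28)
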
 
Ahh, fair enough. Thanks for the correction. I wasn't aware of this. But that brings me to another item then:

Let's say we build a NAS box consisting of 2 HDDs.

Using ZFS you can create a pool across both disks, or a separate pool on each single disk. Let's say you create a pool on one disk and add a ZIL to that pool, and then subsequently create a second pool on the remaining disk. It seems to me that you would only get the benefit of the ZIL while writing to the first pool (unless you have some sort of mirror set up).

Under HAMMER, regardless of how you arrange your disks, you get the benefits of swap-cache across all the disks/filesystems in the system, even if newer disks are added later on. This is because swapcache caches metadata and filesystem data for all filesystem transactions in the system, not just for selected ones it has been configured for. If I am wrong, I am willing to stand corrected, of course.
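
For context: on DragonFly, swapcache is configured system-wide through sysctls rather than per pool or filesystem. A rough sketch, going from my reading of swapcache(8) (the SSD device name is hypothetical):

# swapon /dev/da1s1b                  (add the SSD partition as swap space)
# sysctl vm.swapcache.meta_enable=1   (cache filesystem metadata on the SSD)
# sysctl vm.swapcache.data_enable=1   (optionally cache file data as well)
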
 
I don't think you are wrong, but the ZFS example you gave sounds like a really bad pool layout, especially when one of ZFS's main features is self-healing.
 
After running DragonflyBSD/HAMMER1 for the past two months (and FreeBSD/ZFS for far longer) I have a couple of items to add to the list that might be beneficial to the discussion.

HAMMER1 (+)

1) Native encryption using dm_target_crypt, tcplay and libdm

2) No filesystem performance penalty when "pools"/file systems are >80% full

3) ZFS's ZIL/L2ARC equivalent (swap-cache) does not need to be integrated into storage pools and can be removed, upgraded or resized at any time as your system needs change. Under ZFS you'll need to destroy your pool and restore from backup in order to accomplish this.

4) Surprised no one mentioned this before: offline and online data block deduplication with very spartan RAM requirements

That is a very informative post. Interestingly enough, I am using more and more ZFS and FreeBSD at work (right now I am running five big rigs and adding more). I am also becoming more and more familiar with ZFS gotchas. I played last week with ZFS replication to a remote server, and the thing is magical as long as you have the right hardware.
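
For anyone curious, the replication I played with is just snapshot-based zfs send piped over SSH; a minimal sketch with made-up pool, dataset, and host names:

# zfs snapshot tank/data@monday
# zfs send tank/data@monday | ssh backuphost zfs receive backup/data
# zfs send -i @monday tank/data@tuesday | ssh backuphost zfs receive backup/data   (incremental follow-up)
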

On the other hand, I am starting to realize that the whole ZFS vs HAMMER1 thread is a little ridiculous, as it really compares two different things. HAMMER1 is just a file system, while ZFS is much more than that (soft RAID + LVM + file system).

At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with a sufficient number of HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.
 
At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with a sufficient number of HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.
I have been interested in trying out DragonFlyBSD and HAMMER when I have more time. It seems to be a good midpoint between UFS and ZFS, from comments I've seen in this thread as well as what I've read elsewhere so far. I have to disagree with your assertion that ZFS has no use for a home user, though. You don't have to run RAID-Zn to get a benefit from ZFS. I use it simply for snapshots and data integrity on my desktop, using multiple mirror vdevs. I feel better copying my data to a backup server from a ZFS system. Backup copies are no good if the data is corrupted before transit to the backup target; ZFS can help mitigate that better than any other production-quality file system. Also keep in mind that users who run FreeBSD, or any BSD for that matter, as a desktop/workstation are not your general desktop users a large part of the time. ;)
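
For what it's worth, that kind of home setup needs nothing exotic; a minimal sketch with hypothetical device and dataset names:

# zpool create tank mirror ada1 ada2   (two-disk mirror: checksums plus self-healing reads)
# zfs create tank/home
# zfs snapshot tank/home@2015-01-05    (cheap point-in-time copy; revert with zfs rollback)
# zpool scrub tank                     (periodically verify all blocks and repair from the mirror copy)
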
 
At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with a sufficient number of HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.

Interesting point. But there are lots of home users that stand to benefit from using ZFS at home. Second-hand ProLiant MicroServers, for example, can be had for less than $200, and that is with ECC RAM. Disks are cheap, so it's easy to build a DIY NAS that takes advantage of most of ZFS's features, such as RAID-Z2 (probably not deduplication, though), for less than $500. ZFS scales as well for the home or small-business user as it does for big enterprise.
 
Ahh, fair enough. Thanks for the correction. I wasn't aware of this. But that brings me to another item then:

Let's say we build a NAS box consisting of 2 HDDs.

Using ZFS you can create a pool across both disks, or a separate pool on each single disk. Let's say you create a pool on one disk and add a ZIL to that pool, and then subsequently create a second pool on the remaining disk. It seems to me that you would only get the benefit of the ZIL while writing to the first pool (unless you have some sort of mirror set up).

Partition the SSD in two, and then assign each partition to the separate pools as LOG devices.

Same for CACHE devices.

You can't share individual vdevs between pools (which makes sense). But there's nothing stopping you from sharing physical disks between pools.
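
A rough sketch of that layout, assuming a single SSD at ada2 and two pools named pool1 and pool2 (all names hypothetical):

# gpart create -s gpt ada2
# gpart add -t freebsd-zfs -s 8G ada2   (creates ada2p1)
# gpart add -t freebsd-zfs -s 8G ada2   (creates ada2p2)
# zpool add pool1 log ada2p1
# zpool add pool2 log ada2p2

The same pattern works for CACHE devices (# zpool add pool1 cache ...).
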
 
That is a very informative post. Interestingly enough, I am using more and more ZFS and FreeBSD at work (right now I am running five big rigs and adding more). I am also becoming more and more familiar with ZFS gotchas. I played last week with ZFS replication to a remote server, and the thing is magical as long as you have the right hardware.

On the other hand, I am starting to realize that the whole ZFS vs HAMMER1 thread is a little ridiculous, as it really compares two different things. HAMMER1 is just a file system, while ZFS is much more than that (soft RAID + LVM + file system).

At this point I fail to see how ZFS can be of any use to a typical home user. At a time when you can get a 2TB HDD for about $80, I don't understand why anybody at home would put a couple of thousand dollars into building a rig with a sufficient number of HDDs, of high enough quality, to properly run RAID-Z2 or RAID-Z3 and get the full benefit of ZFS. For a home user, running 2x2TB HAMMER1 as a mirror, with the HDDs connected to SATA controllers, is enough for all use cases I can think of.

Why would you need to spend thousands of dollars to make ZFS worthwhile?

My home server runs ZFS. Originally with 4x 160 GB IDE drives in raidz1, then with 4x 250 GB SATA drives in raidz1, and currently with 4x 500 GB SATA drives in two mirror vdevs. It works wonderfully as a home media server running Plex, storing our photos and files, and centralising resources (disk, printers, accounts, etc.). Over the years, I may have spent over $1000 CDN on the server, but that's going through 3 different motherboards/CPUs/RAM, multiple disks, multiple controllers, etc.
 
I only see ProLiant MicroServers at around $400, but I saw this new file server case for $80:

http://www.ebay.com/itm/1U-2x5-25-1...635?pt=LH_DefaultDomain_0&hash=item5b0e5d1aab

Definitely a good start for a DragonFly home file server. Adding 2x2TB HDDs, a CPU, and RAM should come to under $300 all together.

You'd be surprised at what you can get at state auctions!

A DIY box that is supported under Dragonfly has been put together by Matt Dillon here - http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=19110647
 