DragonFly 5.2 is released!

https://marc.info/?l=dragonfly-users&m=150816781917465&w=2

This release features the HAMMER2 file system as a technology preview, enabled in the default GENERIC kernel.

Code:
dfly# uname -a
DragonFly dfly.bagdala2.net 5.0-RELEASE DragonFly v5.0.0.2.ga9d62-RELEASE #10: Tue Oct 17 07:25:14 EDT 2017     root@dfly.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64

Code:
dfly# gpt -v create /dev/da3
gpt create: /dev/da3: mediasize=500107862016; sectorsize=512; blocks=976773168
dfly# gpt add -b 34 -i 0 -s 990 /dev/da3
dfly# gpt add -b 1024 -i 1 -s * /dev/da3
dfly# gpt -v show /dev/da3
gpt show: /dev/da3: mediasize=500107862016; sectorsize=512; blocks=976773168
      start       size  index  contents
          0          1      -  PMBR
          1          1      -  Pri GPT header
          2         32      -  Pri GPT table
         34        990      0  GPT part - DragonFly Label64
       1024  976772111      1  GPT part - DragonFly Label64
  976773135         32      -  Sec GPT table
  976773167          1      -  Sec GPT header

dfly# disklabel64 -rw da3s1 auto
dfly# disklabel64 -e da3s1

dfly# newfs_hammer2  /dev/da3s1e
newfs_hammer2: WARNING: HAMMER2 VFS not loaded, cannot get version info.
Using version 1
Volume /dev/da3s1e     size 465.76GB
---------------------------------------------
version:          1
total-size:       465.76GB (500103643136 bytes)
boot-area-size:    64.00MB
aux-area-size:    256.00MB
topo-reserved:      1.82GB
free-space:       463.62GB
vol-fsid:         442a0c12-b33a-11e7-a8ce-b9aeed3cce35
sup-clid:         442a0c25-b33a-11e7-a8ce-b9aeed3cce35
sup-fsid:         442a0c30-b33a-11e7-a8ce-b9aeed3cce35
PFS "LOCAL"
    clid 4430a88b-b33a-11e7-a8ce-b9aeed3cce35
    fsid 4430a89f-b33a-11e7-a8ce-b9aeed3cce35
PFS "DATA"
    clid 4430a8da-b33a-11e7-a8ce-b9aeed3cce35
    fsid 4430a8ea-b33a-11e7-a8ce-b9aeed3cce35

Code:
dfly# mount_hammer2 /dev/da3s1e /test-hammer2
dfly# mount
ROOT on / (hammer, noatime, local)
devfs on /dev (devfs, nosymfollow, local)
/dev/serno/B620550018.s1a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /home (null, local)
/pfs/@@-1:00004 on /usr/obj (null, local)
/pfs/@@-1:00005 on /var/crash (null, local)
/pfs/@@-1:00006 on /var/tmp (null, local)
procfs on /proc (procfs, local)
DATA on /data (hammer, noatime, local)
BACKUP on /backup (hammer, noatime, local)
/data/pfs/@@-1:00001 on /data/backups (null, local)
/data/pfs/@@-1:00002 on /data/nfs (null, NFS exported, local)
/dev/da3s1e@DATA on /test-hammer2 (hammer2, local)
 
I can confirm that i915 correctly attaches to a Kaby Lake i5 and that efisetup successfully creates a bootable UEFI system with a GPT table on a Kingston M.2 PCIe SSD. Currently tri-booting with Slackware and FreeBSD.
 
DragonFly works fine with a Haswell, including EFI. The iGPU also worked without a hitch, including HDMI audio.

One thing I missed, though, was RAID support in HAMMER2.
 
One thing I missed, though, was RAID support in HAMMER2.

HAMMER2 is ultimately designed to operate as a clustered filesystem. So unlike, say, ZFS, which does local redundancy with block devices within one server, HAMMER2 is designed to negotiate transactions between, and offer redundancy via, block devices located across multiple servers, with automatic failover. This has been referred to as a redundant array of inexpensive servers (RAIS), or networked RAID.

In a sense, an entire server can bite the dust but your files will be safely contained on two or more other servers in different locations. There is also a form of local redundancy planned, using a copies=n feature: effectively, if you have two block devices within a single server, you can have your data copied in a sort of mirrored fashion between them. If one block device dies, the other continues providing access to your data. It can also be used in round-robin fashion to increase throughput. At least that is what I understand from the HAMMER2 design document.
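
Just to make the copies=n idea concrete, here is a tiny Python sketch of mirrored writes with round-robin reads. This is purely an illustration of the concept described above; the MirrorSet class and its methods are invented for the example and have nothing to do with HAMMER2's actual implementation.

Code:
# Conceptual sketch of "copies=n"-style local redundancy: every write goes to
# all backing devices, reads rotate round-robin, and a dead device is skipped.
# Names (MirrorSet, devices) are invented for illustration; this is NOT how
# HAMMER2 implements it.
from itertools import cycle

class MirrorSet:
    def __init__(self, devices):
        self.devices = devices            # e.g. two dicts standing in for block devices
        self._next = cycle(range(len(devices)))

    def write(self, block_no, data):
        for dev in self.devices:          # copies=n: the block lands on every device
            if dev is not None:
                dev[block_no] = data

    def read(self, block_no):
        for _ in range(len(self.devices)):
            i = next(self._next)          # round-robin across devices for throughput
            dev = self.devices[i]
            if dev is not None and block_no in dev:
                return dev[block_no]
        raise IOError("block lost on all copies")

disk_a, disk_b = {}, {}
m = MirrorSet([disk_a, disk_b])
m.write(0, b"hello")
m.devices[0] = None                       # one "device" dies...
print(m.read(0))                          # ...the surviving copy still answers: b'hello'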

It's promising. I imagine it will eventually trickle down to the other BSDs once proven and stable. OpenBSD expressed interest in it a while ago, and that nice BSD license makes almost anything possible.

But yeah, if you need redundancy with HAMMER2 now, your best bet is to get a decent hardware RAID controller.
 
Thanks. If what I read about HAMMER v1's portability (written by M. Dillon) applies to HAMMER v2, there is not much chance it would be portable to the other BSDs.
 
Thanks. If what I read about HAMMER v1's portability (written by M. Dillon) applies to HAMMER v2, there is not much chance it would be portable to the other BSDs.

Right. Only time will tell, I guess. The crux of the issue with regard to OpenBSD, though, is that they need a next-gen FS and there aren't exactly many next-gen BSD/ISC-licensed filesystems around.
 
Am I not achieving that with net-p2p/btsync? ;)
No you are not! File synchronization software != a clustering file system. As a matter of fact, file synchronization software != any file system. It is just file synchronization software, whether you are using btsync, rsync, unison, ownCloud or some Windows nonsense. It simply uses whatever default file system is available on a particular OS plus an existing network protocol (in your case BitTorrent), and it is completely oblivious to such concepts as high availability, redundancy, self-healing, history retention, checksumming, COW, etc. However, various marketing departments (Red Hat, for example) are all too eager to sell you XFS+LVM as if it were ZFS or HAMMER.
 
In a sense, an entire server can bite the dust but your files will be safely contained on two or more other servers in different locations.

Am I not achieving that with net-p2p/btsync? ;)

No you are not!

How am I not achieving that? I realize the mechanisms are different. If I have five servers in five countries and one server burns to the ground, and I have four copies of my files remaining, how am I not achieving what gofer_touch said?

Sorry didn't mean to derail the intended discussion. :oops:
 
How am I not achieving that? I realize the mechanisms are different. If I have five servers in five countries and one server burns to the ground, and I have four copies of my files remaining, how am I not achieving what gofer_touch said?

They're referring to the filesystem itself being spread across a network. RAID distributes multiple copies of a file across multiple block devices, but presents you with a single copy in a single directory. If one of the block devices fails, the file still exists on another block device and can be accessed in the same directory as though nothing had gone wrong. Clustered filesystems do the same thing, but with entire storage racks across networks. You could have a system spread across multiple machines in multiple locations. A system installed on a RAID array will survive disk failure; a system installed on a clustered filesystem will survive a building burning down.
 
How am I not achieving that? I realize the mechanisms are different. If I have five servers in five countries and one server burns to the ground, and I have four copies of my files remaining, how am I not achieving what gofer_touch said?
Suppose one of your local copies gets corrupted. btsync happily synchronizes your files. Now you are the happy owner of five corrupt copies, unless the underlying file system was HAMMER1 or ZFS, in which case you will be able to recover an uncorrupted version of the file from HAMMER history or a ZFS snapshot, if you are lucky enough to have one.
Now imagine that you have 10 million tiny image files. Your chances of recovering the corrupt files are zero unless the file system can self-heal (ZFS). One messed-up bit in an image file means the image is badly damaged.

Home users might not see the value, but if you are doing massive data mining you might have hundreds of millions of tiny image files.
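
To make the silent-corruption point concrete, here is a small hypothetical Python sketch (nothing HAMMER- or ZFS-specific; the helper names are made up) of why a stored checksum lets a filesystem notice a flipped bit and repair the block from a second copy, whereas a plain sync tool would just replicate the damage.

Code:
# Hypothetical sketch: detect a flipped bit via a stored checksum and "heal"
# from a mirror copy. Real filesystems do this per block in metadata; the
# structure here is invented purely for illustration.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# "Write" an image to two copies and remember its checksum, like a
# checksumming filesystem would on the original write.
original = b"\x89PNG...pretend this is a tiny image file..."
stored_sum = checksum(original)
copy_a = bytearray(original)
copy_b = bytearray(original)

# Bit rot: one bit flips in copy_a. A dumb sync tool compares nothing and
# would happily push copy_a everywhere, leaving five bad copies.
copy_a[10] ^= 0x01

def read_with_heal(primary, mirror, expected_sum):
    if checksum(bytes(primary)) == expected_sum:
        return bytes(primary)
    # Checksum mismatch: fall back to the mirror and repair the primary.
    if checksum(bytes(mirror)) == expected_sum:
        primary[:] = mirror
        return bytes(mirror)
    raise IOError("both copies corrupt; restore from backup/history")

data = read_with_heal(copy_a, copy_b, stored_sum)
assert checksum(data) == stored_sum and bytes(copy_a) == bytes(copy_b)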
 
Thanks. If what I read about HAMMER v1's portability (written by M. Dillon) applies to HAMMER v2, there is not much chance it would be portable to the other BSDs.
What you read about HAMMER v1 portability does *not* apply to HAMMER v2.
 
This means there is hope (at least from a purely technical point of view) that it would be easier to port to FreeBSD (or even OpenBSD)?

I have some doubts about HAMMER2 ever appearing in FreeBSD. FreeBSD has ZFS and has invested a lot of resources into making it work and perform well on that platform. ZFS has many good years ahead of it on FreeBSD and more and more people are being sold on FreeBSD/ZFS as a solid basis for fileserving and other applications. That and the OpenZFS project continues to bring about really nice features that are quite useful (compressed L2Arc and native encryption among them). Also there are companies that provide a FreeBSD-based/ZFS solution with support (a must for some setups)!

The situation for OpenBSD appears to be quite a different story. They didn't want such a huge amount of code in their tree, and the ZFS license (CDDL) was a no-go from the start. I remember reading a post by an OpenBSD developer who was examining what porting ZFS to OpenBSD would look like, and it was not a pretty picture. ZFS is monolithic in its design, meaning that you can't really port just certain parts of ZFS; you have to take all of them. HAMMER2 by comparison seems to be modular, in that you can have a single-image setup without the clustering features (reducing the size and complexity of the code, but still giving you compression, deduplication, checksumming and other modern filesystem features). You'd still need a hardware RAID card for automatic failover in case a disk dies, but IIRC OpenBSD has a pretty decent softraid in its tree.

Linux will likely get it at some point; they even have a read-only port of HAMMER1, if I am not mistaken.
 
DragonFly 5.0.1 appears to be out
Minor fixes.
Code:
dfly# uname -a
DragonFly dfly.bagdala2.net 5.0-RELEASE DragonFly v5.0.1.4.g32d6b-RELEASE #11: Tue Nov  7 23:59:09 EST 2017     root@dfly.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64
 
I have some doubts about HAMMER2 ever appearing in FreeBSD.
+1
I would be ready to go a step further and wager $100 that HAMMER2 will never be ported to FreeBSD. There is just too much bad blood between the FreeBSD elders and their former superstar developer Matt Dillon.

FreeBSD has ZFS and has invested a lot of resources into making it work and perform well on that platform. ZFS has many good years ahead of it on FreeBSD and more and more people are being sold on FreeBSD/ZFS as a solid basis for fileserving and other applications. That and the OpenZFS project continues to bring about really nice features that are quite useful (compressed L2Arc and native encryption among them). Also there are companies that provide a FreeBSD-based/ZFS solution with support (a must for some setups)!
FreeBSD folks have essentially reimplemented a large part of the Solaris kernel in order to get ZFS running. Even with all the resources of a company like Red Hat, it would take a decade (a couple thousand developer-years) for HAMMER2 to mature to the same level. FreeBSD had good timing (Oracle's acquisition of Sun and the consequent elimination of the entire Sun portfolio). However, the Joyent (SmartOS) people mean business, and Solaris code is alive and well in their products. Linux has also made great strides in getting ZFS working as userland FUSE modules. The next long-term Ubuntu release will feature ZFS. I would not be surprised to see it in Red Hat 8, now that even Linux fanboys admit that BTRFS is vaporware.

The situation for OpenBSD appears to be quite a different story. They didn't want such a huge amount of code in their tree, and the ZFS license (CDDL) was a no-go from the start. I remember reading a post by an OpenBSD developer who was examining what porting ZFS to OpenBSD would look like, and it was not a pretty picture. ZFS is monolithic in its design, meaning that you can't really port just certain parts of ZFS; you have to take all of them. HAMMER2 by comparison seems to be modular, in that you can have a single-image setup without the clustering features (reducing the size and complexity of the code, but still giving you compression, deduplication, checksumming and other modern filesystem features).
I mostly agree with this assessment. OpenBSD first and foremost lacks a decent modern general-purpose file system that will run on all supported architectures, not just on amd64. FFS+softdep is just not good enough, and it appears that FreeBSD with UFS journaling has the upper hand. However, the gold standard from the network-appliance and embedded-devices point of view is WAPBL, which Wasabi Technologies wrote for NetBSD and released under a BSD license after going bankrupt in 2009, IIRC. WAPBL and the automated regression-testing toolbox are probably the only things worth salvaging from the NetBSD shipwreck. The Bitrig guys ported WAPBL to OpenBSD, so it is quite possible to do. Walter Neto, who actually undertook the project on vanilla OpenBSD, has disappeared. However, as the legacy of his effort we know that WAPBL is full of bugs and that very little has been done on it in NetBSD since Wasabi Technologies went belly up.

ZFS is of course not a general-purpose file system but a rather specialized 128-bit storage file system. HAMMER2 appears to be more general, as it looks like DragonFly will be able to boot from it (HAMMER1 can't be used for booting, and the DF guys have been using UFS for that). Many people, myself included, are craving a storage file system usable on OpenBSD (much more than native VMM).

You'd still need a hardware RAID card for automatic failover in case a disk dies, but IIRC OpenBSD has a pretty decent softraid in its tree.
I think you are overestimating how good softraid is. It is true that, unlike DragonFly, which gets its software RAID discipline through the old unmaintained FreeBSD natacontrol utility, softraid is maintained. However, the principal use of OpenBSD softraid is encrypting the entire HDD on a laptop. RAID 1, although functional (I use it on this very desktop),

Code:
# bioctl sd4                                                 
Volume      Status               Size Device
softraid0 0 Online      2000396018176 sd4     RAID1
          0 Online      2000396018176 0:0.0   noencl <sd0a>
          1 Online      2000396018176 0:1.0   noencl <sd1a>
is very crude. It took me 4 days to rebuild a 1TB mirror after accidentally powering off one HDD. That is just not something usable for storage purposes in real life.

The DF people, or should I say alpha male Matt Dillon, publicly declared their commitment to hardware RAID. I actually agree with that decision and always preferred hardware RAID over software RAID until adopting ZFS. The OpenBSD people have never made such statements publicly. It appears that the crude RAID 1 code, the relatively recent RAID 5 code, and the experimental, non-usable RAID 6 code in the OpenBSD tree are mostly the result of the departure (around 2010 IIRC, for political reasons) of the principal developer Marco Peereboom from the project, rather than part of some grand plan.

Linux will likely get it at some point; they even have a read-only port of HAMMER1, if I am not mistaken.
This is very misleading. Tomohiro Kusumi, who "ported" HAMMER1 to Linux, is at best a DragonFly developer (please check his GitHub account) with a history of bombastic announcements related to file systems. I would classify him as a free spirit who likes to tinker with file systems and is actually quite competent. His Linux HAMMER1 port appears to be barely experimental, let alone usable. I would double my original bet to $200 and claim that most Linux people are totally oblivious to the existence of HAMMER1, let alone its "port" to Linux.

If HAMMER1 and its B-tree concept were as good as originally thought, I am sure we would not be waiting for HAMMER2, and Apple would have adopted HAMMER1 instead of developing its own APFS (not to be mistaken for the Andrew File System or the old Apple Filing Protocol used for NAS), which has more or less all the features of HAMMER1 and will soon become the default on OS X after 10 years of development.

I will close this long post by saying that the OpenBSD people proposed an evaluation of HAMMER2 as a Google Summer of Code project two years ago. The project was not adopted, and in the meantime OpenBSD dropped Google Summer of Code entirely, due to the high effort cost on the side of current developers and the little return to the project.
 
Actually, the SoC project was accepted for one developer, but he got hired soon after and notified OpenBSD that he wouldn't be doing the work after all. To reiterate: the "student" dropped the project; it wasn't terminated by OpenBSD.
 
https://www.dragonflybsd.org/release52/

Code:
dfly# uname -a

DragonFly dfly.bagdala2.net 5.2-RELEASE DragonFly v5.2.0-RELEASE #13: Tue Apr 10 21:46:14 EDT 2018     root@dfly.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64

dfly# mount
ROOT on / (hammer, noatime, local)
devfs on /dev (devfs, nosymfollow, local)
/dev/serno/B620550018.s1a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /home (null, local)
/pfs/@@-1:00004 on /usr/obj (null, local)
/pfs/@@-1:00005 on /var/crash (null, local)
/pfs/@@-1:00006 on /var/tmp (null, local)
procfs on /proc (procfs, local)
DATA on /data (hammer, noatime, local)
BACKUP on /backup (hammer, noatime, local)
/dev/serno/5QG00XF0.s1e@DATA on /test-hammer2 (hammer2, local)
/data/pfs/@@-1:00001 on /data/backups (null, local)
/data/pfs/@@-1:00002 on /data/nfs (null, NFS exported, local)
 
How am I not achieving that? I realize the mechanisms are different. If I have five servers in five countries and one server burns to the ground, and I have four copies of my files remaining, how am I not achieving what gofer_touch said?

Sorry didn't mean to derail the intended discussion. :oops:

What you are doing is no different from having 5 hard drives in a single server, each with its own separate filesystem and its own separate mountpoint, and copying files from one to the other 4. But applications can only use a single "master" filesystem for data storage. If the "master" disk dies, you have to restart all your apps, configure them to use a new mountpoint, or fiddle around with mountpoints to get things working again.

HAMMER2 is like having 5 hard drives in a system, configured as a RAID5 array, presented to the system as a single device, with a single filesystem on top. Data is distributed between disks automatically, including parity info for rebuilding corrupted files. If a disk dies, the system carries on. If the disk is replaced, the data on it is rebuilt automatically from the parity.

Now, replace "disk" with "server". A HAMMER2 filesystem is presented as a single "device", a single filesystem, using disks on multiple separate servers.
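
As a rough illustration of the parity idea (single parity, RAID5-style; real arrays stripe blocks and rotate parity across members, and none of this is HAMMER2-specific code), a lost member can be rebuilt by XOR-ing the survivors with the parity:

Code:
# Toy single-parity rebuild: parity = XOR of all data chunks, so any one lost
# chunk equals the XOR of the surviving chunks plus parity. Real RAID5 stripes
# and rotates parity across disks; this just shows the arithmetic.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Four equally sized "disks" of data plus one parity "disk".
disks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = reduce(xor_bytes, disks)

# Disk 2 dies; rebuild its contents from the survivors and the parity.
survivors = [d for i, d in enumerate(disks) if i != 2]
rebuilt = reduce(xor_bytes, survivors + [parity])

assert rebuilt == b"CCCC"
print(rebuilt)    # b'CCCC' -- the replaced disk's data, recomputed from parity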
 