Gvinum RAID5 Volume Freezing When Being Written To

Hi,

I have recently done a fresh install of FreeBSD 9.1-RELEASE amd64 on my server, which previously ran Gentoo Linux. I have set up most services, but I am having a real issue with RAID. I used gvinum to create a RAID5 volume across 3 x 2TB drives, following the instructions in the gvinum(8) manual page. After creating a new filesystem on it and attempting to write to it, the writing process (cp/mv) freezes and I can't even Ctrl-C it. If I open a new shell and try to unmount the volume, that freezes (locks) too. Even issuing kill -9 on the process does not kill it. I've noticed that if I press Ctrl-C a bunch of times and then wait about five minutes, the process finally ends and I get my shell back.
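For reference, the setup steps were roughly along these lines (from memory, so treat the exact commands as approximate; the mount point is just an example):

Code:
# create the RAID5 volume across the three disks; gvinum picks the
# gvinumdrive*/gvinumvolume* names and a default stripe size itself
gvinum raid5 ada1 ada2 ada3

# new filesystem on the resulting volume, then mount it
newfs -U /dev/gvinum/gvinumvolume0
mount /dev/gvinum/gvinumvolume0 /mnt/storage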

I have tried formatting the volume as UFS2 and UFS1, with journalling and without, to no avail, so I think something at a lower level is playing up. I issued a rebuildparity command, which took about 48 hours to complete, but still no good.
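The reformat attempts and the parity rebuild were essentially the following (flags from memory):

Code:
# UFS2 with soft updates journaling, then plain UFS1
newfs -U -j /dev/gvinum/gvinumvolume0
newfs -O 1 /dev/gvinum/gvinumvolume0

# rebuild the RAID5 parity on the plex (this is what ran for ~48 hours)
gvinum rebuildparity gvinumvolume0.p0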

The drives are AHCI SATA.

Any ideas or help would be appreciated. I'm a long-time Linux user and used FreeBSD a fair bit in the past, back in the 5.0-RELEASE days, on an old server.

Here are the details of the volume/plexes/drives:

Code:
gvinum -> l -V
3 drives:
Drive gvinumdrive0:     Device ada1
                Size:    2000398798336 bytes (1907728 MB)
                Used:    2000398314496 bytes (1907728 MB)
                Available:      483840 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2000398450176      483840
Drive gvinumdrive2:     Device ada3
                Size:    2000398798336 bytes (1907728 MB)
                Used:    2000398314496 bytes (1907728 MB)
                Available:      483840 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2000398450176      483840
Drive gvinumdrive1:     Device ada2
                Size:    2000398798336 bytes (1907728 MB)
                Used:    2000398314496 bytes (1907728 MB)
                Available:      483840 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2000398450176      483840

1 volume:
Volume gvinumvolume0:   Size: 4000796628992 bytes (3815456 MB)
                State: up
                Plex  0:        gvinumvolume0.p0        (up),       3726 GB

1 plex:
Plex gvinumvolume0.p0:  Size:   4000796628992 bytes (3815456 MB)
                Subdisks:        3
                State: up
                Organization: raid5     Stripe size: 493 kB
                Flags: 0
                Part of volume gvinumvolume0
                Subdisk 0:      gvinumvolume0.p0.s0
                  state: up     size 2000398314496 (1907728 MB)
                Subdisk 1:      gvinumvolume0.p0.s1
                  state: up     size 2000398314496 (1907728 MB)
                Subdisk 2:      gvinumvolume0.p0.s2
                  state: up     size 2000398314496 (1907728 MB)

3 subdisks:
Subdisk gvinumvolume0.p0.s2:
                Size:    2000398314496 bytes (1907728 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 1009664 (986 kB)
                Drive gvinumdrive2 (gvinumdrive2) at offset 135680 (132 kB)
                Flags: 0
Subdisk gvinumvolume0.p0.s1:
                Size:    2000398314496 bytes (1907728 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 504832 (493 kB)
                Drive gvinumdrive1 (gvinumdrive1) at offset 135680 (132 kB)
                Flags: 0
Subdisk gvinumvolume0.p0.s0:
                Size:    2000398314496 bytes (1907728 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 0 (0  B)
                Drive gvinumdrive0 (gvinumdrive0) at offset 135680 (132 kB)
                Flags: 4
gvinum ->
 
wblock@ said:
How much memory does the system have? Is there a particular reason not to use ZFS?

It has 4GB of RAM, with 2GB allocated to a "tmpfs" (an md(4) memory disk, /dev/md0) mounted at /var/tmp. This is because the root drive is an SSD, and I want to do ports compilation etc. in RAM.
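That mount is just an mdmfs(8) entry in /etc/fstab, something like this (size and options approximate):

Code:
# 2GB swap-backed memory disk for ports builds
md   /var/tmp   mfs   rw,-s2g   0   0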

I chose not to use ZFS because it is inflexible when it comes to adding drives. Once you establish a vdev, you can't add drives to it; you have to add another three drives as a second vdev and then add that to the overall zpool.
I suppose the benefit is that, in the right circumstances, I could survive a two-drive failure, if it were one drive in each vdev.
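In other words, the pool only grows a whole vdev at a time, something like this (hypothetical pool and disk names):

Code:
# initial pool: a single 3-disk raidz vdev
zpool create tank raidz ada1 ada2 ada3

# the existing vdev can't be widened later; instead you attach
# a second raidz vdev and the pool stripes across both
zpool add tank raidz ada4 ada5 ada6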

I might just have to go that way, because this has stumped me and it seems everyone is moving toward ZFS.
 
I haven't seen any (g)vinum users in years. 4 GiB of RAM is not much memory for parallel ports builds.
My build machine is an AMD FX-8350 with 32 GiB of RAM and ZFS on a stripe of two Intel 520 SSDs. Even with poudriere, which clones a jail per port and installs the dependencies via pkgng, it's not I/O bound. From my experience, the simplest solution would be to at least double your RAM and run a ZFS-on-root system without hacks like tmpfs.
 
Good SSDs should last a long time. And if they are not good, limiting writes may not save them anyway.

Also: I feel that tmpfs(5) is a really elegant idea. I use it for /tmp and /usr/obj. The implementation may not be perfect, but it's never given me any trouble.
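If it helps, mine are just tmpfs(5) entries in /etc/fstab, roughly like this (sizes left at the defaults here; add a size= option if you want to cap them):

Code:
tmpfs   /tmp       tmpfs   rw,mode=1777   0   0
tmpfs   /usr/obj   tmpfs   rw             0   0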
 
I am not sure I made it clear, but the root (/) partition is standard UFS2 on a cheapo 32GB Kingston SSD with TRIM enabled.
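TRIM there just means the UFS TRIM flag, set with tunefs(8) on the unmounted root partition, along these lines (the partition name is from my layout):

Code:
tunefs -t enable /dev/ada0p2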

The RAID5 volume is 3 x 2TB hard drives that I am using for a Samba file server. The base system is running fine on the SSD.

Crest said:
I haven't seen any (g-)vinum users in years.

Yeah, this is something I am wondering about, because there's not a lot of discussion going on about it anymore. I had Gentoo Linux installed on the machine before wiping it and moving to FreeBSD, and was running LVM software RAID5 there. It was striking how simply gvinum achieved the same thing as LVM: just tell it I want a RAID5 across these drives, and done. I think it's quite underrated. It would just be nice if it actually worked :)

Crest said:
4GiB RAM is not much memory for parallel ports builds.

Yeah, well, when I was running Gentoo it seemed fine. I only had space issues when building large packages such as MySQL or PHP.


On my gvinum issue, one thing I didn't mention is that when the process locks up, nothing appears in the debug or messages logs, or in dmesg.
 
Same here, running 9.1-STABLE as of 05/06/2013. I had a quick look at the FreeBSD bug tracker and found that the issue was reported more than six years ago, still without a fix:

http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/109762

It seems no one really cares about gvinum anymore, which I think is a shame. I have a system that cannot take more than 4 GB of RAM, so I am stuck looking for alternatives to ZFS. gvinum seemed like the one promising solution for me, but since I was able to break it simply by dd'ing a couple of hundred megabytes from /dev/random onto it, I am frustrated...
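For reference, the write that wedges it on my side is nothing exotic, essentially just a dd straight onto the volume (the volume name here is borrowed from the listing above; mine is named similarly):

Code:
dd if=/dev/random of=/dev/gvinum/gvinumvolume0 bs=1m count=200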
 