gvinum RAID5 8.0-R issues

Hi there, I'm kinda new to all of this. :stud
My goal is to setup a software RAID5 array on FreeBSD 8.0-RELEASE.
I've read that one of the available solutions is gvinum; however, due to difficulties I was unable to get it working.

There is a high chance that I'm doing something horribly wrong here, so please point out my mistakes.

The exact steps I took to set all things up on VMware® Workstation 7.0.0 build-203739:
1. Standard installation, auto partitioning.
2. Standard MBR.
3. Developer package and ports.
4. Created user 'testuser' in group 'wheel'.
5. Enabled inetd, uncommented telnet line.
6. Restarted, added 3x 2GB IDE HDDs (ad0, ad1, ad3; the system drive is identified as da0).
7. Logged in as root on ttyv0.
8. Connected host-to-guest via telnet with PuTTY.
9. For each of ad0, ad1, ad3:

[CMD=""]freebsd-teststation# gpart create -s mbr ad0
freebsd-teststation# gpart add -b 63 -s 4194224 -t freebsd ad0
freebsd-teststation# gpart create -s bsd ad0s1[/CMD]
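(The gpart show output in step 10 also shows a freebsd-vinum partition at offset 16 inside each BSD label; the step that created it isn't shown above. Presumably it was something along these lines - the -i 8, -b and -s values are assumptions matching that output:)
[CMD=""]freebsd-teststation# gpart add -i 8 -b 16 -s 4194146 -t freebsd-vinum ad0s1[/CMD]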

10. As a result:

Code:
freebsd-teststation# gpart show
=>     63  4194225  ad0  MBR  (2.0G)
       63  4194162    1  freebsd  (2.0G)
  4194225       63       - free -  (32K)

=>     63  4194225  ad1  MBR  (2.0G)
       63  4194162    1  freebsd  (2.0G)
  4194225       63       - free -  (32K)

=>     63  4194225  ad3  MBR  (2.0G)
       63  4194162    1  freebsd  (2.0G)
  4194225       63       - free -  (32K)

=>      63  16777152  da0  MBR  (8.0G)
        63  16771797    1  freebsd  [active]  (8.0G)
  16771860      5355       - free -  (2.6M)

=>       0  16771797  da0s1  BSD  (8.0G)
         0    864256      1  freebsd-ufs  (422M)
    864256   1360219      2  freebsd-swap  (664M)
   2224475   1454080      4  freebsd-ufs  (710M)
   3678555    772096      5  freebsd-ufs  (377M)
   4450651  12321146      6  freebsd-ufs  (5.9G)

=>      0  4194162  ad0s1  BSD  (2.0G)
        0       16         - free -  (8.0K)
       16  4194146      8  freebsd-vinum  (2.0G)

=>      0  4194162  ad1s1  BSD  (2.0G)
        0       16         - free -  (8.0K)
       16  4194146      8  freebsd-vinum  (2.0G)

=>      0  4194162  ad3s1  BSD  (2.0G)
        0       16         - free -  (8.0K)
       16  4194146      8  freebsd-vinum  (2.0G)
11. Now gvinum:
[CMD=""]freebsd-teststation# gvinum
gvinum -> raid5 ad0s1h ad1s1h ad3s1h
gvinum -> list -V[/CMD]
Code:
3 drives:
Drive gvinumdrive2:     Device ad3s1h
                Size:       2147267072 bytes (2047 MB)
                Used:       2147221504 bytes (2047 MB)
                Available:       45568 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2147357184          45568
Drive gvinumdrive1:     Device ad1s1h
                Size:       2147267072 bytes (2047 MB)
                Used:       2147221504 bytes (2047 MB)
                Available:       45568 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2147357184          45568
Drive gvinumdrive0:     Device ad0s1h
                Size:       2147267072 bytes (2047 MB)
                Used:       2147221504 bytes (2047 MB)
                Available:       45568 bytes (0 MB)
                State: up
                Flags: 0
                Free list contains 1 entries:
                   Offset            Size
                2147357184          45568

1 volume:
Volume gvinumvolume0:   Size: 4294443008 bytes (4095 MB)
                State: up
                Plex  0:        gvinumvolume0.p0        (up),       4095 MB

1 plex:
Plex gvinumvolume0.p0:  Size:   4294443008 bytes (4095 MB)
                Subdisks:        3
                State: up
                Organization: raid5     Stripe size: 256 kB
                Flags: 0
                Part of volume gvinumvolume0
                Subdisk 0:      gvinumvolume0.p0.s0
                  state: up     size  2147221504 (2047 MB)
                Subdisk 1:      gvinumvolume0.p0.s1
                  state: up     size  2147221504 (2047 MB)
                Subdisk 2:      gvinumvolume0.p0.s2
                  state: up     size  2147221504 (2047 MB)

3 subdisks:
Subdisk gvinumvolume0.p0.s2:
                Size:       2147221504 bytes (2047 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 524288 (512 kB)
                Drive gvinumdrive2 (gvinumdrive2) at offset 135680 (132 kB)
                Flags: 0
Subdisk gvinumvolume0.p0.s1:
                Size:       2147221504 bytes (2047 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 262144 (256 kB)
                Drive gvinumdrive1 (gvinumdrive1) at offset 135680 (132 kB)
                Flags: 0
Subdisk gvinumvolume0.p0.s0:
                Size:       2147221504 bytes (2047 MB)
                State: up
                Plex gvinumvolume0.p0 at offset 0 (0  B)
                Drive gvinumdrive0 (gvinumdrive0) at offset 135680 (132 kB)
                Flags: 0
gvinum ->

I thought that at this stage initialisation should begin, but it didn't.
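(For what it's worth, gvinum(8) also has checkparity/rebuildparity subcommands for RAID5 plexes; a sketch, assuming the plex name from the listing above:)
[CMD=""]freebsd-teststation# gvinum checkparity gvinumvolume0.p0
freebsd-teststation# gvinum rebuildparity gvinumvolume0.p0[/CMD]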
Anyway, I proceeded to:
[CMD=""]freebsd-teststation# newfs -U /dev/gvinum/gvinumvolume0
freebsd-teststation# mkdir /mnt/raid5
freebsd-teststation# mount /dev/gvinum/gvinumvolume0 /mnt/raid5/[/CMD]

And finally:

Code:
freebsd-teststation# dd if=/dev/zero of=/mnt/raid5/file count=128 bs=512k
^C86+0 records in
85+0 records out
44564480 bytes transferred in 464.141622 secs (96015 bytes/sec)
That should speak for itself. I achieved similarly thrilling results on real hardware, and then confirmed them on the VM.

The CPU is idle.
My guess is that I messed something up badly.
What am I doing wrong?
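(Watching the disks while the dd runs at least shows whether the writes are reaching ad0/ad1/ad3 at all; a sketch using gstat(8):)
[CMD=""]freebsd-teststation# gstat -f 'ad[013]'[/CMD]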
 
darekxan said:
5. Enabled inetd, uncommented telnet line.

8. Connected host-to-guest via telnet with PuTTY.
Ok. Stop right here. Time to lose this really bad habit. Don't use telnet, it's horribly insecure. Everyone can 'listen' in on what you do because it's a clear text protocol. If you su over telnet everyone will also see your root password. Disable inetd and telnet. Use secure shell. You already have PuTTY and it will work exactly the same except it's infinitely more secure.

Just add to /etc/rc.conf:
Code:
sshd_enable="YES"
Then start sshd with # /etc/rc.d/sshd start. Connect as usual with PuTTY but instead of telnet use ssh.
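To actually disable inetd as well, a sketch of the matching change (assuming inetd was enabled through rc.conf):
Code:
inetd_enable="NO"
and stop it with # /etc/rc.d/inetd stop.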


As for gvinum RAID5, this is what I used to set up mine: http://www.schmut.com/howto/freebsd-software-raid-howto
 
I know about telnet/ssh, but since it's just VM guest-host there is no sensitive data sent.

Anyway, to the point.

Since 8.0-RELEASE, bsdlabel -e is no longer functional. It is recommended to use gpart instead.
Anyway, using the following config file:
Code:
drive r0 device /dev/ad0s1h
drive r1 device /dev/ad1s1h
drive r2 device /dev/ad3s1h
volume raid5
    plex org raid5 512k
    sd drive r0
    sd drive r1
    sd drive r2
results in the same unusable array.
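(For reference, a config file like the one above gets applied with gvinum's create subcommand; a sketch, assuming it was saved as /root/raid5.conf:)
[CMD=""]freebsd-teststation# gvinum create /root/raid5.conf[/CMD]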

I tried doing the same steps on FreeBSD 7.3, and everything works.
Something must have changed in 8.0.
 
I'm currently running 8.0-STABLE but I set up my gvinum when I was running 7-STABLE.

But bsdlabel -e still works for me. Mind you, I didn't use gpart to create the slice; I used fdisk for that.
Code:
root@molly:~#bsdlabel -e ad4s1
# /dev/ad4s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  b:   524288       16      swap
  c: 976773105        0    unused        0     0         # "raw" part, don't edit
  d:  3145728   524304     vinum
  e: 973103073  3670032     vinum
Code:
root@molly:~#gvinum list
8 drives:
D t1                    State: up       /dev/ad4s1d     A: 0/1535 MB (0%)
D t2                    State: up       /dev/ad5s1d     A: 0/1535 MB (0%)
D t3                    State: up       /dev/ad6s1d     A: 0/1535 MB (0%)
D t4                    State: up       /dev/ad7s1d     A: 0/1535 MB (0%)
D r1                    State: up       /dev/ad4s1e     A: 0/475147 MB (0%)
D r2                    State: up       /dev/ad5s1e     A: 0/475147 MB (0%)
D r3                    State: up       /dev/ad6s1e     A: 0/475147 MB (0%)
D r4                    State: up       /dev/ad7s1e     A: 0/475147 MB (0%)

2 volumes:
V temp                  State: up       Plexes:       1 Size:       6142 MB
V raid5                 State: up       Plexes:       1 Size:       1392 GB

2 plexes:
P temp.p0             S State: up       Subdisks:     4 Size:       6142 MB
P raid5.p0           R5 State: up       Subdisks:     4 Size:       1392 GB

8 subdisks:
S temp.p0.s0            State: up       D: t1           Size:       1535 MB
S temp.p0.s1            State: up       D: t2           Size:       1535 MB
S temp.p0.s2            State: up       D: t3           Size:       1535 MB
S temp.p0.s3            State: up       D: t4           Size:       1535 MB
S raid5.p0.s0           State: up       D: r1           Size:        464 GB
S raid5.p0.s1           State: up       D: r2           Size:        464 GB
S raid5.p0.s2           State: up       D: r3           Size:        464 GB
S raid5.p0.s3           State: up       D: r4           Size:        464 GB
 
fdisk doesn't work:
Code:
teststation# fdisk -BI ad0
******* Working on device /dev/ad0 *******
fdisk: Class not found
Same with bsdlabel:
Code:
teststation# bsdlabel -e da0s1
# /dev/da0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   890880        0    4.2BSD        0     0     0
  b:   701454   890880      swap
  c: 16771797        0    unused        0     0         # "raw" part, don't edit
  d:  1546240  1592334    4.2BSD        0     0     0
  e:   811008  3138574    4.2BSD        0     0     0
  f: 12822215  3949582    4.2BSD        0     0     0
...
:q
bsdlabel: Class not found
re-edit the label? [y]:
Code:
teststation# uname -a
FreeBSD teststation.localdomain 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:02:08 UTC 2009     root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
 
I did some digging. According to "What's cooking for FreeBSD 8?":
fdisk is obsolete.
GEOM_PART becomes the default slicer.
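In gpart terms, the old fdisk -BI ad0 dance would look roughly like the commands from post #1, plus bootcode if the disk needs to be bootable (a sketch, not a verified recipe):
[CMD=""]freebsd-teststation# gpart create -s mbr ad0
freebsd-teststation# gpart add -b 63 -s 4194224 -t freebsd ad0
freebsd-teststation# gpart bootcode -b /boot/mbr ad0[/CMD]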

I've tried using the following:
- sade(8)/gpart(8) for partitioning in different ways,
- a config file for gvinum,
- entire drives (ad0),
- entire slices (ad0s1),
- partitions (ad0s1h, ad0s1d - created with sade).
No matter what I do, the array won't initialise and is unusable.

What concerns me is that I can't find anything wrong in my attempts. There were so many of them, following 2 or 3 guides (mostly outdated), that somehow I should have managed to get it working by now...

Anyway, did anyone manage to create a RAID5 array and get it working on a fresh 8.0-RELEASE installation?

All of this refers to 8.0-RELEASE amd64.
I have yet to try 8.0-STABLE...
 
Have you tried loading geom_mbr.ko, geom_label.ko and geom_bsd.ko before the attempts you list above?
Just a hunch; it might help.
Unless of course it does not apply in this context.
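If it does apply, one way to have them loaded automatically at boot is via the standard /boot/loader.conf knobs:
Code:
geom_mbr_load="YES"
geom_bsd_load="YES"
geom_label_load="YES"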
 
It seems geom_label couldn't be loaded (as it is already in the kernel?):
Code:
80-STABLE# kldload geom_label.ko
kldload: can't load geom_label.ko: File exists
The rest load just fine.
Code:
80-STABLE# kldstat
Id Refs Address            Size     Name
 1    6 0xffffffff80100000 d33fa8   kernel
 3    1 0xffffffff81035000 2107     geom_bsd.ko
 4    1 0xffffffff81038000 1489     geom_mbr.ko

fdisk is complaining about Geom not found: "ad0", while it obviously should have access to it. After running fdisk the drive appears sliced incorrectly - two "s1" slices are found.
Code:
80-STABLE# ls /dev/ | grep ad0
ad0
80-STABLE# fdisk -BI ad0
******* Working on device /dev/ad0 *******
fdisk: invalid fdisk partition table found
fdisk: Geom not found: "ad0"
80-STABLE# ls /dev/ | grep ad0
ad0
ad0s1
ad0s1
Bsdlabel appears to be broken when these modules are loaded.
Code:
80-STABLE# bsdlabel -wB ad0s1
80-STABLE# ls /dev/ | grep ad0
ad0
ad0s1
ad0s1
ad0s1a
ad0s1a
ad0s1a
ad0s1a
ad0s1c
ad0s1c
ad0s1ca
ad0s1ca
80-STABLE#
However, "bsdlabel -e" seems to work.

I think that these problems (bsdlabel, fdisk, and their replacement by gpart or sade) are not my major concern.

Following my steps from post #1 on 7.3-RELEASE results in a working and functional RAID5 array.

My thoughts are:
- gvinum raid5 is somehow broken on 8.0-RELEASE, 8.0-STABLE and 9.0-CURRENT
- bsdlabel and fdisk are now obsolete and legacy tools, replaced with gpart and sade

Am I wrong here? Where should I find such info?
 
By broken, do you mean broken in the way it's set up? Something there seems to throw a spanner in the works. As I said, mine was set up when I was running 7-STABLE. It's been a few months since I upgraded to 8.0-STABLE.

The only problem I have is that there's a directory on that gvinum volume with lots of subdirectories containing files and more subdirectories, seemingly going on forever. When I try to remove that directory I can hear all the disks churning away, but after a while the churning stops and the whole system just stalls. I can still log in locally and sometimes even remotely, but doing a simple [cmd=]ls[/cmd] will stall that session too. Anyway, as long as I don't touch that directory everything seems to work fine :e

Here are my figures, just for reference. Performance probably sucks somewhat as the volume is pretty filled up x(

Code:
dice@molly:~>uname -a
FreeBSD molly.dicelan.home 8.0-STABLE FreeBSD 8.0-STABLE #0: Tue Mar  9 02:28:09 CET 2010     root@molly.dicelan.home:/usr/obj/usr/src/sys/MOLLY8  i386
dice@molly:~>dd if=/dev/zero of=/storage/MediaTomb/Movies/test count=128 bs=512k
128+0 records in
128+0 records out
67108864 bytes transferred in 199.082891 secs (337090 bytes/sec)
dice@molly:~>mount | grep storage
/dev/gvinum/raid5 on /storage (ufs, NFS exported, local)
dice@molly:~>df -h /storage
Filesystem           Size    Used   Avail Capacity  Mounted on
/dev/gvinum/raid5    1.3T    1.2T    4.3G   100%    /storage
 
darekxan said:
Broken, I mean non-functional, bugged.

Like I said, it's also filled up. That certainly doesn't do wonders for its performance :e

Reading, however, is fine:
Code:
dice@molly:~>dd if=/storage/MediaTomb/Movies/test of=/dev/null bs=512k
128+0 records in
128+0 records out
67108864 bytes transferred in 0.041292 secs (1625228659 bytes/sec)
 
Why don't you try ZFS? When I set this machine up, ZFS had only recently been added.

I do want to switch but that's going to take quite some work and I never seem to have the time for it.
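Roughly speaking, with three disks like yours a raidz pool is the ZFS counterpart of a RAID5 volume; a minimal sketch (untested here), assuming the same ad0/ad1/ad3 disks and a pool named tank:
Code:
# kldload zfs
# zpool create tank raidz ad0 ad1 ad3
# echo 'zfs_enable="YES"' >> /etc/rc.conf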
 
Using the following config file:
Code:
drive disk_1 device /dev/ad0
drive disk_2 device /dev/ad1
drive disk_3 device /dev/ad3
volume raid5
    plex org raid5 512k
    sd drive disk_1
    sd drive disk_2
    sd drive disk_3
After newfs -U and mounting:
On 7.3-RELEASE:
Code:
73-RELEASE# dd if=/dev/zero of=/mnt/raid/file count=128 bs=512k
128+0 records in
128+0 records out
67108864 bytes transferred in 1.821543 secs (36841768 bytes/sec)
7.3-RELEASE#
On 8.0-STABLE:
(it took too long, so I hit ^C):
Code:
80-STABLE# dd if=/dev/zero of=/mnt/raid/file count=128 bs=512k
^C60+0 records in
59+0 records out
30932992 bytes transferred in 98.994957 secs (312470 bytes/sec)
80-STABLE#
I've come to the point where I'm quite sure about what I'm doing. Maybe 8.0 needs another way to set things up?
 
Thanks for the suggestion SirDice, I will try ZFS and report later.
(I don't see a post-edit feature; am I blind from staring at consoles, or is there really not one?)
 
ZFS performance:
Code:
80-STABLE# dd if=/dev/zero of=/tank/file bs=512k count=2048
2048+0 records in
2048+0 records out
1073741824 bytes transferred in 7.158559 secs (149994128 bytes/sec)
80-STABLE#
Shocked. Why would I bother with gvinum?
 