bfreek said: 1gb ecc ram.
olav said: This might be the same problem I had. Memory leak (disappearing). It happens with the combination of 8.2-RELEASE and Samba.
Please elaborate on that. What's up with that (disappearing) memory leak?
jem said: Sorry I didn't point this out earlier, but that there is your problem. ZFS wants LOTS of memory. I'd strongly recommend at least 4GB for a system running raidz pools. It's why I threw 8GB RAM into my MicroServer.
I now have 5GB of RAM put into it. Nothing really changed -.-
olav said: It's a bug in 8.2-RELEASE; a patch for it is already available in 8.2-STABLE. Use top to check if all your memory is there.
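A quick way to see whether the kernel still sees all of it (the commands are standard FreeBSD, but treat this as a rough sketch):
Code:
$ sysctl hw.physmem hw.realmem
$ sysctl kstat.zfs.misc.arcstats.size
$ top -b | head -8
If a big chunk of memory is neither free, nor accounted to the ARC, nor to any process in top, that's the leak showing itself.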
bfreek said: I now have 5GB of RAM put into it. Nothing really changed -.-
AndyUKG said: Have you got a link to a bug report or anything? What does the leak affect? ZFS?
ta Andy.
vermaden said: The Samsung F3 ones are fast and low on power at the same time (available sizes 500GB/1TB):
http://www.tomshardware.com/reviews/2tb-hdd-7200,2430-10.html
The only drawback is a little worse 'access time' than in WD Black/Blue drives.
Not sure about the 2TB drives, but the 1.5TB Samsungs are hopeless IME. I bought 5 of them, have RMA'd 3 of them, and am about to send back the 4th, in less than 1 year. I can't understand how they get good reviews on Newegg.
Interesting, thanks!
carlton_draught said: Not sure about the 2TB drives, but the 1.5TB Samsungs are hopeless IME. I bought 5 of them, have RMA'd 3 of them and about to send back the 4th, in less than 1 year. I can't understand how they get good reviews on Newegg.
Because they test them under unrealistic circumstances, that is, they buy them and test them for a few days. This is nothing near a real use case...
bfreek said: Because they test them under unrealistic circumstances, that is, they buy them and test them for a few days. This is nothing near a real use case...
Well, to Newegg's credit, most of the 3-star-or-less reviews involve an RMA. So judging reliability by the combined percentage of 1-3 star reviews is not a terrible methodology in the scheme of things. If you have a better one, I'm all ears. Perhaps only use reviews from the last 6 months and do the same thing. If I do this to the drive I bought, it jumps from 26% to 40%. That's only 56 reviews to judge them on (down from 505), but I suppose as long as your sample size is 30+ it's reliable enough.
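(To spell out the arithmetic, it's nothing more than 1-3 star reviews divided by total reviews; the star counts below are made up, purely for illustration:)
Code:
$ echo "12 6 4 10 23" | awk '{bad=$1+$2+$3; all=$1+$2+$3+$4+$5; printf "%d of %d reviews (%.0f%%) are 1-3 star\n", bad, all, 100*bad/all}'
22 of 55 reviews (40%) are 1-3 star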
I have very bad experience with Samsung drives in the long run - return rates of 60-70% within 1-2 years.
Well, I'm at 80% in 1 year. We'll see how I go.
Sidenote: The distinction between consumer and non-consumer drives is just bullshit, as computers tend to run for hours each day even at home.
Exactly. I'm not sure what the solution is. At least Hitachi are honest enough to state 24x7 usage, though they qualify that and say that it's for low duty cycle.
[karli@main ~]$ zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE   READ WRITE CKSUM
        pool1              ONLINE     0     0     0
          label/rack1:1    ONLINE     0     0     0

errors: No known data errors

  pool: pool2
 state: ONLINE
 scrub: scrub completed after 2h59m with 0 errors on Fri Apr 8 16:48:04 2011
config:

        NAME               STATE   READ WRITE CKSUM
        pool2              ONLINE     0     0     0
          raidz2           ONLINE     0     0     0
            label/rack-1:2 ONLINE     0     0     0
            label/rack-1:3 ONLINE     0     0     0
            label/rack-1:4 ONLINE     0     0     0
            label/rack-1:5 ONLINE     0     0     0
            label/rack-2:1 ONLINE     0     0     0
            label/rack-2:2 ONLINE     0     0     0
            label/rack-2:3 ONLINE     0     0     0
            label/rack-2:4 ONLINE     0     0     0
        cache
          label/cache1     ONLINE     0     0     0
          label/cache2     ONLINE     0     0     0

errors: No known data errors
carlton_draught said: And when Intel publishes, along with their new 320 series, that they had <1% return rates for their 2nd gen, it's believable to me. And it cross-checks with what I see on Newegg and other sites (e.g. Amazon). And it is why, when I go to buy another SSD, I will be buying Intel despite other drives being reputedly faster.
Sebulon said: Recently bought a WD30EZRS, a 3TB 4K drive, and actually didn't get what all the fuss was about. I've been using it now for some time with zfs send/recv, as a secondary pool, replicating my primary for disaster recovery (roughly the send/recv pattern sketched below) - no problemo. I just shrugged it off, thinking that I'm just lucky, or that it liked me better than everyone else... =)
Well... it didn't. Yesterday I set into place eight 1TB drives in raidz2 for my primary pool and (until it's filled) one 3TB for the secondary, adding another 3TB later to match the size of the primary. So I had switched the polarity of the pools, so that I would be able to crash my previous primary 4-drive pool and build this:
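(The replication mentioned above is nothing fancier than recursive snapshots piped through zfs send/receive; the pool and snapshot names here are placeholders, not my actual layout:)
Code:
# zfs snapshot -r tank@replica1
# zfs send -R tank@replica1 | zfs receive -Fdu backup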
carlton_draught said: Exactly. I'm not sure what the solution is. At least Hitachi are honest enough to state 24x7 usage, though they qualify that and say that it's for low duty cycle.
fadolf said: I'm not sure if I get this right. I have 6 of those drives, which I plan to use for a raidz2 pool. Do I have to worry about alignment if I use the gnop method to create the zpool? And if so, would this achieve what is necessary?
Code:
for i in /dev/ada*;do dd if=/dev/zero of=$i bs=1m count=1;done
for i in {0..5};do glabel label disk$i /dev/ada$i;done
for i in /dev/ada*;do gnop create -S 4096 $i;done
for i in /dev/ada*;do gpart create -s gpt $i;done
for i in /dev/ada*;do gpart add -t freebsd-zfs -b 2048 $i.nop;done
zpool create storage raidz2 ada0p1.nop ada1p1.nop ada2p1.nop ada3p1.nop ada4p1.nop ada5p1.nop
AndyUKG said: I've only played around a little with gnop, but I have an idea that if you create a gnop device for, say, ada0, that doesn't mean that ada0p1.nop will also exist. If I'm wrong, then your steps look good; if I'm right, then you will need to do a gnop create for the ada*p1 devices... Apart from that, yes, you need to worry about alignment, but you are good for that using the "-b 2048" option with gpart.
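In other words (untested, but that is the idea): partition first, then put a 4k gnop on each partition and build the pool from those:
Code:
for i in /dev/ada?p1; do gnop create -S 4096 $i; done
zpool create storage raidz2 ada0p1.nop ada1p1.nop ada2p1.nop ada3p1.nop ada4p1.nop ada5p1.nop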
phoenix said: Why are you labelling the disks, then partitioning the disks directly, then using the partitions to create the pool?
A simpler method is to just label the disks, create the gnop devices using the labels, then create the pool using the gnop devices:
Code:
$ for i in 0 1 2 3 4 5; do glabel label disk0$i ada$i; done
$ for i in 00 01 02 03 04 05; do gnop create -S 4096 label/disk$i; done
$ zpool create storage raidz2 label/disk00.nop label/disk01.nop label/disk02.nop label/disk03.nop label/disk04.nop label/disk05.nop
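(If you want to double-check that the gnop providers really report 4096-byte sectors before creating the pool, diskinfo will tell you; the label name is just taken from the example above:)
Code:
$ diskinfo -v /dev/label/disk01.nop | grep sectorsize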
for i in {0..5};do glabel label disk$i /dev/ada$i;done
for i in {0..5};do glabel label disk$i /dev/label/disk$i;done
for i in /dev/ada*;do gpart create -s gpt $i;done
for i in label/disk*;do gpart add -t freebsd-zfs -b 2048 $i;done
gnop create -S 4096 label/disk0p1
zpool create storage raidz2 label/disk0p1.nop label/disk1p1 label/disk2p1 label/disk3p1 label/disk4p1 label/disk5p1
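If I understand the gnop trick right, one 4k-reporting member per vdev is enough, because ZFS picks the vdev's ashift from the largest sector size it sees at creation time. You can check what you ended up with using zdb:
Code:
zdb | grep ashift
An ashift of 12 means 4096-byte alignment, 9 means plain 512-byte sectors.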
Mind the difference between # gnop create -s 4096 and # gnop create -S 4096 (lower-case -s sets the size of the new provider, capital -S sets its sector size); the following is what you want:
# gpart create -s GPT ada0
# gpart add -b 2048 -t freebsd-zfs ada0
# gpart modify -i 1 -l disk1 ada0
# gnop create -S 4096 gpt/disk1
# zpool create poolname gpt/disk1.nop
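Once the pool has been created against the .nop provider, you should be able to drop it again without losing the 4k alignment (pool and label names as in the example above):
Code:
# zpool export poolname
# gnop destroy gpt/disk1.nop
# zpool import poolname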
phoenix said: However, if you use the entire disk, you automatically get proper alignment, since you start at sector 0.
geom label list ada0
Geom name: ada0
Providers:
1. Name: label/disk0
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 3907029167
   length: 2000398933504
   index: 0
Consumers:
1. Name: ada0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e2