OS disk for ZFS system and M4A785TD-M Hardware Compatibility

I plan to build this system as a server for a couple of years...

AMD 235e
Asus mATX M4A785TD-M EVO Motherboard
4GB DDR3 ECC RAM
Seasonic Micro Atx SS300-SFD 300 Watt
3x 1.5TB Samsung hard disks
OS Disk???

I plan to put the 3x 1.5TB disks into a RAIDZ and, for ease, boot from something else.

I was thinking of an SSD, something like the entry-level 40GB Intel:
http://www.aria.co.uk/Products/Comp...id+State+Hard+Drive+-+Retail+?productId=39253

Either that or the most reliable 3.5"/2.5" SATA2 disk I can get my hands on.

Are these up to the job? I've read of some people using them as ZFS caches, but surely that will wear one out in no time, even if I only allocate 20GB. For the OS I could move bits of /var and /tmp to a RAM disk(?)
http://forums.freebsd.org/showthread.php?t=10210

In addition, I can't actually find any of the ICs in the hardware support list, but I have found this post which suggests the peripherals will work:
http://lists.freebsd.org/pipermail/freebsd-questions/2010-January/210725.html

Chipset: AMD 785G.
Storage Controller: SB710.
Firewire: JMB381
LAN: RTL8112L
GPU: ATI HD 4200 (totally overkill but hey-ho).

Thanks for any advice...
 
The suitability of the 40GB Intel SSD for your particular case depends on your anticipated workload. You could, for example, cut it up into a 5GB root/base partition, a 5GB swap partition, and use the remaining 30GB as an L2ARC device for your pool.
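
For reference, carving the SSD up that way with gpart might look roughly like the sketch below. This is only an illustration: the device name (ad4) is made up, and older gpart versions want sector counts rather than the size suffixes used here.

    # create a GPT scheme on the SSD (assuming it shows up as ad4)
    gpart create -s gpt ad4
    # small boot partition + boot code so the system can boot off the SSD
    gpart add -s 64k -t freebsd-boot ad4
    gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ad4
    # 5GB UFS root, 5GB swap, remainder reserved for L2ARC
    gpart add -s 5g -t freebsd-ufs ad4
    gpart add -s 5g -t freebsd-swap ad4
    gpart add -t freebsd-zfs ad4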

The slow write speed (35MB/s) of that particular SSD model will not be an issue for you: L2ARC writes are batched up and written to the device sequentially, in a rotor fashion, at a throttled rate anyway. Even under very heavy use you are unlikely to see more than 8MB/s of writes to your L2ARC, so you will not be crippled by the write speed, contrary to what a lot of misinformed people are likely to tell you. However, there are two things you should be aware of:

1) The 80/160GB X25-M SSDs from Intel, which are a step up from your choice, are specced for 24/7 operation for 5 years straight at 20GB/day of writes. That is a fair amount, and for desktop use you aren't likely to have any issues (the X25-M in my heavily used Win7 desktop is averaging about 14GB/day of writes).

That being said, you could have a vastly different workload; I don't know. I know of one person for whom these consumer SSDs are completely unsuitable: his L2ARC is hammered so heavily it has to absorb about 130GB/day of writes, meaning an X25-M would most likely drop dead within a year. Due to that workload, he is forced to use the much more expensive X25-E SLC drives, which can cope with this kind of punishment.

2) Another overlooked issue is L2ARC bookkeeping. For every 1GB of L2ARC in use, the system needs roughly 25MB of RAM to keep track of things. With a 30GB L2ARC under active use, expect ~750MB of RAM to be consistently tied up by L2ARC bookkeeping alone.
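
If you want to watch both of these effects on a live system, the ZFS sysctls expose them. The names below are as found on recent FreeBSD ZFS builds, so treat them as an assumption and check sysctl -a on your own box.

    # L2ARC fill throttle in bytes per write cycle (defaults to 8MB)
    sysctl vfs.zfs.l2arc_write_max
    # RAM currently consumed by L2ARC headers, and total data cached on the L2ARC
    sysctl kstat.zfs.misc.arcstats.l2_hdr_size
    sysctl kstat.zfs.misc.arcstats.l2_size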
 
To expand a bit on my previous post.

While the L2ARC is a great thing and certainly VERY useful for a lot of people, you need to ask yourself whether YOU would really benefit from it. L2ARC pays off when you have a huge data pool with a certain amount of "hot data" being accessed via random reads. An example would be a pool storing a grand total of 5TB of data, with, say, 80GB of that being hammered really heavily (for example a database). Unless your system has over 80GB of RAM, you obviously won't be able to cache all this hot data in RAM, meaning the slow mechanical disks holding your pool will be hammered pretty heavily. The L2ARC lets you cache this hot data on the SSD, greatly improving IOPS and reducing latency.
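
For completeness, attaching an SSD partition as L2ARC is a one-liner; the pool and device names here are invented for the example.

    # add a cache (L2ARC) vdev to an existing pool
    zpool add tank cache ad4p4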

But if what you are building is a NAS, L2ARC will do basically nothing for you. NAS storage usually means mostly large sequential reads and writes, which L2ARC almost completely ignores.
 
Thanks Jago.

Well, it's a home server, with no more than 15GB of accesses per day on average. Most of that will be media streaming; the only other activities will be svn, a Samba Windows share and a mail server.

Thinking about caches in general, I suppose one isn't terribly useful for how this machine will be used (as you rightly point out). The machine will have 4GB of RAM and a 3TB RAIDZ from 3x 1.5TB disks, which I believe is a fair amount for ZFS.

Given that I've had a significant number of hard drives die on me over the past decade, I thought it would be good to use an SSD as an OS disk, but equally I don't want a mechanical failure mode to be replaced by an electrical one.

Normally I have a hardware RAID 0 with everything on the same disk. ZFS appeals for many reasons, but it does leave a dilemma of where to put the OS.
 
Just to explain: my ZFS array (3TB) would see around 15GB of reads and about 2GB of writes per day on average (worst case). So if I had a 20-30GB L2ARC, it would take about two weeks to write through the cache before it wound back to the start (assuming the cache fills sequentially). Obviously fragmentation would worsen the effect, but I'll own up to not having done enough research on this. BUT, from what you say, in my case it seems rather pointless.

So if I didn't have the L2ARC, the only thing I'd have to worry about is wear from FreeBSD 'doing its thing' 24/7. I'm not sure how much writing FreeBSD does in the background, but I presume that with Samba file/print, Apache, MySQL, svn, mail, a streamer, auto backups and system utilities running all the time, logs and temporary data would churn a reasonable amount.

I plan to create an md at startup and copy the applicable parts of /var and /tmp into memory, then copy them back to disk at shutdown (overkill?). I presume /var/log, /var/mail and /tmp would be the worst offenders. I wouldn't want to use too much memory, possibly around 725MB. I suppose one would have to be careful about a system crash, but on the other hand does that really differ from now? Besides, the system has ECC RAM and is on a UPS, so I'm helping it as much as possible.
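
From what I can tell, rc.conf already has knobs for memory-backed /tmp and /var via mdmfs, so I may not even need to hand-roll the md; something like the below (the sizes are just what I'd try first). Note these build a fresh memory filesystem at every boot, so the copy-back-to-disk part at shutdown would still be a custom script of my own.

    # /etc/rc.conf -- memory-backed /tmp and /var
    tmpmfs="YES"
    tmpsize="256m"
    varmfs="YES"
    varsize="512m"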

Again, this is a home server, so it sits idle 75% of the time. When there are other users it's only mail and the media streaming app, from two of them; the rest is solely used by myself.
 
The SVN and the mail server would benefit from L2ARC, while media streaming and Samba sharing most likely wouldn't. But if your expected workload measures 15GB of reads per day, that sounds like a very low level of use, at which an L2ARC in the first place (even with a well-matched random-read workload) is serious overkill and not really needed at all.

I have recently built myself a ZFS NAS and was wondering about some of the same things you mention. I concluded that the best option for me would be a RAID10-style ZFS pool for everything. I started with 2x 2TB drives in a mirror configuration. As my storage and speed requirements rise, I will add another mirror vdev to the pool... and then another one, etc., essentially resulting in a pool that is "striped across mirrors" (there is a short command sketch after the list below). This offers good reliability thanks to the mirrors (in a 6-disk setup you can lose 3 disks, as long as it's 1 dead disk per mirror vdev) as well as high performance from the striping. This pool hosts everything and I also boot off it.

Pros:
Can be easily expanded by adding 1 (or more) mirror to the pool at will (you can't "grow" an existing RAIDZ vdev)
You can stripe together mirrors of different sizes (for example 2x 1TB + 2x 1.5TB + 2x 2TB)
Equal or better reads than RAIDZ (assuming a stripe of 2 mirrors or more)
MUCH better writes than RAIDZ
Higher reliability than RAIDZ

Cons:
Less usable space
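
To illustrate the expansion path, the commands are as simple as this (pool and device names invented):

    # start with a single 2-disk mirror
    zpool create tank mirror ad4 ad6
    # later: grow the pool by striping in another mirror vdev
    zpool add tank mirror ad8 ad10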
 
@embeddedbob

If you plan to use 3 disks, then you may use this setup:
http://daemonforums.org/showthread.php?t=4200

It will create a 512 MB RAID1 from space at the beginning of the disks and leave all the rest for ZFS.
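
Roughly, the idea from that thread is the following per-disk layout. This is a simplified, from-memory sketch with illustrative device names; follow the actual howto for the real steps.

    # on each of the three disks: boot code, a small partition for a
    # mirrored /boot filesystem, and the rest for the raidz
    gpart create -s gpt ad4
    gpart add -s 64k -t freebsd-boot ad4
    gpart add -s 512m -t freebsd-ufs ad4
    gpart add -t freebsd-zfs ad4
    # then mirror the small UFS partitions across all three disks
    gmirror label -v boot ad4p2 ad6p2 ad8p2
    newfs /dev/mirror/boot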

If you plan to use an external device for L2ARC, then you should also think about redundancy for it (RAID1 SSDs).

As for Samsung drives, be sure to get the newest Samsung Spinpoint F3 series (they are really fast for linear transfers and low on power at the same time).

You may also invest in the WD Black/RE3 series for better access times than the Samsungs, but with much lower linear transfers (the Samsung F3 has 500 GB platters, WD only about 340 GB).
 
vermaden said:
@embeddedbob
If you plan to use an external device for L2ARC, then you should also think about redundancy for it (RAID1 SSDs).
Not at all; this is completely unneeded.

Should a read/write error occur on the L2ARC vdev, the IO for that block is deferred to the "real" underlying storage where the data is held, and an error is logged. There is no interruption of any kind. Should the L2ARC vdev randomly up and die, it can be removed from the pool at will.
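
Removing a cache vdev really is that trivial; names below are invented.

    # detach the L2ARC device from the pool -- it only holds cached copies
    zpool remove tank ad4p4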

However, what you are saying does apply to a dedicated ZIL. As of right now, ZIL vdevs cannot be removed from a pool in FreeBSD (in OpenSolaris this capability was only added very recently; google "slog removal"). This can cause a serious problem: should your ZIL vdev die, you end up with a pool that can no longer be imported on reboot, and you can't fix it because you cannot remove the dead vdev from the pool. This is why redundancy is highly recommended for dedicated ZIL vdevs. It does not, however, apply to L2ARC at all.
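
So if you do go for a dedicated ZIL, mirror it from the start, e.g. (hypothetical device names):

    # add a mirrored log (ZIL) vdev so a single dead SSD cannot strand the pool
    zpool add tank log mirror ad4p3 ad6p3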
 
vermaden said:
@embeddedbob
As for Samsung drives, be sure to get the newest Samsung Spinpoint F3 series (they are really fast for linear transfers and low on power at the same time).
Truth be told, with the kind of load and usage pattern described by the OP, basically any modern disk model is good enough :)

Here would be my choice of disks depending on criteria:

cheap and fast: Samsung Spinpoint F3 (1TB is the biggest size available)
cheap and big: Seagate Barracuda 7200.11 1.5TB

huge and fast: Western Digital Caviar Black 2TB
huge and cheap: Western Digital Caviar Green 2TB*

*Note: get the newer model with the 64MB cache and not the older 32MB one; the newer models are much faster. Additionally, it may be a good idea to run WDIDLE on these drives before putting them into a RAID array of any sort, to disable the automatic head parking every 8 seconds that is enabled by default on these disks.
 
cheap and big: Seagate Barracuda 7200.11 1.5TB
Stay away from Seagate 7200.11 series drives (some ES and Maxtor drives are also involved): there was a BIG issue with the 7200.11 firmware, making drives totally inaccessible with all data lost. Other series (like the 7200.12) are slower but do not have this issue.
http://news.cnet.com/seagate-fixes-7200.11-drives-except-when-it-doesnt/

huge and cheap: Western Digital Caviar Green 2TB*
The WD Caviar Green drives, as you say, also have an 'interesting' issue: they park the heads after 8 seconds (!) of idle, while a lot of disk accesses come at roughly 10-30 second intervals (flushes/atime/etc.). So at about 20,000 load/unload cycles monthly you will hit the 300,000-cycle safe limit (specified on the WD site) in about a year. In short, you are risking losing all data on that disk.

WD released a utility (officially only for the Green RE drives), and people report that it sometimes also works for the 'consumer' drives, but the problem still remains...

I would avoid these as far as possible. I actually had one, but sold it and got 3x Samsung F3 1TB drives instead.
 
vermaden said:
Stay away from Seagate 7200.11 series drives (some ES and Maxtor drives are also involved): there was a BIG issue with the 7200.11 firmware, making drives totally inaccessible with all data lost. Other series (like the 7200.12) are slower but do not have this issue.
http://news.cnet.com/seagate-fixes-7200.11-drives-except-when-it-doesnt/
Emphasis on was :) Yes, that firmware bug was scary, but it has since been fixed; all new drives have been shipping with fixed firmware for quite some time now, and the drives are in a good place price/performance/size-wise.
vermaden said:
The WD Caviar Green drives, as you say, also have an 'interesting' issue: they park the heads after 8 seconds (!) of idle, while a lot of disk accesses come at roughly 10-30 second intervals (flushes/atime/etc.). So at about 20,000 load/unload cycles monthly you will hit the 300,000-cycle safe limit (specified on the WD site) in about a year. In short, you are risking losing all data on that disk.

WD released a utility (officially only for the Green RE drives), and people report that it sometimes also works for the 'consumer' drives, but the problem still remains...
I've had to deal with this personally, as I have 2x 2TB WD Green drives. Yes, the IntelliPark tech can cause quite an annoyance if these drives are used in a RAID configuration. Not only is the behavior unhealthy for the heads, the resulting noise was nearly driving me up the wall. However, all you have to do is make a boot disk (a USB flash stick will do), put WDIDLE on it, boot off the stick, run the tool and you're done. No more head parking every 8 seconds, no more rapidly increasing Load_Cycle_Count and no more crazy noise.
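
For anyone else doing this: from memory the DOS tool is wdidle3.exe, with /R to report the current idle timer and /D to disable it; double-check the switches against WD's documentation before running it. Afterwards you can confirm the parking has stopped with smartmontools:

    # Load_Cycle_Count should stop climbing once the idle timer is off
    smartctl -A /dev/ad4 | grep Load_Cycle_Count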

Considering the speed/price/size ratio of the drives, I cannot omit them from my recommendation list even with this issue, but obviously you could get a better drive by going with the Black or RE4 models. You will just have to pay twice as much.
 
Jago said:
Truth be told, with the kind of load and usage pattern described by the OP, basically any modern disk model is good enough :)

Here would be my choice of disks depending on criteria:

cheap and fast: Samsung Spinpoint F3 (1TB is the biggest size available)
cheap and big: Seagate Barracuda 7200.11 1.5TB

huge and fast: Western Digital Caviar Black 2TB
huge and cheap: Western Digital Caviar Green 2TB*

*Note: get the newer model with the 64MB cache and not the older 32MB one; the newer models are much faster. Additionally, it may be a good idea to run WDIDLE on these drives before putting them into a RAID array of any sort, to disable the automatic head parking every 8 seconds that is enabled by default on these disks.

Is it too late to mention that it was always going to be the :)
Samsung HD154UI EcoGreen F2 1.5TB SATA-II 3.5" Hard Drive - OEM

I already have one and there are some good deals: £138 for two drives. It's the most cost-effective solution for me, and I've had too many problems with Seagate drives in the past 2 years. I've never had problems with WD and have a MyBook as a backup device (via eSATA).

As I understand it, TLER only comes into play when the drive is having problems and trying to recover from errors (remapping sectors). If that isn't allowed to complete, surely the drive still has problems or is terminal? In that case I think I would rely on my backup or the ZFS mirror/RAIDZ. Or am I missing something?

In any case I've had a look on the net and mostly not found anything too negative about the Samsung 1.5TB HD154UI (32MB). The drive I already have is quiet and, so far (~9 months), reliable :)
 
Jago said:
The SVN and the mail server would benefit from L2ARC, while media streaming and Samba sharing most likely wouldn't. But if your expected workload measures 15GB of reads per day, that sounds like a very low level of use, at which an L2ARC in the first place (even with a well-matched random-read workload) is serious overkill and not really needed at all.

I have recently built myself a ZFS NAS and was wondering about some of the same things you mention. I concluded that the best option for me would be a RAID10-style ZFS pool for everything. I started with 2x 2TB drives in a mirror configuration. As my storage and speed requirements rise, I will add another mirror vdev to the pool... and then another one, etc., essentially resulting in a pool that is "striped across mirrors". This offers good reliability thanks to the mirrors (in a 6-disk setup you can lose 3 disks, as long as it's 1 dead disk per mirror vdev) as well as high performance from the striping. This pool hosts everything and I also boot off it.

Pros:
Can be easily expanded by adding 1 (or more) mirror to the pool at will (you can't "grow" an existing RAIDZ vdev)
You can stripe together mirrors of different sizes (for example 2x 1TB + 2x 1.5TB + 2x 2TB)
Equal or better reads than RAIDZ (assuming a stripe of 2 mirrors or more)
MUCH better writes than RAIDZ
Higher reliability than RAIDZ

Cons:
Less usable space

What you're saying is good advice and I *should* take it... Your post did cause me to stop and think ;)

Problem is, I need 3TB (well, not at the moment, but I want this system to last 2 years). I also already have a rack case with a 3x SATA disk carrier that I want to keep (saving £ again).

In both respects 3 disks make sense, resulting in a 3TB RAIDZ. This is also why an SSD made sense for the OS disk, as it's easily mountable in the rack case (anything for an easy life ;) ). This leads me on to...

vermaden said:
@embeddedbob

If you plan to use 3 disks, then you may use this setup:
http://daemonforums.org/showthread.php?t=4200

It will create a 512 MB RAID1 from space at the beginning of the disks and leave all the rest for ZFS.

If you plan to use an external device for L2ARC, then you should also think about redundancy for it (RAID1 SSDs).

As for Samsung drives, be sure to get the newest Samsung Spinpoint F3 series (they are really fast for linear transfers and low on power at the same time).

You may also invest in the WD Black/RE3 series for better access times than the Samsungs, but with much lower linear transfers (the Samsung F3 has 500 GB platters, WD only about 340 GB).

See above re F3.

The link to booting from 3 disks with a RAIDZ is HUGELY useful! I'm going to get the hardware and try this setup first, then test the failure cases, so that if it goes wrong I'm OK with putting it right :)


Big thanks to you both for all this advice! :beergrin
 