Doubts on ZFS

Maybe I wrote something odd. Obviously you noticed English is not my first language, so I am sorry.

By "private" I mean a single person, not a company, such as I. I don't have a tax ID right now and I'd like not to ask for one only for that.

@peetaur: I just got the SuperMicro SuperChassis SC847E16-R1400UB. Since yours should be similar, I'd like to ask you two things.
1) I'm not familiar with server chassis, but this one claims to have a redundant power supply (there are two standard, separate power plugs for 230V; you might have 120V). Do I have to connect both of them, or does their "redundancy" consist of "when one dies, just swap the cables"? I have yet to read the whole manual.
2) The chassis rear looked strange to me: where normal PCs have a small I/O shield (specific to each motherboard, supplied with it), here there is just a fixed piece of steel exposing 2x PS/2 ports, 2x VGA/serial ports and 2x LAN ports. There doesn't seem to be a way to remove it other than unscrewing the whole back section (except where the PCI/PCIe slots are). Does this ring an alarm bell only for me, or is it normal? Will a complete back section for the rear of the chassis be supplied with a Supermicro motherboard?

Thank you very much to all of you guys.
You've been very helpful.
 
luckylinux said:
Do I have to connect both of them, or does their "redundancy" consist of "when one dies, just swap the cables"? I have yet to read the whole manual.

I didn't read the manual either, or put it together. But if I pull either power plug, it beeps at me and stays powered on, with no interruption. So it is redundant while it is running, with automatic failover. This is normal and expected with all our dual power supply servers.

luckylinux said:
2) The chassis rear looked strange to me: where normal PCs have a small I/O shield (specific to each motherboard, supplied with it), here there is just a fixed piece of steel exposing 2x PS/2 ports, 2x VGA/serial ports and 2x LAN ports. There doesn't seem to be a way to remove it other than unscrewing the whole back section (except where the PCI/PCIe slots are). Does this ring an alarm bell only for me, or is it normal? Will a complete back section for the rear of the chassis be supplied with a Supermicro motherboard?

I haven't done any work like that on the chassis, so I can't help you there. I get the servers pre-built, and never asked for a non-Supermicro board, and always got one.
 
&quot said:
I didn't read the manual either, or put it together. But if I pull either power plug, it beeps at me and stays powered on, with no interruption. So it is redundant while it is running, with automatic failover. This is normal and expected with all our dual power supply servers.
So you connected one plug to each power supply, and plugged the other end of both cables into a UPS or wall outlet, correct?

&quot said:
I haven't done any work like that on the chassis, so I can't help you there. I get the servers pre-built, and never asked for a non-Supermicro board, and always got one.
I too plan to use a Supermicro board. However, not all boards are the same (even Supermicro's). Some have 3 LAN ports, some 2, others even 4. Some now offer USB 3.0 as well. What I mean is that each motherboard (at least for PCs) has its own I/O shield, but this server chassis, instead of the "hole" where you normally fit the I/O shield, has a fixed piece of steel. See this for details.



According to their site, this chassis should be compatible with any ATX/E-ATX motherboard. I now wonder if they ship a complete back I/O shield which also covers the vent holes, since a standard I/O shield won't fit. Should I ask Supermicro?

Thank you again for your support.
 
OK. It seems this one is cheap because you end up paying more for extras later.
Furthermore, it seems my motherboard choices are limited to the ones with UIO support.
This implies two things:
a) I need to buy (somewhat expensive) UIO (2U) riser cards, since PCIe cards are mounted perpendicular to the motherboard's normal PCIe insertion direction (or, if you prefer, the PCIe cards sit parallel to the motherboard's plane);
b) only UIO motherboards are supported: this means the ports I described in my previous posts are the only ones that can be used.

I'm a bit disappointed by this; however, this was a really cheap chassis (< $700) compared to $1600 on Amazon (same model) or $2200 (LPB model, same as peetaur's). Even if I have to buy two $100 PCIe risers, it will still be a good deal. The speed, however, will be limited to x8.
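That x8 limit shouldn't matter much in practice, though. A rough back-of-the-envelope (assuming PCIe 2.0 at roughly 500 MB/s usable per lane after encoding overhead): 8 lanes x 500 MB/s ≈ 4 GB/s per direction, whereas 36 disks streaming at ~150 MB/s each would be ~5.4 GB/s. So x8 only becomes a bottleneck with every drive reading sequentially at full speed at once, which is far beyond anything I'll actually need at home.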
I contacted Supermicro support and asked whether they plan to support UIO on the LGA2011 series. Otherwise I think I'll go with the G34 socket and a couple of not-too-expensive AMD Interlagos CPUs.

I'll keep you updated on the developments and on Supermicro's answer. I even asked them whether they can suggest an alternative configuration to LGA2011. We'll see if they reply.
 
It seems it would be quite tricky to actually fit a motherboard inside this chassis without it costing too much. A more practical solution would be to do something like what they suggest here.

I think that may be better, since it will offer me a larger choice of motherboards. I already own a $100 E-ATX/ATX/microATX case (Fractal Design Define XL) and can definitely put the actual server inside it, while using the other chassis as an "HDD container".
Still, I'd need a pair of SAS controllers to get "decent" performance and, again, a pair of SAS expanders. I don't really understand how they did it, but the HP SAS Expander from the article has a port on its back.

In the other chassis, what should I put? (I mean: in the Supermicro chassis I just ordered I'd put the backplane they suggest and the HP SAS Expander, but in the Fractal Design, what should I put besides motherboard, CPU, RAM and SAS controllers?) It doesn't necessarily have to be the HP SAS Expander (that was just to illustrate the idea): one or two cards to connect the two chassis together, without any MB/CPU/RAM in one of them (basically it does nothing but power and house the HDDs).
I'm not looking for very high bandwidth (though it won't hurt): let's say 100-200 MB/s of HDD reads/writes should be plenty.
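To sanity-check that (assuming SAS2 at 6 Gbit/s per lane, about 600 MB/s usable after 8b/10b encoding): a single SFF-8087 wide port carries 4 lanes, so roughly 2.4 GB/s, which spread across 36 drives still leaves ~65 MB/s per drive with all of them active at once, and the full 2.4 GB/s when only a few are. Well above my target, if my numbers are right.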

The downside of this would be the non-redundancy of the server, but I can live with that at home. I'd appreciate it if anybody could clarify this point (as I tried to explain, based on the article).
 
I think I need to bump this, since my previous post probably wasn't read.

I repeat the question then: does ZFS suffer from a sudden power outage? I ask because, as things stand, I'd have a 36-bay HDD JBOD backed by a redundant PSU connected to two UPSes. The actual server hosting the CPU and RAM would sit in another case, powered by a non-redundant PSU (also connected to a UPS). If this non-redundant PSU fails, what will happen? The HDDs will still be powered, but no longer connected through the SAS backplane. Does ZFS support that? (Surely it won't like it, but will it remain reliable?)
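From what I've read so far, ZFS being copy-on-write should mean the on-disk state stays consistent even if writes are cut off mid-flight, and there seems to be a pool property, failmode, that controls what happens when a pool suddenly loses all its devices. A sketch of what I'd expect to do ("tank" is just a placeholder pool name):

zpool get failmode tank    # default "wait": I/O blocks until the devices come back
zpool status -x            # check pool health once the backplane is reachable again
zpool clear tank           # resume a pool that was suspended by the outage

Can anyone confirm this is how it actually behaves?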

Another solution, which may be cheaper right now, is using these so-called UIO motherboards. Basically I could take a socket 1366 motherboard and a cheap quad-core Xeon (or an equivalent 8-core Interlagos) and put that inside the redundant-PSU case. Supermicro actually suggested an 8-port SAS controller with an external port for $600! Would connecting the motherboard's internal SAS port to the backplane work without any additional controller (or with a cheaper one)? Basically, is that one so expensive because it provides an external port and supports 250+ devices (which I don't need, the latter I mean)?

Here is (part of) the reply I got from Supermicro:
Currently your only choice would be using X8DTU-F mainboard with risercard RSC-R2UU-A4E8+ and RSC-R2UU-2E4R

NOTE : right side risercard can ONLY be used if CPU2 is not used

I'd say it's not so bad (I can buy a sub-$300 CPU, and a quad-core should be enough; I could even take the 3.3 GHz single-socket one, since I couldn't use socket #2 anyway). Still better than another case and PSU. I think the $200 controllers you suggested can manage up to 36 HDDs. And anyway there is only one (SAS) cable going to the backplane, so I don't even need 8 ports: 1 should be enough (if it can support up to 36 HDDs, IMHO).

What do you think? Could this work? Basically they just "wash their hands", saying that only their backplane with THEIR (or Adaptec's) controllers and only a few HDDs are supported. I think consumer drives should/may work; they just don't guarantee it, for obvious liability reasons.
 
@DutchDaemon: I just used bold font so that one wouldn't have to read the whole post to see whether he/she could answer it.

Anyway, I got another answer from Supermicro, which is quite strange to say the least:
X9DRi-LN4F+ (or any other X9D board) requires revision “M” chassis

The SC745TQ-R800B is not validated (PSU is not tested) but you can use CSE-745TQ-R920B which is officially supported

And
For X9D mainboard Series (any form factor) revision “M” is indeed mandatory, any older chassis revision is 100% not compatible

There are differences in mainboard mounting, heatsink mounting and powersupply revisions

Now, I can understand that EE-ATX boards won't fit into a normal big tower case, but they said even ATX and E-ATX boards require a revision "M" chassis.
That's quite odd to me because, as far as I know, ATX and E-ATX are standard form factors with standard mounting holes.

In the end, not only should I go with their E-ATX or EE-ATX chassis, but they require an EE-ATX case of a specific revision (which costs 50% more) and which will be obsolete in a few years. What are they doing, changing the mounting holes' placement every 2 years? Or is this "only" for optimized airflow or something like that?

Is anyone using "standard" (consumer) E-ATX cases for their MBs? Which one, and with which MB? Unfortunately I can't get Tyan motherboards, since they're not sold in my country. The last possibility would be a dual-socket AMD G34 board from ASUS, but their quality is probably not as good as SM's (I think).
 
luckylinux said:
@DutchDaemon: I just used bold font so that one wouldn't have to read the whole post to see whether he/she could answer it.

Ok, then I suggest you either write more succinctly or use a "tl;dr" type summary or list of questions near the end.
 
luckylinux said:
Now, I can understand that EE-ATX boards won't fit into a normal big tower case, but they said even ATX and E-ATX boards require a revision "M" chassis.
That's quite odd to me because, as far as I know, ATX and E-ATX are standard form factors with standard mounting holes.
This is just a guess, but if the X9D boards use a newly-introduced CPU family, there simply may not be any matching holes anywhere on the older chassis that can accommodate the new heatsink mounting points.
 
Terry_Kennedy said:
This is just a guess, but if the X9D boards use a newly-introduced CPU family, there simply may not be any matching holes anywhere on the older chassis that can accommodate the new heatsink mounting points.
Looks like you're right :(

I also contacted Noctua and got a reply very fast today.
Here it is, in case anybody else was wondering about the same thing.

The Xeon coolers for LGA1366 are not compatible with the LGA2011 due to a completely different mounting specification.

For the X9DR3-F we have no solution at all, because the mainboard has the LGA2011 socket with the Narrow ILM. For the X9DRi-LN4F+ the situation is better, as you can basically use all standard coolers from our lineup in combination with the NM-I2011 Mounting-Kit.

Unfortunately I'm (a bit of) a noob with server motherboards. Strange that the heatsink standoffs aren't on the motherboard but on the chassis :S. Is this for added mechanical stability? On desktop motherboards the heatsink mount was always on the motherboard (along with a backplate for added stability). I guess server MBs really are a whole other class, then.
 
luckylinux said:
Is anyone using "standard" (consumer) E-ATX cases for their MBs? Which one, and with which MB? Unfortunately I can't get Tyan motherboards, since they're not sold in my country. The last possibility would be a dual-socket AMD G34 board from ASUS, but their quality is probably not as good as SM's (I think).

Lian Li makes "consumer" (but extremely high quality) cases for the E-ATX and HPTX standards, like this one. Regarding which MB: Supermicro does make a dual-socket C32 MB. I'm actually working on my home ZFS build and planning on using the Supermicro board. On the last AMD server build I did, the backplate secured the heatsink, while on the Intel build I did for work, standoffs on the chassis secured it. The difference: one was a rack mount (work) while the other was a home build (pedestal).

Either way, if you can hang on for a couple of days: I'm actually in the process of ordering a dual-C32 Supermicro MB, so at minimum I can tell you what the heatsink backplate looks like.
 
BTW, that ASUS board is EEB. The matching Lian Li case for EEB would be this one. Very nice, with hot-swappable drive bays, but it ain't cheap, that's for sure. Either way it will fit, and you won't have to worry about special Supermicro changes to the backplate.

In addition, I'm planning on going dual-socket C32. It is cheaper than dual G34 (by about $100 on the MB, and far cheaper on the processors) and comes in the smaller ATX format, which means just about any ATX case will fit it. You can still go up to 128GB of RAM. You do forfeit quad-channel memory for dual-channel, but for my (home) build I think I can make do with dual channel. We'll see :)
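Rough numbers on that trade-off, assuming DDR3-1333 (PC3-10600, about 10.7 GB/s peak per channel): dual-channel gives roughly 21 GB/s per socket versus roughly 43 GB/s for quad-channel. For a home ZFS box pushing at most a few hundred MB/s over the network, either should be far more than enough.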
 