ZFS - Best partitioning scheme between SSD and Caviar Red

cchamberlain said:
Wow, all I can say about that is holy crap. May I ask what your primary usage is? Website?
Hobby usage. All 128TB are available to my web servers, though most of the data isn't used on any of my web pages. A lot of it is large datasets relating to my other hobby - auto racing. I've got engine/vehicle performance, actual wind tunnel performance, and simulations of all of the above. Also lots of high-res laser scans of the car for 3D modeling. Sorry, no porn.

Work is different - substantial local computations on large(r) datasets, so CPU performance is also important.

By PCI-E SSD, is that an mSATA or something else?
Something else. OCZ Enterprise Velodrive DC-HHPX8-320G - specs here. I picked up a number of them inexpensively since they were just EOL'd in the last few months.
 
Terry_Kennedy said:
Hobby usage. All 128TB are available to my web servers [...] Something else. OCZ Enterprise Velodrive DC-HHPX8-320G - specs here. I picked up a number of them inexpensively since they were just EOL'd in the last few months.

Sounds like you are in big data. Those drives have some nice read speeds.
 
kpa said:
Do not use glabel(8) for labeling disks or partitions on GPT-partitioned disks. GPT has its own labeling system that is superior in many ways. Also, labeling whole disks does not make sense if you want to identify individual partitions by easy names.

After creating the partitions as you did above:

# gpart modify -l swap1 -i 2 ada1
# gpart modify -l swap2 -i 2 ada2

# gpart modify -l disk1 -i 3 ada1
# gpart modify -l disk2 -i 3 ada2

Run these to force a GEOM "retaste" so the labels become visible under /dev/gpt immediately:

# true >/dev/ada1
# true >/dev/ada2

You can see the labels in the output of

# gpart show -l

Then you can use the names gpt/swap1 and gpt/swap2 to build a gmirror(8) for swap:

# gmirror label myswap gpt/swap1 gpt/swap2

And build the ZFS pool using the names gpt/disk1 and gpt/disk2:

# zpool create mypool mirror gpt/disk1 gpt/disk2

The bootcode is written with these commands, on both disks:

# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
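To actually use the swap mirror after a reboot, also load the mirror module at boot and point fstab at the new device, something like this (a sketch, using the myswap name from above):

# echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# echo '/dev/mirror/myswap none swap sw 0 0' >> /etc/fstab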


This was very helpful for getting the labels correct; however, I ran into an issue when mirroring the disks. I first tried the command you gave, but it didn't work because I already have a pool up and I'm trying to add a mirror to it (per usdmatt's instructions on page 1).

# zpool attach zroot gpt/disk1 gpt/disk2
Code:
cannot attach gpt/disk2 to gpt/disk1: no such device in pool

I also tried:

# zpool attach zroot disk1 disk2
Code:
cannot open 'disk2': no such GEOM provider
must be a full path or shorthand device name

I'll probably just wipe the disks again tonight and start over; at least I'm learning. :)

If I'm doing something glaringly wrong, please let me know. A few more questions:
  • Am I setting up the swap mirror in the typical fashion, or is there a better way to do it?
  • Does the ordering of my SATA connections on the motherboard matter? If I switch the connections, will the disks stay ada1 and ada2?
  • If I'm understanding correctly, the actual labels are put on the last sector of the disks, but are the ada1/ada2 GEOM names also written to the disks?
Thanks again for bearing with me.
 
cchamberlain said:
# zpool attach zroot gpt/disk1 gpt/disk2
Code:
cannot attach gpt/disk2 to gpt/disk1: no such device in pool

To use those GPT labels that were assigned, give an absolute, not relative path to them:
# zpool attach zroot /dev/gpt/disk1 /dev/gpt/disk2
 
wblock@ said:
To use those GPT labels that were assigned, give an absolute, not relative path to them:
# zpool attach zroot /dev/gpt/disk1 /dev/gpt/disk2

Tried it and got:
Code:
cannot attach /dev/gpt/disk2 to /dev/gpt/disk1: no such device in pool
I did a gpart show -l and noticed that what used to be the GEOM ada1 device now shows up as ufsid/5162c4e28540c74a. Not sure how that happened, but I'm about ready to reformat; it feels like something is screwed up.

Another question: if I'm mirroring the disks and mirroring the swap, am I not supposed to mirror the boot partition as well?
 
@wblock@, the relative names work fine; all geom(8) utilities support shortcut names without the leading /dev.

Back to the problem: if you're adding a new mirror vdev to an existing pool, the correct command is # zpool add.

# zpool add zroot mirror gpt/disk1 gpt/disk2

However, I have to ask: how does the pool look without the mirror that is going to be added? If it's currently a single-disk pool, adding a mirror vdev to it does not make much sense; the resulting pool wouldn't have full redundancy for all of the data stored.
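To illustrate with a hypothetical layout (names made up for the example), a single-disk pool that has had a mirror vdev added would show up in # zpool status roughly like this, with the original gpt/disk0 left without any redundancy:

Code:
        NAME           STATE
        zroot          ONLINE
          gpt/disk0    ONLINE
          mirror-0     ONLINE
            gpt/disk1  ONLINE
            gpt/disk2  ONLINE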
 
kpa said:
However, I have to ask: how does the pool look without the mirror that is going to be added? If it's currently a single-disk pool, adding a mirror vdev to it does not make much sense; the resulting pool wouldn't have full redundancy for all of the data stored.

It's pretty much just a fresh install. This is what I was thinking might end up happening; I wasn't sure whether adding the mirror copied existing data or not, so that clears it up.

Sounds like the smartest move at this point would be to wipe the drives and start over clean, right?
 
usdmatt said:
You can start with a single disk, create file systems, put data on it, and then convert to a mirror without any problem.

Code:
# zpool create pool disk1 (single disk)
(you can start adding filesystems/data now)
# zpool attach pool disk1 disk2 (mirror)
# zpool add pool mirror disk3 disk4 (2 mirrors)
# zpool add pool mirror disk5 disk6 (3 mirrors)

I was going off these instructions that usdmatt posted on the first page. I was under the impression that using add on an existing pool would stripe the drives. He specifically mentioned not to use add when adding a second drive to form a single mirror, since it can't be undone (quite possibly I misunderstood something here).
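If I've understood the man pages correctly, the distinction goes something like this (pool and device names are placeholders):

# zpool attach pool gpt/disk1 gpt/disk2

turns gpt/disk1 into one side of a mirror and can be reversed later with # zpool detach pool gpt/disk2, whereas

# zpool add pool mirror gpt/disk3 gpt/disk4

creates a new top-level vdev, and as far as I can tell a top-level vdev (other than a log or cache device) can't be removed again once added.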
 
cchamberlain said:
It's pretty much just a fresh install. This is what I was thinking might end up happening; I wasn't sure whether adding the mirror copied existing data or not, so that clears it up.

Sounds like the smartest move at this point would be to wipe the drives and start over clean, right?

You're not offering much information, and most of what you offer contradicts itself.

What kind of pool do you want to build? That's the first thing you have to make very clear to yourself and state very clearly in your posts. Otherwise we can't offer any reliable instructions.
 
kpa said:
You're not offering much information, and most of what you offer contradicts itself.

What kind of pool do you want to build? That's the first thing you have to make very clear to yourself and state very clearly in your posts. Otherwise we can't offer any reliable instructions.

Sorry if I've made things complicated; bad habit of typing what I'm thinking. First and foremost, my goal is to learn the ins and outs of FreeBSD as fast as possible via trial and error. I know what I want to do with the server, but I'm still not locked down on the best configuration to get there (because I'm jumping headfirst into ZFS). I'll try to keep the questions a little more focused.

As I've said, my end-state usage is very clear: a mix of home media and file server as well as a personal web server (mostly intranet usage - usenet applications, SABnzbd, CouchPotato, SickBeard, etc.). I want to get this mirror going first, and then in a few weeks or so I'll likely wipe it out and go RAIDZ1 when I pick up a third drive. Again, the goal is to learn the gotchas of ZFS and improve on my final build. What's the point of redundancy if you accidentally wipe out all your data by typing the wrong command, right? I should have made that clearer from the start.
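For reference, once the third drive is in, my understanding is that creating the RAIDZ1 pool from three labeled partitions is a one-liner (pool and label names hypothetical):

# zpool create tank raidz1 gpt/disk1 gpt/disk2 gpt/disk3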

These are the primary questions I'm still researching:
  • What is the best practice for setting up boot and swap partition (slice?) mirroring on ZFS? If swap should be mirrored, why not boot?
  • How should the 4K alignment be done on my drives? (See the sketch just below.)
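From what I've read so far, the commonly cited way to force 4K alignment (ashift=12) on FreeBSD is the gnop(8) trick; a sketch with hypothetical pool and label names:

# gnop create -S 4096 /dev/gpt/disk1
# zpool create mypool mirror /dev/gpt/disk1.nop /dev/gpt/disk2
# zpool export mypool
# gnop destroy /dev/gpt/disk1.nop
# zpool import mypool

The pool remembers the larger sector size after the import; # zdb -C mypool | grep ashift should then show ashift: 12.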

And yes, I realize most of this is on the internet, and I have read quite a few pages/forums on the 4K stuff alone. It's taking a while to process everything, since it seems like a lot has changed in the install process that is not reflected in the top (official-looking) hits on Google, leading to a lot of confusion over contradictory information.

Please let me know if anything is unclear at this point, and thanks for all the help so far!
 
wblock@ said:
Please expand on that--do you mean SSD support in the installer, or TRIM support in ZFS, or something else?

Hi Warren,

I'm booting from a conventional GEOM mirror. Some time soon I might get game enough to turn that into a ZFS mirror.

I was mostly referring to TRIM support for SSDs in general, and ZFS in particular. Following all the threads on the 4K block thing has taken time too.

I know that ZFS is self-levelling to some extent. But sorting out the issues takes time and effort.

Cheers,
 
If I've got this right, you started with one disk and you want to attach another disk to create a mirror vdev? In that case the proper procedure is # zpool attach, and yes, the existing data will be replicated onto the newly attached disk.

It goes something like this:

Partition the disks the way you already did.

Create the pool with one disk, assuming the partition on the first disk is labeled gpt/disk1:

# zpool create pool gpt/disk1

Now fill the pool with data.

Then attach the second disk, or more precisely the partition gpt/disk2 on it, to the pool:

# zpool attach pool gpt/disk1 gpt/disk2

This should give you exactly what you want, with the existing data replicated on the second disk and ZFS handling the redundancy for you in a completely transparent way.
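The copy (resilvering) runs in the background. You can watch its progress with:

# zpool status pool

The scan: line shows the resilver progress while it runs, and the completion summary once it's done.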


Of course you could create the pool with a mirror vdev at the very start, but this is more of a proof that you can always turn a single-disk ZFS pool into a mirrored one.
 
Thanks @kpa, I finally got around to repartitioning the system (I ended up going for a clean install). I followed @usdmatt's advice, used @vermaden's guide to a T, and added my SSD as a cache device. My setup now looks like this:
# zpool status
Code:
  pool: sys
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        sys           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0
            gpt/sys2  ONLINE       0     0     0
        cache
          ada0        ONLINE       0     0     0
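(For anyone finding this thread later: a cache device like that can be added to an existing pool with # zpool add sys cache ada0.)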

I'm happy with this setup for now, but down the line, if I get more drives and go RAIDZ1, would I be able to use the dump/restore commands to back up the pertinent data elsewhere, reformat the system, install as RAIDZ1, and restore the data? Or would I be better off using beadm to back everything up elsewhere?

If anyone has done this, I'm just looking for the simplest process; perhaps there is an up-to-date thread on this somewhere that I haven't come across yet.
 
kpa said:
dump(8) does not work with ZFS. You'll have to use # zfs send / # zfs receive to create and restore backups, or use net/rsync for the same purpose.

Okay, thanks, I'll take a look at those. This thread has gotten me to a good point; from my perspective, you're good to mark it as solved. Appreciate all the help everyone has thrown in!
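For anyone who lands on this thread later, the send/receive route looks something like this (a sketch, assuming a second pool named backup is available to receive into):

# zfs snapshot -r sys@migrate
# zfs send -R sys@migrate | zfs receive -F backup/sys

After rebuilding the pool as RAIDZ1, the snapshot can be sent back the same way in the other direction. The -R flag replicates the whole dataset tree with its properties, and -F lets the receiving side be rolled back to match the incoming stream.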
 