Solved: Quick zpool setup question

Hello,

Pardon a very stupid question, but I searched beforehand and couldn't make out the exact answer = \
Anyways,

I have 4 new drives. I am trying to format them with 4k alignment, then group them into one RAID-Z vdev, and then either attach them to an existing pool or create a new pool from them.
Code:
gpart create -s GPT ad0 (repeat n times for all drives)
gpart add -t freebsd-zfs -a 4k ad0 (repeat n times for all drives)
zpool create pool-name-here raidz ad0 ad1 ad2 ad3
or if I am attaching
zpool add pool-name-here raidz ad4 ad5 ad6 ...

Then if I want to add facilities like a SLOG, for example, I don't format those, correct? So I would just do
zpool attach pool-name-here log sdb1

Then make my file systems etc. with zfs create

And to mount this at boot time, I add zfs_enable="YES" to rc.conf and zfs_load="YES" to loader.conf, correct?

Could you please let me know if I am missing something? I've always done this automagically at installation; now I'm taking it seriously and doing it manually.
 
Don't add a partition table or partitions. Let ZFS use the whole drives.
 
SirDice

But what about 4k alignment? I am running 11.0-RELEASE on that machine (fresh install on a UFS-formatted SSD). Does ZFS do this automatically these days?

Also, a side question, since I have my entire OS on that disk: if the system/system disk goes down (due to user error, haha), I should just be able to wipe that drive, and the zpool will be detected by any other system that knows ZFS and should be able to reform it immediately, correct?

p.s. sorry I didn't post in Storage right away. I thought this question was too basic for that section = \ But point taken, I'll post all ZFS/storage matters here next time.
 
Where did sdb1 come from? That's a Linux thing, isn't it?

Also, you should be using /dev/adaX for disk devices these days. The old adX devices were scrapped years ago; they still exist for legacy purposes, but they just point to the relevant ada device.

As far as I'm aware there isn't really any problem with using partitions with ZFS on FreeBSD. If you're only using the disks for ZFS and not booting off them, you may as well use the entire disk, as it keeps things simpler and alignment isn't a problem: the very start of the disk is aligned anyway. FreeBSD does identify most 4k disks these days, although I would probably set # sysctl vfs.zfs.min_auto_ashift=12 manually before making the pool just to make sure ZFS uses 4k sectors. To confirm, you can run # zdb -l /dev/adaX on one of the pool disks (or partitions, if you used partitions) and make sure the ashift is 12.
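To put that together, here's a minimal sketch of the whole-disk approach; the pool name pool-name-here and the devices ada0 through ada3 are just placeholders, so adjust them to match your system.
Code:
# sysctl vfs.zfs.min_auto_ashift=12
# zpool create pool-name-here raidz ada0 ada1 ada2 ada3
# zdb -l /dev/ada0 | grep ashift
<the last command should report ashift: 12>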

If you do partition the disks using gpart(8) like in your first example, you will have a new device for each partition. Seeing as you're only adding a ZFS partition, you should see ada0p1. If you add further partitions they will be ada0p2 and so on. If you use partitions, make sure you use the partition devices when making the pool. If you go and use the raw disk device like in your example commands, ZFS will use the whole disk and clobber your partition table.
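For completeness, a sketch of the partitioned variant, again assuming four disks ada0 through ada3 and a placeholder pool name; note that the pool is built from the adaXp1 partitions, not the raw disks.
Code:
# gpart create -s GPT ada0
# gpart add -t freebsd-zfs -a 4k ada0
<repeat the two commands above for ada1, ada2 and ada3>
# zpool create pool-name-here raidz ada0p1 ada1p1 ada2p1 ada3p1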

I don't believe you actually need zfs_load="yes" in /boot/loader.conf unless you are booting off ZFS, although it can't hurt to have it in there.
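If you'd rather set those from the command line than edit the files by hand, something like this should do the job (device-independent, just the rc.conf/loader.conf settings mentioned above):
Code:
# sysrc zfs_enable="YES"
# echo 'zfs_load="YES"' >> /boot/loader.conf
<the loader.conf line is only needed if you boot from ZFS>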

I'm not sure what you mean about wiping a disk and pool being detected on another system in your second post. That sentence doesn't make any sense to me...
 
usdmatt

You are correct on all the disk device names; I was just typing in a hurry on my lunch break = ]
My apologies for confusion.

Anyways, the last line was basically me asking this:

I will have a storage pool, and a system drive.
The system drive is formatted with UFS. The entire FreeBSD system, ports, swap, and boot reside there. I boot only from that drive.
The zpool will be purely the data I am serving to clients.

So, if the system drive fails catastrophically for any reason, I will not lose any data on the zpool? As far as I understood, this is the case. Which means I could just replace the entire system drive and then import my zpool back, correct?

In the same train of thought, if I were to pull the entire zpool and relocate it to a different machine that has ZFS of the same version or better (SmartOS, OmniOS, whatever), that system should just be able to detect this pool, import it, and bring it online like nothing happened, correct?
 
So, if the system drive fails catastrophically for any reason, I will not lose any data on the zpool? As far as I understood, this is the case. Which means I could just replace the entire system drive and then import my zpool back, correct?
Yes.

In the same train of thought, if I were to pull the entire zpool and relocate it to a different machine that has ZFS of the same version or better (SmartOS, OmniOS, whatever), that system should just be able to detect this pool, import it, and bring it online like nothing happened, correct?
Yes.
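As a rough sketch of how that move would look, with pool-name-here as a placeholder pool name:
Code:
# zpool export pool-name-here
<on the old system, if it is still alive; skip if it isn't>
# zpool import
<on the new system; lists the pools found on the attached disks>
# zpool import pool-name-here
<use zpool import -f pool-name-here if the pool was not cleanly exported>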
 
Hello,

Pardon a very stupid question, but I searched beforehand and couldn't make out the exact answer = \
Anyways,

I have 4 new drives. I am trying to format them with 4k alignment, then group them into one RAID-Z vdev, and then either attach them to an existing pool or create a new pool from them.
Code:
gpart create -s GPT ad0 (repeat n times for all drives)
gpart add -t freebsd-zfs -a 4k ad0 (repeat n times for all drives)
zpool create pool-name-here raidz ad0 ad1 ad2 ad3
or if I am attaching
zpool add pool-name-here raidz ad4 ad5 ad6 ...

You partitioned the drives, then used the full drives in the pool creation. You need to pick one: partition the drives and use the partitions, or don't partition the drives and use the whole drives.

Most likely, you want to use: zpool create pool-name-here raidz ad0p1 ad1p1 ad2p1 ad3p1

Or, even better, add a label to your gpart(8) command to give the drives human-readable labels, and use the labels in the pool.
Code:
# gpart create -s GPT ad0
# gpart add -t freebsd-zfs -a 1M -l disk0 ad0
<repeat the above for each drive, incrementing the label number>
# zpool create pool-name-here raidz gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3
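If you go that route, a quick way to check that the labels and the pool came out as intended (device and pool names are placeholders again):
Code:
# gpart show -l ad0
# zpool status pool-name-here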
 
SirDice & phoenix
Thank you for reply!

I set up the pool successfully some time ago. With one raidz vdev and a SLOG, I seem to get the same speeds as on my smaller storage rig. But I have not fully tested how much performance I get from the SLOG, and I have yet to test running VMs from shared iSCSI storage with and without it.
Playing with various ZFS features and centralizing my data so far. Many questions are still left (I'll post them on the relevant board), but all in all, so far so good ^^

Marking as SOLVED.
p.s. I wonder, do we have a gallery of FreeBSD machines in the off-topic section, or is that too off-topic?
 