> I would say it's best to build the pool using the entire disk.

Even if you give all of the space on a disk to ZFS, you should still use a GPT-style partition table (created with gpart), with just one partition. Why? Because it lets you give the partition a clear, human-readable label, which doesn't change when disks are added or removed. It also means that if the disk becomes separated from the computer (for example, you find it in a drawer a few years later), you can very quickly check what is on it, because the partition label will help identify it.
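As a sketch of that approach, assuming a spare disk ada2 and a label name of your own choosing (both hypothetical here):

```shell
# Create a GPT partition table on the (hypothetical) spare disk ada2
gpart create -s gpt ada2

# Add a single freebsd-zfs partition covering the whole disk,
# with a descriptive GPT label so the disk is identifiable years later
gpart add -t freebsd-zfs -l backup2024 ada2

# Build the pool on the label rather than the raw device node, so the
# pool definition survives device renumbering (ada2 -> ada3, etc.)
zpool create zbackup gpt/backup2024
```

`gpart list ada2` or `glabel status` will then show the label, even on a machine that has never seen the disk before.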
gpart show
=>        40  104857520  ada0  GPT  (50G)
          40     409600     1  efi  (200M)
      409640       1024     2  freebsd-boot  (512K)
      410664        984        - free -  (492K)
      411648    4194304     3  freebsd-swap  (2.0G)
     4605952  100249600     4  freebsd-zfs  (48G)
   104855552       2008        - free -  (1.0M)

=>        40  104857520  ada1  GPT  (50G)
          40     409600     1  efi  (200M)
      409640       1024     2  freebsd-boot  (512K)
      410664        984        - free -  (492K)
      411648    4194304     3  freebsd-swap  (2.0G)
     4605952  100249600     4  freebsd-zfs  (48G)
   104855552       2008        - free -  (1.0M)
> I'll describe my environment. I want to separate the data from the OS, so I want to create two zpools.

Your question isn't clear. Do you have any other OS on these disks?
[sherman.129] # gpart status
Name Status Components
ada0p1 OK ada0
ada0p2 OK ada0
ada0p3 OK ada0
ada1p1 OK ada1
ada1p2 OK ada1
ada1p3 OK ada1
[sherman.130] # gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)
[sherman.132] $ swapinfo
Device 1K-blocks Used Avail Capacity
/dev/mirror/swap 16777212 0 16777212 0%
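For reference, a swap mirror like the one shown above is built with gmirror; the label name `swap` and the partitions are the ones from this output, but treat the exact invocation as a sketch rather than this machine's actual setup history:

```shell
# Load the mirror GEOM class and create a mirror named "swap"
# across the two swap partitions (assumes ada0p2/ada1p2 are unused)
gmirror load
gmirror label -v swap ada0p2 ada1p2

# Make the module load at boot:
#   echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# and point fstab at the mirror instead of a raw partition:
#   /dev/mirror/swap  none  swap  sw  0  0
swapon /dev/mirror/swap
```

With this setup a single disk failure does not lose swapped-out pages, which matches the disk-failure goal discussed later in the thread.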
[sherman.133] $ zpool status zroot
pool: zroot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:00:42 with 0 errors on Sun Nov 1 03:11:10 2020
config:
        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
errors: No known data errors
> So if I understood correctly, there is no possibility to do this with bsdinstall, and I need to do it manually (at the command prompt) with these instructions:

Not necessarily. There is a modified zfsboot installer script; if you replace the one on the install image [1] with it, bsdinstall(8) lets you set the zpool partition size. I have a multi-OS installation on two separate disks and used this method to install 12.2-RELEASE root-on-ZFS in a partition 20% of the disk in size, as a stripe, but it also works for mirror or raidz.
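If you do end up doing it by hand from the installer's shell, the core idea is simply to give the freebsd-zfs partition an explicit size instead of letting it consume the rest of the disk. A hedged sketch, in which the device name, sizes, and labels are all assumptions for illustration:

```shell
# From the installer shell: give ZFS only 20G instead of the whole disk
gpart create -s gpt ada0
gpart add -t efi -s 200M ada0
gpart add -t freebsd-swap -s 2G -l swap0 ada0
gpart add -t freebsd-zfs -s 20G -l zfs0 ada0   # explicit size; the rest stays free

# Create the pool on the labeled partition
zpool create -m none zroot gpt/zfs0
```

The remaining free space can then be partitioned later for another OS or a second pool.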
That might do the trick, but then again... why use ZFS if you don't want to use its capabilities?

Thanks to all for the answers. In the end I decided to add additional disks and install FreeBSD there. I think it is too much work to do that on separate partitions, and it's not worth it.
> First, your question is odd. You want to use two pools to separate system and user data, but you also talk about a mirror. A mirror only ensures that your data is stored on two different devices; how would that help to keep your stuff separated?

About mirrors: my understanding is that keeping stuff separated and mirroring solve completely different issues. I want to use a mirror because I want to prepare for disk failure, not to keep data separated.
> Most of all: why would it matter? It seems as if you think that if you use one ZFS pool your data can't be separated, but this is not true. I always have one ZFS pool on my servers (except one), but I can still keep my stuff easily separated, by using a home directory for example (zroot/home on my end), or by keeping data in specific places. I don't host my websites in the default designated space of /usr/local/www; that seems chaotic to me. Instead I rely on /opt/websites (which is known as zroot/websites) or on home directories. Either way, I can pretty much always keep my user data separated from system data, usually by dedicating a dataset to it.

In most cases you're right, but I've got quite a specific environment. I've got many servers in production, most of them running deprecated versions of FreeBSD, e.g. 11.0. What's more, users are not happy when I want to do updates on this environment, so trying to convince them to update from 11.0 to 12.2 is very difficult, because it needs many reboots and a lot of time to fetch the update. So I came up with a different idea. If I have all the important data on a separate pool, let's call it zdata, I can run 'zpool export zdata' on the old FreeBSD, take new disks, install a fresh FreeBSD 12.2 there, attach the disks with zdata, and run 'zpool import zdata'.
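The dataset-per-purpose approach described in the quoted post looks roughly like this; the dataset and mountpoint names are the ones mentioned there, not anything mandated by ZFS:

```shell
# Separate user data from system data inside a single pool
# by dedicating datasets to it (names taken from the post)
zfs create -o mountpoint=/opt/websites zroot/websites
zfs create zroot/home

# Each dataset gets its own properties, quotas, and snapshots,
# so "separation" does not require a second pool
zfs set quota=50G zroot/websites
zfs snapshot zroot/websites@before-upgrade
```

This is the crux of the disagreement in the thread: datasets give administrative separation within one pool, while a second pool gives physical separation that survives reinstalling the OS.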
> About mirrors: my understanding is that keeping stuff separated and mirroring solve completely different issues. I want to use a mirror because I want to prepare for disk failure, not to keep data separated.

That's correct.
> What's more, users are not happy when I want to do updates on this environment, so trying to convince them to update from 11.0 to 12.2 is very difficult, because it needs many reboots and a lot of time to fetch the update.

No offense, but... this doesn't make sense. First, upgrading a FreeBSD system doesn't require many reboots at all; all it takes is decent preparation. Because I host a rather customized setup I always build my stuff from source, and well... you can easily build your system, maybe set the job priority decently low so it doesn't interfere with other stuff, and you're done.
> So I came up with a different idea. If I have all the important data on a separate pool, let's call it zdata, I can run 'zpool export zdata' on the old FreeBSD, take new disks, install a fresh FreeBSD 12.2 there, attach the disks with zdata, and run 'zpool import zdata'.

Aaaah, I think I see what you mean... Build/upgrade a shadow system in the background, then take down your current system, take out the disks, and add them to the new system. Import the data and you're back up and running.
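The export/import dance itself is only a couple of commands. The pool name zdata comes from the post; everything else is standard zpool usage:

```shell
# On the old 11.0 machine: cleanly detach the data pool
zpool export zdata

# ...physically move the disks to the freshly installed 12.2 box...

# On the new machine: list pools available for import
zpool import

# Import by name; add -f only if the pool was not exported cleanly
zpool import zdata
```

One caveat worth knowing: if the new system creates the pool's features at a newer version, importing it back on the old 11.0 system may no longer be possible, so the migration is effectively one-way.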
> No offense, but... this doesn't make sense. First, upgrading a FreeBSD system doesn't require many reboots at all; all it takes is decent preparation. Because I host a rather customized setup I always build my stuff from source, and well... you can easily build your system, maybe set the job priority decently low so it doesn't interfere with other stuff, and you're done.

I heard that the safest way is to update in cascade, so if I want to update FreeBSD 11.0 to 12.2 I need to update: 11.0 -> 11.1 -> 11.2 -> 11.3 -> 11.4 -> 12.2.
> That strategy could definitely work, however... If possible I'd try to set up some kind of network storage instead. It's been a while since I worked with one of those myself, but that would probably be the most ideal setup: disconnect from one server, connect from the upgraded server, and you're done. Since most have RAID solutions, you also wouldn't need to worry too much about hardware failures.

You're absolutely right! I have plans to do it exactly this way in the future, but right now I just need to handle this disk migration, so I needed to figure something out.
> I heard that the safest way is to update in cascade, so if I want to update FreeBSD 11.0 to 12.2 I need to update: 11.0 -> 11.1 -> 11.2 -> 11.3 -> 11.4 -> 12.2.

You heard wrong. You can go from 11.0 to 12.2 in one go.
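With binary updates, that direct jump is a single freebsd-update run followed by the usual install/reboot/install cycle; this is standard freebsd-update usage, not anything specific to this thread:

```shell
# On the 11.0 machine: fetch and merge everything needed for 12.2
freebsd-update -r 12.2-RELEASE upgrade

# Install the new kernel, then reboot into it
freebsd-update install
shutdown -r now

# After the reboot: install the new userland, then reinstall or
# rebuild third-party packages, and run install once more if
# freebsd-update asks (to remove the old shared libraries)
freebsd-update install
```

So the cascade through every 11.x point release is unnecessary; a single reboot into the new kernel is the only mandatory downtime.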