ZFS and two hard disks

I physically installed two hard disks in my server, one 500GB and the other 80GB. I intended to use the 80GB disk as the target for the FreeBSD installation and the 500GB disk for data storage. However, it seems that ZFS combined the two hard disks. See below.

Code:
$ df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    515G    407M    515G     0%    /
devfs                 1.0K    1.0K      0B   100%    /dev
zroot/tmp             515G     88K    515G     0%    /tmp
zroot/usr/home        515G    128K    515G     0%    /usr/home
zroot/usr/ports       515G    666M    515G     0%    /usr/ports
zroot/usr/src         515G     88K    515G     0%    /usr/src
zroot/var/audit       515G     88K    515G     0%    /var/audit
zroot/var/crash       515G     88K    515G     0%    /var/crash
zroot/var/log         515G    140K    515G     0%    /var/log
zroot/var/mail        515G     88K    515G     0%    /var/mail
zroot/var/tmp         515G     88K    515G     0%    /var/tmp
zroot                 515G     88K    515G     0%    /zroot
$ mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
$ ls -li /Volumes
ls: /Volumes: No such file or directory
$
 
The commands shown don't really confirm anything, other than that the size appears to be 515GB, which is probably the sum of the 500GB and 80GB drives.
What does zpool status show? That will confirm exactly how the disks have been configured in ZFS.

Code:
$ ls -li /Volumes
ls: /Volumes: No such file or directory
I think this is a directory found on OS X? It doesn't exist on FreeBSD.
 
Here it is. BTW, if the two disks were combined, then the next time I re-install FreeBSD, all data stored on both disks would be erased? My intention in installing two hard disks in the server was to put FreeBSD on one disk and store data on the other, so that if I re-installed FreeBSD the data would still be there.

Code:
[tomhsiung@Toms-Server ~]$ zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada0p3    ONLINE       0     0     0
      ada1p3    ONLINE       0     0     0

errors: No known data errors
[tomhsiung@Toms-Server ~]$
 
Yes, you have a stripe across the two disks. I would reinstall FreeBSD and make sure you only select the one disk in the ZFS section (I never use the installer so I'm not sure of the exact options, but I'm sure you can select just a single disk). If you're not sure, it may be worth temporarily unplugging the SATA cable from the larger drive so the installer has no option other than using a single disk.

The only way to keep any data already on the pool would be to copy it somewhere else first. You're only using 400MB so it doesn't look like you actually have anything on there other than FreeBSD?

If they are separate pools then yes, you would be able to do anything you want with the 80GB system disk without affecting the data pool. Just bear in mind that if you re-install FreeBSD, the data pool won't appear automatically (so don't panic if you start up and don't see it mounted), you just need to import it once the system is up and running. Of course you need to be careful during re-install to make sure you don't make any changes to the wrong disk.
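As a rough sketch, assuming the data pool ends up being called "data", re-importing it after a reinstall looks something like this:
Code:
# zpool import          # lists pools that are available for import
# zpool import -f data  # imports the pool and mounts its datasets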
 
OK. Let me re-install FreeBSD.

Is there any way to use ZFS on both hard disks without them being merged into one stripe?
 
Off the top of my head I'd say only select one HD. It's been a while since I used the installer, but I believe it does allow you to specify which drives you want to use for the installation. If that doesn't work, you can always drop down to the console and do the partitioning manually.
 
Just deal with one disk in the installer. Once you have the system up and running, creating a pool on the second disk is easy.
 
Could you show me a reference for how to create a ZFS pool once the system is up and running? Thanks!
 
Here's what I would do to make sure I didn't get anything wrong.

Check which disk ZFS is currently running on for the system. Depending on which order the disks are connected, the 80GB disk could be ada0 or ada1. Also I'd probably check zfs list to make sure it does show ~80GB of space and I didn't install to the wrong disk.
Code:
# zpool status
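# zfs list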

If the system pool is on ada0, I'd then double-check that ada1 is the 500GB disk. (Obviously your output should show 500G rather than the 10G in this example.)
Code:
# diskinfo -v /dev/ada1
/dev/ada1
        512             # sectorsize
        10737418240     # mediasize in bytes (10G)

Now that I'm sure, I'd destroy the partition table on this disk, since we don't need it and it just complicates things, then create a new pool:
Code:
# gpart destroy -F ada1
# zpool create storage ada1
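Then, as a quick sanity check (a sketch; "storage" is just the example pool name used above), confirm the new pool looks right and got its default mountpoint:
Code:
# zpool status storage
# zfs list storage
# df -h /storage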
 
Here is my output.

Code:
$ zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada1p3    ONLINE       0     0     0

errors: No known data errors
$ diskinfo -v /dev/ada0
diskinfo: /dev/ada0: Permission denied
$ su
Password:
root@Toms-Server:/ # diskinfo -v /dev/ada0
/dev/ada0
    512             # sectorsize
    500107862016    # mediasize in bytes (466G)
    976773168       # mediasize in sectors
    0               # stripesize
    0               # stripeoffset
    969021          # Cylinders according to firmware.
    16              # Heads according to firmware.
    63              # Sectors according to firmware.
    5VME5PJT        # Disk ident.
    Not_Zoned       # Zone Mode

root@Toms-Server:/ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.04G  69.2G    88K  /zroot
zroot/ROOT           407M  69.2G    88K  none
zroot/ROOT/default   407M  69.2G   407M  /
zroot/tmp             88K  69.2G    88K  /tmp
zroot/usr            651M  69.2G    88K  /usr
zroot/usr/home        88K  69.2G    88K  /usr/home
zroot/usr/ports      651M  69.2G   651M  /usr/ports
zroot/usr/src         88K  69.2G    88K  /usr/src
zroot/var            592K  69.2G    88K  /var
zroot/var/audit       88K  69.2G    88K  /var/audit
zroot/var/crash       88K  69.2G    88K  /var/crash
zroot/var/log        152K  69.2G   152K  /var/log
zroot/var/mail        88K  69.2G    88K  /var/mail
zroot/var/tmp         88K  69.2G    88K  /var/tmp
root@Toms-Server:/ #
 
And I got this error,

Code:
root@Toms-Server:/ # gpart destroy -F ada0
gpart: Device busy
root@Toms-Server:/ # gpart destroy -F ada0
gpart: Device busy

I searched the Internet and found that this error is caused by the partition(s) on ada0 not being removed first. I don't know how to remove the partitions.
 
It seems that partition 2 on ada0 (500GB) is busy and is preventing gpart destroy -F ada0 from completing.

Code:
root@Toms-Server:/ # gpart delete -i ada0
gpart: Invalid value for 'i' argument: Invalid argument
root@Toms-Server:/ # gpart delete -i 1 ada0
ada0p1 deleted
root@Toms-Server:/ # gpart delete -i 2 ada0
gpart: Device busy
root@Toms-Server:/ # gpart delete -i 0 ada0
gpart: index '0': No such file or directory
root@Toms-Server:/ # gpart delete -i 3 ada0
ada0p3 deleted
root@Toms-Server:/ # gpart delete -i 2 ada0
gpart: Device busy
root@Toms-Server:/ # gpart delete -i 4 ada0
gpart: index '4': No such file or directory
root@Toms-Server:/ # gpart destroy -F ada0
gpart: Device busy
root@Toms-Server:/ #
 
It seems the swap0 partition on ada0 is the cause. I'll try to delete that swap0 partition now.

Code:
root@Toms-Server:/ # gpart show -l ada1
=>       40  156299296  ada1  GPT  (75G)
         40       1024     1  gptboot0  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  swap0  (2.0G)
    4196352  152102912     3  zfs0  (73G)
  156299264         72        - free -  (36K)

root@Toms-Server:/ # gpart show -l ada0
=>       40  976773088  ada0  GPT  (466G)
         40       2008        - free -  (1.0M)
       2048    4194304     2  swap0  (2.0G)
    4196352  972576776        - free -  (464G)

root@Toms-Server:/ #
 
Looks like you have a partition labelled swap0 on both disks. If you're going to label partitions, make sure you use unique names, usually swap0 on the first disk, swap1 on the second, and so on. (Did you label them, or did the installer?)

See if it's in use with swapinfo, then use swapoff /dev/gpt/swap0 to remove the swap device and then delete the partition. I'd probably reboot at that point in the hope that the system will automatically pick up the swap0 partition from the other disk and start using that instead.
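Roughly, as a sketch (use whatever device name swapinfo actually reports, since both disks currently have a partition labelled swap0):
Code:
# swapinfo                  # check which swap devices are active
# swapoff /dev/gpt/swap0    # or the device name swapinfo shows, e.g. /dev/ada0p2
# gpart delete -i 2 ada0    # the partition should no longer be busy
# gpart destroy -F ada0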
 
Finally, I booted my server from the FreeBSD USB stick and, during the partitioning step, used the shell to delete the swap0 partition. Now the "device busy" error is resolved.
 
Now I have a clean 500GB disk with no partition table. What should I do next?

Code:
[tomhsiung@Toms-Server /]$ sudo zpool create storage ada0
invalid vdev specification
use '-f' to override the following errors:
/dev/ada0 is part of potentially active pool 'zroot'
 
It's finding the ZFS data from when it was part of the original pool. If you're confident it's the right disk, run zpool create -f storage ada0
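i.e. (a sketch; only if you're certain ada0 is the old 500GB pool member and holds nothing you need):
Code:
# zpool create -f storage ada0

If you'd rather not force it, I believe zpool labelclear -f /dev/ada0 will wipe the stale ZFS label first, after which a plain zpool create should work.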
 
Done. So now my second hard disk (500GB, ada0) is ready and I can copy data onto it? If I reinstall FreeBSD onto the first hard disk (80GB, ada1), will all the data on ada0 remain there? Do I need to run some command before I can see the stored data again after the reinstallation, or do I just leave ada0 physically connected to the motherboard and boot the machine? Much appreciated!

I can see from the df -h output that the ada0 disk shows up as /data, but gpart show -l does not show any partition info for ada0. Is this correct?

Code:
[tomhsiung@Toms-Server /]$ df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default     70G    417M     69G     1%    /
devfs                 1.0K    1.0K      0B   100%    /dev
zroot/tmp              69G     88K     69G     0%    /tmp
zroot/usr/home         69G     88K     69G     0%    /usr/home
zroot/usr/ports        70G    651M     69G     1%    /usr/ports
zroot/usr/src          69G     88K     69G     0%    /usr/src
zroot/var/audit        69G     88K     69G     0%    /var/audit
zroot/var/crash        69G     88K     69G     0%    /var/crash
zroot/var/log          69G    160K     69G     0%    /var/log
zroot/var/mail         69G     88K     69G     0%    /var/mail
zroot/var/tmp          69G     88K     69G     0%    /var/tmp
zroot                  69G     88K     69G     0%    /zroot
data                  449G     23K    449G     0%    /data
[tomhsiung@Toms-Server /]$ sudo gpart show -l
Password:
=>       40  156299296  ada1  GPT  (75G)
         40       1024     1  gptboot0  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  swap0  (2.0G)
    4196352  152102912     3  zfs0  (73G)
  156299264         72        - free -  (36K)

[tomhsiung@Toms-Server /]$ sudo gpart show -l ada0
gpart: No such geom: ada0.
[tomhsiung@Toms-Server /]$

Tom
 
Finally done! In my example, my secondary disk is ada0; be careful to check which one yours is. I partitioned the whole disk as a single partition and then created a ZFS pool called "data".

Step 1 - delete any existing partitions on the secondary disk: gpart delete -i 1 ada0

Step 2 - destroy the partition table on the secondary disk: gpart destroy -F ada0

Step 3 - create a new GPT partition table on the secondary disk: gpart create -s GPT ada0

Step 4 - create the partition (I used the whole disk as a single ZFS partition): gpart add -t freebsd-zfs -l zfs0 -a 1M ada0

Step 5 - create the ZFS pool: zpool create data ada0p1

Code:
Last login: Wed Jan 24 22:52:32 2018 from 192.168.1.201
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
Could not chdir to home directory /home/tomhsiung: No such file or directory
[tomhsiung@Toms-Server /]$ sudo gpart show -l
Password:
=>       40  976773088  ada0  GPT  (466G)
         40       2008        - free -  (1.0M)
       2048  976771072     1  zfs0  (466G)
  976773120          8        - free -  (4.0K)

=>       40  156299296  ada1  GPT  (75G)
         40       1024     1  gptboot0  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  swap0  (2.0G)
    4196352  152102912     3  zfs0  (73G)
  156299264         72        - free -  (36K)

[tomhsiung@Toms-Server /]$ zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    data        ONLINE       0     0     0
      ada0p1    ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada1p3    ONLINE       0     0     0

errors: No known data errors
[tomhsiung@Toms-Server /]$
 
just reinstall and use ufs
 
I can see from the df -h output that the ada0 disk shows up as /data, but gpart show -l does not show any partition info for ada0. Is this correct?

There's not really any point in creating a partition table on the disk if it's just going to be used for ZFS. If you do partition it, I would label it something meaningful like mydata instead of zfs0, then use the label when creating the pool. Doesn't really make any difference though.
Code:
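# assuming the data disk is ada0, as elsewhere in this thread
gpart create -s GPT ada0
gpart add -t freebsd-zfs -l mydata -a 1M ada0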
zpool create data gpt/mydata

If you re-install the system, or move the disk to another server, you will need to run zpool import -f data to "mount" the pool again.

just reinstall and use ufs

Why?
 
Why?

Less drama.
 
ZFS is no more difficult to use than UFS if you take the time to learn the basics (in fact, it makes many things easier). UFS is still useful for small or embedded systems, or for virtual machines (especially if they're on top of ZFS), but for most people ZFS now makes more sense as their main file system.

The first reply was a complete waste of time on a post about how to create two pools.
 
I know ZFS.
It's cool.
UFS works fine too, and is simpler, especially for basic use cases; in fact it's recommended under busy databases, eh?
Not sure what you mean by "waste of time"... or maybe that's not directed at me.
 
So with ZFS you don't even have to partition the hard disk, except the one where the FreeBSD system is installed.
 