ZFS How to get access to a file system tree on the second disk with ZFS?

Hi, everyone!

There are two hard drives in the system

Code:
% uname -a
FreeBSD desktop.freebsd.lan 12.0-RELEASE-p2 FreeBSD 12.0-RELEASE-p2 r343203 OptiPlex amd64

Description of the connected disks:
KINGSTON - SSD disk with newly installed FreeBSD 12.0-RELEASE-p2
WD - old SATA disk with FreeBSD 11.2 STABLE (also with ZFS)

Code:
% camcontrol devlist
<KINGSTON SUV400S37240G 0C3FD6SD> at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD3200AAJS-56B4A0 01.03A01> at scbus2 target 0 lun 0 (pass2,ada1)

Code:
% dmesg | grep ada
ada0 at ata2 bus 0 scbus0 target 0 lun 0
ada0: <KINGSTON SUV400S37240G 0C3FD6SD> ACS-4 ATA SATA 3.x device
ada0: Serial Number 50026B726701BDFF
ada0: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada0: 228936MB (468862128 512 byte sectors)
ada1 at ata4 bus 0 scbus2 target 0 lun 0
ada1: <WDC WD3200AAJS-56B4A0 01.03A01> ATA8-ACS SATA 2.x device
ada1: Serial Number WD-WCAT1D555825
ada1: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada1: 305245MB (625142448 512 byte sectors)

Initial preconditions

Code:
% gpart status
Name Status Components
ada0p1 OK ada0
ada0p2 OK ada0
ada0p3 OK ada0
ada1p1 OK ada1
ada1p2 OK ada1
ada1p3 OK ada1

Code:
% gpart show
=> 40 468862048 ada0 GPT (224G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 8388608 2 freebsd-swap (4.0G)
8390656 460470272 3 freebsd-zfs (220G)
468860928 1160 - free - (580K)

=> 40 625142368 ada1 GPT (298G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 16777216 2 freebsd-swap (8.0G)
16779264 608362496 3 freebsd-zfs (290G)
625141760 648 - free - (324K)

Code:
% zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 65,5G 146G 88K /zroot
zroot/ROOT 15,0G 146G 88K none
zroot/ROOT/default 15,0G 146G 15,0G /
zroot/tmp 17,6M 146G 17,6M /tmp
zroot/usr 50,5G 146G 88K /usr
zroot/usr/home 47,6G 146G 47,6G /usr/home
zroot/usr/ports 1,55G 146G 1,55G /usr/ports
zroot/usr/src 1,31G 146G 1,31G /usr/src
zroot/var 1,17M 146G 88K /var
zroot/var/audit 88K 146G 88K /var/audit
zroot/var/crash 88K 146G 88K /var/crash
zroot/var/log 668K 146G 668K /var/log
zroot/var/mail 176K 146G 176K /var/mail
zroot/var/tmp 88K 146G 88K /var/tmp

Code:
% mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)

Code:
% zpool status
pool: zroot
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0

errors: No known data errors

Very likely, the second disk (ada1) contains a pool with the same name as the first one (I mean zroot), because the system on that disk was installed through the guided "ZFS Configuration" menu: the pool type was accepted as stripe, the swap size was changed from the default to 8 GB, and all other settings were left at their defaults.
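A way to check which pool actually lives on ada1p3, without importing anything, would be to dump the ZFS labels on that partition and list the importable pools (just a sketch; both commands only read, they change nothing on disk):

Code:
# zdb -l /dev/ada1p3    # print the on-disk ZFS labels: pool name, GUID, vdev layout
# zpool import          # list pools that are visible on attached disks but not yet imported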

Code:
% df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/default 161G 15G 146G 9% /
devfs 1,0K 1,0K 0B 100% /dev
linprocfs 4,0K 4,0K 0B 100% /compat/linux/proc
tmpfs 1,4G 4,0K 1,4G 0% /compat/linux/dev/shm
fdescfs 1,0K 1,0K 0B 100% /dev/fd
procfs 4,0K 4,0K 0B 100% /proc
zroot/tmp 146G 18M 146G 0% /tmp
zroot/usr/home 193G 48G 146G 25% /usr/home
zroot/usr/ports 147G 1,5G 146G 1% /usr/ports
zroot/usr/src 147G 1,3G 146G 1% /usr/src
zroot/var/audit 146G 88K 146G 0% /var/audit
zroot/var/crash 146G 88K 146G 0% /var/crash
zroot/var/log 146G 668K 146G 0% /var/log
zroot/var/mail 146G 176K 146G 0% /var/mail
zroot/var/tmp 146G 88K 146G 0% /var/tmp
zroot 146G 88K 146G 0% /zroot

I have created a new pool:
#zpool create -f wdpool ada1p3

And after that I got

Code:
# zpool history
History for 'wdpool':
2019-01-27.19:55:59 zpool create -f wdpool ada1p3

History for 'zroot':
2019-01-20.01:37:52 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot ada0p3
2019-01-20.01:37:52 zfs create -o mountpoint=none zroot/ROOT
2019-01-20.01:37:52 zfs create -o mountpoint=/ zroot/ROOT/default
2019-01-20.01:37:52 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2019-01-20.01:37:52 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2019-01-20.01:37:52 zfs create zroot/usr/home
2019-01-20.01:37:52 zfs create -o setuid=off zroot/usr/ports
2019-01-20.01:37:52 zfs create zroot/usr/src
2019-01-20.01:37:52 zfs create -o mountpoint=/var -o canmount=off zroot/var
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/audit
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/crash
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/log
2019-01-20.01:37:52 zfs create -o atime=on zroot/var/mail
2019-01-20.01:37:52 zfs create -o setuid=off zroot/var/tmp
2019-01-20.01:37:52 zfs set mountpoint=/zroot zroot
2019-01-20.01:37:52 zpool set bootfs=zroot/ROOT/default zroot
2019-01-20.01:37:52 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2019-01-20.01:37:57 zfs set canmount=noauto zroot/ROOT/default

Code:
# mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
wdpool on /wdpool (zfs, local, nfsv4acls)

Code:
% zfs list
NAME USED AVAIL REFER MOUNTPOINT
wdpool 268K 281G 88K /wdpool
zroot 65.5G 146G 88K /zroot
zroot/ROOT 15.0G 146G 88K none
zroot/ROOT/default 15.0G 146G 15.0G /
zroot/tmp 17.8M 146G 17.8M /tmp
zroot/usr 50.5G 146G 88K /usr
zroot/usr/home 47.6G 146G 47.6G /usr/home
zroot/usr/ports 1.55G 146G 1.55G /usr/ports
zroot/usr/src 1.31G 146G 1.31G /usr/src
zroot/var 1.17M 146G 88K /var
zroot/var/audit 88K 146G 88K /var/audit
zroot/var/crash 88K 146G 88K /var/crash
zroot/var/log 668K 146G 668K /var/log
zroot/var/mail 176K 146G 176K /var/mail
zroot/var/tmp 88K 146G 88K /var/tmp

Code:
% zpool status
pool: wdpool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
wdpool ONLINE 0 0 0
ada1p3 ONLINE 0 0 0

errors: No known data errors

pool: zroot
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0

errors: No known data errors

Code:
%df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/default 161G 15G 146G 9% /
devfs 1.0K 1.0K 0B 100% /dev
linprocfs 4.0K 4.0K 0B 100% /compat/linux/proc
tmpfs 1.2G 4.0K 1.2G 0% /compat/linux/dev/shm
fdescfs 1.0K 1.0K 0B 100% /dev/fd
procfs 4.0K 4.0K 0B 100% /proc
zroot/tmp 146G 18M 146G 0% /tmp
zroot/usr/home 193G 48G 146G 25% /usr/home
zroot/usr/ports 147G 1.5G 146G 1% /usr/ports
zroot/usr/src 147G 1.3G 146G 1% /usr/src
zroot/var/audit 146G 88K 146G 0% /var/audit
zroot/var/crash 146G 88K 146G 0% /var/crash
zroot/var/log 146G 672K 146G 0% /var/log
zroot/var/mail 146G 176K 146G 0% /var/mail
zroot/var/tmp 146G 88K 146G 0% /var/tmp
zroot 146G 88K 146G 0% /zroot
wdpool 281G 88K 281G 0% /wdpool

I notice that the output of df -h and zfs list provides wrong information about the actually used size.
In fact, the file system contains more than 250 GB of data. As far as I understand, in the final result the output should look something like this:

Code:
%df -h
....
wdpool/tmp
wdpool/usr/home
wdpool/usr/ports
wdpool/usr/src
wdpool/var/audit
wdpool/var/crash
wdpool/var/log
wdpool/var/mail
wdpool/var/tmp
wdpool
...

How can I get full access to the file system tree on the second disk with ZFS? I mean the partition ada1p3... Maybe I missed something? Please correct me.
 
#zpool import
on its own will give a list of candidates if you have more than one zpool on your machine. It also gives information and advice on the use of the -f flag to force the import.
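For example, something along these lines (a sketch; the numeric id and the name oldzroot are placeholders, use whatever zpool import actually reports):

Code:
# zpool import                                        # list importable pools with their names, ids and state
# zpool import -f -R /mnt 1234567890123456 oldzroot   # force-import the pool by its numeric id under a new name,
                                                      # since the name zroot is already taken by the running system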
 
I forgot to mention: after the pool had been created with the command
#zpool create -f wdpool ada1p3

I ran
Code:
# zpool import
no pools available to import
 
When you create the pool, the default mount point is "/<poolname>". If you want to specify something else, use the -m flag.

example:
# zpool create -m /mnt/mypool poolname ada1p3

You have to read the zpool(8) man page.
 
I have re-created the pool
Code:
# zpool create -o altroot=/mnt -f wdpool /dev/ada1p3
As a result:
Code:
# zfs list | grep wdpool
wdpool               268K   281G    88K  /mnt/wdpool
But something is still wrong...
Code:
# zpool import -d /mnt -o altroot=/mnt -f wdpool
cannot import 'wdpool': a pool with that name is already created/imported,
and no additional pools with that name were found
 
What's wrong? The "zfs list" command shows you the pool is already imported and mounted.

Thanks for your reply. The main goal is pretty simple: to get the data from the previously installed system on ada1.
In other words, I want to mount the partition ada1p3, for example with something like mount -t zfs /dev/ada1p3 /mnt.
But I still can't find the right solution (though I know a solution exists).
 
In other words, I want to mount the partition ada1p3. Do something like mount -t zfs /dev/ada1p3 /mnt.
That's not how ZFS works.

You can import the pool and then look at its contents. In this respect ZFS works entirely differently compared to 'traditional' filesystems like UFS or EXT.
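The ZFS counterpart of that mount -t zfs example would look roughly like this (a sketch, assuming the original pool on ada1p3 still existed, was also named zroot, and had the usual bsdinstall layout with canmount=noauto on the root dataset):

Code:
# zpool import -o readonly=on -R /mnt zroot oldzroot   # import the old pool read-only, renamed, rooted at /mnt
# zfs list -r oldzroot                                 # see which datasets the pool contains
# zfs mount oldzroot/ROOT/default                      # the root dataset is canmount=noauto, so mount it by hand
# ls /mnt/usr/home                                     # the old files appear under the altroot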
 
ZFS works entirely differently compared to 'traditional' filesystems like UFS or EXT.
Yes, you are right, and I understand it.
You can import the pool and then look at its contents.
I've already done it. But after creating the new pool, there is nothing in the newly created directory. No contents.
Please see above, I have already described the situation. It seems that I missed something. But what exactly? The solution must be near.
 
You mentioned that the pool has been recreated. So it's supposed to be empty. It's like formatting a filesystem and then wondering why your files are gone.
 
It's like formatting a filesystem and then wondering why your files are gone.
Hmmm, very interesting... But I don't think so, and I hope it is not. And there is one more strange detail: each time after a reboot, the newly created pool (wdpool) is gone,
and zpool list shows only one pool, the one located on ada0.
 
After rereading your posts, you seem not to understand the fundamentals, like creating or exporting pools.
Two times already you have created a new pool on /dev/ada1p3, so the old data there is lost, no matter how you look at it.

Accept the loss and be better prepared in the future. It is essential to read both the Handbook and the man pages you have been pointed to.
I can also recommend backing up important data to UFS while you are new to ZFS. Then you'll always have an easy-to-access backup until you have become familiar with ZFS. The backups I took to UFS actually saved me once, when I was still learning ZFS. rsync is a great tool for that, IMO. I'm a bit paranoid about backups, however. How you back up your data is a matter of taste, as long as it works for you. Anyway, keeping backups is important.
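For example, with a spare UFS disk (a sketch; the device name, mount point and paths are made up, and rsync comes from the net/rsync package):

Code:
# pkg install rsync                      # rsync is not in the base system
# mount /dev/ada2p1 /backup              # a spare UFS partition used as the backup target (example device)
# rsync -avxH /usr/home/ /backup/home/   # copy home directories; -x stays on one filesystem, -H preserves hard links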
 