Solved Mixed UFS & ZFS on a Big Disk?

Running out of space, we got additional storage for our VMs. This time it was a large ~2TB HDD (vtzck2) added to the 180GB SSD (vtzck0) that we had been using on its own until now.

Here are the commands I ran to expand the root.
Code:
1. Create a GPT partition scheme on the new disk (here vtzck2) after it shows up in the dmesg log:
# gpart create -s GPT vtzck2


2. Add a partition (in this case of type freebsd-zfs):
# gpart add -t freebsd-zfs -a 1M vtzck2

3. Create a filesystem on the partition {crux of the matter: this command creates UFS but I need ZFS}:
# newfs -U /dev/vtzck2p1


4. Make a directory for mounting the partition:
# mkdir /bigdisk


5. Edit fstab, adding the line below:
# vi /etc/fstab
/dev/vtzck2p1    /bigdisk        zfs     rw      2       2


6. Mount the partition:
# mount /bigdisk
***The above command threw an error along the lines of "device not available".***

7. Add the partition to your ZFS pool:
# zpool add mypool vtzck2p1


8. Check that the storage capacity has increased:
# zpool status
# df -h

***Steps 3-6 may not be necessary when using ZFS; I ran them in the course of getting things to work.***


My Questions:
1.) I am not sure the command 'newfs -U /dev/vtzck2p1' was the proper one to use after 'gpart add -t freebsd-zfs -a 1M vtzck2'. It appears I mixed UFS and ZFS filesystems, or overwrote the ZFS, even though the root expanded to approx. 2.2TB. What should the proper command(s) after the 'gpart add' have been, given that I wanted ZFS?
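For reference, here is what I now suspect the ZFS-only sequence should have looked like: no newfs, no fstab entry and no manual mount (untested on my side; 'mypool' is the pool name from step 7, and 'zstorage' in the alternative last line is just a placeholder name):
Code:
# gpart create -s GPT vtzck2
# gpart add -t freebsd-zfs -a 1M vtzck2
# zpool add mypool vtzck2p1       # grow the existing pool (adds a vdev, no redundancy), or instead:
# zpool create zstorage vtzck2p1  # make a separate pool, which ZFS mounts at /zstorage by default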


2.) What are the implications of what I have done so far, e.g. performance degradation from mixing the UFS-formatted big HDD into a zpool with ZFS on the SSD root/base disk? Any recommendations?


3.) I want a setup where I can detach the new storage, download my base/host image and carry out other operations as I like. If I detach it the right way (and without a replacement, i.e. no zpool replace ...), would the host still function properly (e.g. for downloading and using its snapshot elsewhere), given that some data might already be on the big disk?

4.) One of the reasons for getting the big disk is to store the big data generated in one of our jails. Unfortunately, I could not access the /bigdisk directory from the jail, so I cannot point the datastore of the service in the jail (currently on the 180GB base/host disk) at the mounted /bigdisk. The only option that has come to mind is rsync (to sync data between the disks), but I am not sure that is an efficient approach. The big data service in the jail cannot access /bigdisk at all. Any recommendations?
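The kind of approach I am considering for the jail is a nullfs mount of the big-disk directory into the jail's tree (untested; the jail root path and names below are placeholders):
Code:
# mkdir -p /usr/local/jails/myjail/bigdata
# mount -t nullfs /bigdisk /usr/local/jails/myjail/bigdata
To make it persistent, the same line could presumably go into the jail's per-jail fstab (referenced via mount.fstab in jail.conf), but I have not verified that on this setup.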
 
Thanks VladiBG.

I now want to replace the 2TB HDD (vtzck2) with a 2TB SSD (vtzck3). At the moment I have the 180GB SSD (vtzck0) and the 2TB HDD (vtzck2) as a mirror. What I did was replace the 2TB HDD with the 2TB SSD AND LATER attach the 2TB HDD to the 180GB SSD.

I could have just attached the 2TB SSD to the 180GB SSD once and for all and detached/removed the 2TB HDD. After all, I was more interested in having a mirror-0 of the 180GB and 2TB SSDs. Now I have to wait for the scrub/resilver two or more times. That is what happens when you are an apprentice (are we not all one in some field of life or other? :):)).


Current State:
Code:
NAME               STATE  READ  WRITE CHKSUM
zroot
    mirror-0        ONLINE  0      0     0
        vtzck0s2  ONLINE  0      0     0
        vtzck2p1  ONLINE  0      0     0  (resilvering)
    vtzck3p1      ONLINE  0      0     0

**Note: vtzck1 is freebsd-swap**

Preferred State:
Code:
NAME               STATE  READ  WRITE CHKSUM
zroot
    mirror-0        ONLINE  0      0     0
        vtzck0s2  ONLINE  0      0     0
        vtzck1p2  ONLINE  0      0     0

That is, I want to decommission the 2TB HDD (vtzck2).

I would try, after resilvering, to detach the 2TB SSD (vtzck3) from zroot first by running 'zpool detach zroot vtzck3p1', and then replace the 2TB HDD (vtzck2) with the 2TB SSD (vtzck3p1) by running 'zpool replace zroot vtzck2p1 vtzck3p1'.

I am worried about how to detach vtzck3. I tried detaching it earlier but had no luck. I suspect the reason is that I had used 'zpool online -e ...' at some point and now have a bigger pool: zroot of 2TB + 180GB. I think it should not be a problem as long as the pool size stays the same, which it does. I might fall back on 'zpool remove ...' or 'zpool detach -f ...' if it proves difficult. The 2TB HDD has to go AND the 2TB SSD has to join the 180GB SSD in mirror-0.
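For what it is worth, the rough sequence I have in mind is below (untested; my understanding is that zpool detach only works on mirror members, so the standalone vtzck3p1 vdev would need zpool remove, which in turn requires a ZFS version that supports removing data vdevs and enough free space on the remaining 180GB mirror to evacuate it onto):
Code:
# zpool remove zroot vtzck3p1              # evacuate the standalone 2TB SSD vdev back onto the mirror
# zpool replace zroot vtzck2p1 vtzck3p1    # swap the 2TB HDD for the 2TB SSD inside mirror-0
# zpool status zroot                       # replace drops the old HDD automatically once the resilver completes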

Finally, I really need to get the system running after all these operations. Any warnings/recommendations?
 
A RAID 1 (mirror) must have two or more members of identical size. Mirroring 180GB with 2TB won't work if either of the two disks fails.
 
Thanks again VladiBG. I have no control over the sizes of the disks. I will however take note of your advice in situations where I do have control over the hardware.

I read here [see its last comment] that it is a good idea to at least set up a mirror for different vdev types.

What I may do over time is create equal-sized partitions on the disks and place them in the same mirror, roughly as sketched below. I have no intention of paying the VPS provider again to resize the 180GB SSD; I would rather rely on their big disk (2TB).
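A sketch of what I mean, assuming a blank 2TB disk (sizes, alignment and device names are only placeholders, not tested):
Code:
# gpart add -t freebsd-zfs -a 1M -s 180G vtzck3   # carve a 180GB partition out of the 2TB SSD
# zpool attach zroot vtzck0s2 vtzck3p1            # mirror it against the existing 180GB slice
# gpart add -t freebsd-zfs -a 1M vtzck3           # the remaining ~1.8TB as a second partition...
# zpool create zstorage vtzck3p2                  # ...for a separate data pool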

Other than that, does everyone reckon that my procedure [zpool attach/replace/detach/remove ...] would work?
 
Everything depends on what type of hard-disk redundancy your hosting provider is using. You may not need to make a mirror at all. Then you can have 2 zpools, one of 180GB and another of 2TB (zroot = 180GB; zbigdata = 2TB).

For example, if your hosting provider is using some storage, let's say with VRAID60, and provisions virtual disks for his customers within it, then you don't have to worry about your virtual disk at all. You can use all of your provided storage without creating any RAID level with your disks.

Another example: if you manage physical disks and have to make sure there is some redundancy for your data, then you need to create an appropriate RAID level to ensure your data is protected.

Keep in mind that even when you have RAID you will still need a backup, and that snapshots are NOT a backup.
 
For example, if your hosting provider is using some storage, let's say with VRAID60, and provisions virtual disks for his customers within it, then you don't have to worry about your virtual disk at all. You can use all of your provided storage without creating any RAID level with your disks.
Thanks VladiBG. This paragraph sums up what I want to do. I need not have worried about the hardware from the outset. I am now manipulating the only pool because I am running out of space on the 180GB SSD and have acquired a 2TB disk to increase the VM's storage capacity. It was for that reason that I earlier increased the pool size with 'autoexpand=on' & 'zpool online -e ...'. If that is fine, how can I restore the system back to just the zroot pool (1) with no RAID (mirror/cache/etc.) and (2) without destroying the existing zroot pool? I mean getting it back to:
Code:
NAME               STATE  READ  WRITE CHKSUM
zroot
        vtzck0s2  ONLINE  0      0     0
        vtzck1p2  ONLINE  0      0     0
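If I understand the mechanics correctly, getting back to a plain zroot is mostly a matter of pulling the extra devices out (untested; device names as in the layouts above, and the standalone vtzck3p1 vdev would again need zpool remove rather than detach):
Code:
# zpool detach zroot vtzck2p1     # drop the 2TB HDD out of mirror-0, collapsing it back to a plain vdev
# zpool status zroot              # confirm only the original device(s) remain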

Then you can have 2 zpools, one of 180GB and another of 2TB (zroot = 180GB; zbigdata = 2TB).
I can smell fire in this sentence. It would be nice to install packages on the 180GB SSD, dump data onto the big disk and change the data directory of critical apps to point to the big-disk pool. But there is a thread (I wish I could easily locate it now) warning that accessing two pools can be problematic in some cases for some applications/jails/etc. AND could be prone to system failure.

Keep in mind that even when you have RAID you will still need a backup, and that snapshots are NOT a backup.
This warning is non-negotiable. We can rely on tools such as zxfer, zap and the core zfs snapshot/send commands for that.
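For example, the manual equivalent of what those tools automate, as I understand it (the host and pool names are placeholders):
Code:
# zfs snapshot -r zroot@offsite-1
# zfs send -R zroot@offsite-1 | ssh backuphost zfs receive -u -d backuppool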

Ultimately [ignoring all other warnings about multiple pools], I should now be working towards:

Code:
NAME               STATE  READ  WRITE CHKSUM
zroot
        vtzck0s2  ONLINE  0      0     0
zstorage      
        vtzck1p2  ONLINE  0      0     0
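If I go down that route, the sort of thing I expect to run is below (names and mountpoints are placeholders; the send/receive step is only needed for datasets I want to relocate onto the big disk):
Code:
# zpool create zstorage vtzck1p2                     # new pool on the big disk
# zfs create -o mountpoint=/bigdata zstorage/data    # dataset for the jail's data
# zfs snapshot -r zroot/docker@move
# zfs send -R zroot/docker@move | zfs receive -u zstorage/docker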
 
Does anyone know why zap destroy || zfs destroy fails on some filesystems like docker, iocage, etc.?

I have tried "zap destroy" & "zfs destroy (-r) SNAPSHOT", which are meant to destroy all expired snapshots and all existing snapshots respectively, but no luck. All the [supposedly destroyed] snapshots are still there after restarting the PC.
Code:
# zfs list
zroot/docker 20.9G 8.27G 1.10G /usr/docker
zroot/docker/53e834dd0531caf69a79cb0552074b55c0b12d3281d9d176a04280d0678c76c4 346M 8.27G 346M legacy
zroot/iocage/base/11.0-RELEASE/root/lib 5.98M 8.27G 5.98M /iocage/base/11.0-RELEASE/root/lib


I have deleted all files in the directories & unmounted all the filesystems (iocage, docker), yet they all come back mounted after restarting the PC. I also set canmount to off, but no luck.

I need a lot of free space, so I want all those filesystems/mountpoints gone.
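In case it helps anyone pin down what I am missing, this is roughly what I have been running (dataset names taken from the zfs list output above; my guess is that a service such as docker or iocage enabled in rc.conf may be recreating/remounting the datasets at boot, so it may need to be disabled first):
Code:
# zfs list -t all -r zroot/docker zroot/iocage   # datasets, snapshots and clones holding the space
# zfs destroy -r zroot/docker                    # destroy the dataset and everything beneath it
# zfs destroy -r zroot/iocage                    # (-R would additionally destroy clones; use with care)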
 