Solved: Adding a New Hard Disk to a Pool

Hello, I am a new FreeBSD user and I am testing using VMware. My problem is adding a new disk to a zpool, because the server needs more disk space :r

Here is a screenshot of zfs list:

[screenshot: zfs list output]


These are my commands to create a partition on da1:

Code:
gpart create -s gpt da1
gpart add -t freebsd-zfs -l disk2 da1

and then I added the second disk to the pool, which is named "tank":

Code:
zpool get bootfs tank
zpool set bootfs="" tank
zpool get bootfs tank
zpool add tank /dev/gpt/disk2
zpool status tank
zpool set bootfs=tank tank
zpool get bootfs tank


and then:
Code:
zpool list
df -h
That succeeds, but when the server restarts it can't boot and displays a warning:
Code:
tank:/boot/kernel/kernel"
Maybe I am using the wrong syntax; please correct me. Thanks in advance :D
 
Read your signup email bo0t. When your post is held for moderation, don't post it again and again. Simply wait for a moderator to release it.
 
SirDice said:
Read your signup email bo0t. When your post is held for moderation, don't post it again and again. Simply wait for a moderator to release it.

Sorry sir, I did not know my posts were being held for moderation.

gkontos said:
You can't add a second drive to a ZFSonRoot system unless you are creating a mirror.

Oh okay, so I can't add a new HDD to enlarge the zpool capacity. Is there any other way to increase the size of the zpool?

thx for reply
 
Oh okay, so I can't add a new HDD to enlarge the zpool capacity. Is there any other way to increase the size of the zpool?

You'll have to export the pool to different media as a backup, wipe your hard drive clean, and set up ZFS again with the additional hard drive as a mirror or raidz. After that, you can import the pool.
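In rough outline, and reading "export ... as backup" as a zfs send/receive copy, it might look something like this. This is only a sketch; the spare disk da2, the pool name tank, and the snapshot name are placeholders:

Code:
# zpool create backup da2
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -u -F backup/tank
(reinstall, recreating tank as a mirror or raidz)
# zfs send -R backup/tank@migrate | zfs receive -u -F tank

After the restore you would still need to check the bootfs property and the mountpoints before rebooting.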
 
Remington said:
You'll have to export the pool to different media as a backup, wipe your hard drive clean, and set up ZFS again with the additional hard drive as a mirror or raidz. After that, you can import the pool.

Thanks for the advice :r. Are there any reference links I should read to do it that way?
 
There is a way to add a single hard drive (or vdev) to an existing root-on-ZFS setup; it's just not well documented. You have to turn off the bootfs property of the pool before adding the disk and turn it back on after the operation.

I'm assuming the existing disk is ada0 and the new disk is ada1

# zpool set bootfs="" tank
# zpool add tank /dev/ada1
# zpool set bootfs="whatitwasbefore" tank

The above would give you more storage but no redundancy, in other words a RAID-0 setup.

The same could be done to create a mirror with redundancy; the command would be zpool attach:

# zpool set bootfs="" tank
# zpool attach tank /dev/ada0 /dev/ada1
# zpool set bootfs="whatitwasbefore" tank

RAID-Z vdevs cannot be created this way; the pool would have to be recreated from scratch.
 
Thanks for the explanation. I followed the first way:

# zpool set bootfs="" tank
# zpool add tank /dev/ada1
# zpool set bootfs="whatitwasbefore" tank

This is a screenshot of the result:

[screenshot: command output]


and then I restarted the server:

[screenshot: boot error after restart]


What should I do?

Thank you for the reply :D
 
kpa said:
There is a way to add a single hard drive (or vdev) to an existing root-on-ZFS setup; it's just not well documented. You have to turn off the bootfs property of the pool before adding the disk and turn it back on after the operation.

I never managed to boot off a striped ZFS system. Has it worked for you?
 
My fileserver has two mirror vdevs striped and it's a fully bootable Root on ZFS setup. It was initially two different mirrors, one for the system and the other one for data, but I merged them into one. I didn't do anything special to merge them: I created a second vdev out of the other pair of disks, which I first cleared of any labels, and zpool add'ed them to the pool.
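In commands, the merge was roughly the following; the device names are placeholders, and zpool labelclear stands in for however the old metadata was actually wiped:

Code:
# zpool labelclear -f ada2
# zpool labelclear -f ada3
# glabel label wzdisk2 ada2
# glabel label wzdisk3 ada3
# zpool add zwhitezone mirror label/wzdisk2 label/wzdisk3

The resulting pool looks like this: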

Code:
whitezone ~ % zpool status
  pool: zwhitezone
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: scrub repaired 0 in 1h46m with 0 errors on Thu Nov 15 00:45:53 2012
config:

        NAME               STATE     READ WRITE CKSUM
        zwhitezone         ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            label/wzdisk0  ONLINE       0     0     0
            label/wzdisk1  ONLINE       0     0     0
          mirror-1         ONLINE       0     0     0
            label/wzdisk2  ONLINE       0     0     0
            label/wzdisk3  ONLINE       0     0     0

errors: No known data errors
whitezone ~ %
 
@kpa,

This is very interesting. Where have you installed the bootcode?

I suppose you can boot from wzdisk0 & wzdisk1 only?
 
The bootcode is on a separate 2GB IDE plug SSD; basically it's a GPT-partitioned disk with only one freebsd-boot partition. I did have a setup with separate freebsd-boot partitions on the first two data disks for the boot code and it worked fine. I then got rid of partitions on the data disks altogether.
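For reference, a boot-only disk like that only needs a handful of gpart(8) commands; a sketch, with ada4 standing in for whatever the SSD shows up as:

Code:
# gpart create -s gpt ada4
# gpart add -t freebsd-boot -s 128k ada4
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4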

I believe that if I had wanted to, I could have had bootcode partitions on all four disks and would have been able to boot from any of them using the BIOS F12 boot menu.

The current setup is the best compromise for me: there is no need to have partitions on the data disks and it's still a full Root on ZFS system.
 
kpa said:
The bootcode is on a separate 2GB IDE plug SSD, basically it's a GPT partitioned disk with only one freebsd-boot partition.

A few questions because this is becoming more interesting.
  • Why did you decide to allocate 2GB for the bootcode?
  • Are you using the rest of the SSD space for swap?
  • Does the /boot directory reside in the zwhitezone pool?
  • Is there a particular reason why you use an older ZFS version?
Thanks
 
The freebsd-boot partition is only 128k; the rest is used for swap.
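The swap part is nothing special either; a sketch, again with ada4 as a stand-in for the SSD's device name:

Code:
# gpart add -t freebsd-swap -l swap0 ada4
# echo '/dev/gpt/swap0 none swap sw 0 0' >> /etc/fstab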

There is no separate /boot partition or dataset; /boot is on the zwhitezone/ROOT/freebsd dataset, which is the rootfs of the system.

I don't know why it says the pool is using an older on-disk format; it's a version 28 pool. Maybe something is slightly broken on 9-STABLE at the moment... I only just noticed this after updating.
 
The ability to have the boot code on a separate disk or USB stick is quite nifty as it saves you from having to put boot code on every disk in the root pool (you can even use "whole disks" in the root pool). I did try it out myself a while back.

The legacy warning in the above output mentions feature flags which are a post-v28 feature. Looks like you actually have a newer version of ZFS (newer than v28) since your last upgrade.
 
Oh yes, you're right. My dmesg(8) says:

Code:
...
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
...

Does that mean that the numbering used by Sun is no longer used?
 
Well, the Sun version numbers are still there, but they've moved the SPA version from 28 to 5000 and it should stay there. This is so that if you try to import into Solaris (or any other system running a non-feature-flag-aware version), you get a graceful error telling you the version of the pool is not supported. I guess they're hoping Solaris ZFS never gets to pool version 5000.

Before, any new feature that may have made the pool incompatible caused an increase of the pool version. It's a simple way of making sure a pool that *could* have feature X enabled never gets imported into a system that doesn't support feature X (even if you never actually used it).

Now any new feature has a 'feature flag' and the pool version stays the same. The benefit is that a feature-flag-aware system can import any pool, even if it was created on a system with different or newer features, as long as those features are not used. It's also possible for a pool to be opened read-only, as long as any unsupported features in use only affect writing and haven't changed the on-disk format. (Basically, each feature carries either an 'any system importing this pool must support this feature' flag or an 'a system without this feature can read this pool but not write to it' flag.)
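If you're curious which feature flags the system knows about and whether your pool has them enabled, they show up as ordinary pool properties, so something like this lists them (using the pool name from earlier in the thread):

Code:
# zpool get all zwhitezone | grep feature@

Each one is reported as disabled, enabled or active; per the above, it's the active ones that can make the pool unreadable (or read-only) on systems that don't support them.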
 
kpa said:
My fileserver has two mirror vdevs striped and it's a fully bootable Root on ZFS setup. It was initially two different mirrors, one for the system and the other one for data, but I merged them into one. I didn't do anything special to merge them: I created a second vdev out of the other pair of disks, which I first cleared of any labels, and zpool add'ed them to the pool.

I have been a regular FreeBSD user for about 12 years, running about 10 separate servers. I am currently trying to build a development/file server for our developers. I started with FreeNAS 9.3 because I wanted to try it, had not used ZFS before, and thought this might be an easy way to get going. It works fine, but because it's sort of NanoBSD there is zero package management and I cannot customise it outside the GUI without breaking it. However, since this box is multi-purpose (combining the functions of 4 older separate boxes in our office), I need to customise it. I made it work with some jails and a VirtualBox instance running a FreeBSD 10.2 virtual machine. Anyway, it's messy, and I now want to ditch FreeNAS, just install plain FreeBSD, and set it up the way I want.

The challenge is this: the box has 4 x 4TB WD disks. Currently (in FreeNAS) they are in 2 mirrored vdevs (which gives 8TB of storage and good IOPS). The FreeNAS OS sits on a USB flash drive. I want to install ZFS on root so that the whole machine, including the OS, is protected by ZFS integrity (the 4 boxes it is replacing all had hardware RAID). Installing the OS on the USB flash drive is not a good idea, due to the limited life of that medium.

With FreeBSD 10.2-RELEASE I can obviously just use the bsdinstall "gui" to install ZFS on root and then build the machine from there. The bsdinstall utility allows creating my first mirror vdev. The idea was, once the OS is installed, to then just combine disks 3 & 4 into a second mirror vdev and add that to the zpool which is mounted on root.

However, after searching around I found that this is "not really supported", e.g.:

Code:
zpool add -f rpool log c0t6d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs

I have found the unofficial workaround in several places:

Code:
I found that most people use the "unset bootfs property, add vdev, set bootfs again" trick to have a working
multiple-vdev root pool under FreeBSD.

All of these links are quite old, 2012 and earlier. Is the "single vdev and no SLOG" restriction for "rpools" still present in FreeBSD 10.2? (I haven't had the opportunity to try without nuking my new box.)

If so, is it "safe" to use that "set bootfs" trick? Does it make the system vulnerable during boot?

kpa: Your solution seems very neat (no swap or boot partitions on the vdev data drives, just a tiny boot partition on an SSD which is dual-use for swap). How did you manage to get to this configuration? You only said:

Code:
I didn't do anything special to merge them, I created a second vdev out of the other pair of disks that first cleared of any labels and zpool added them to the pool.

I don't have an SSD, but I could just add one if that's what's required for the nice, clean 2 x 2 mirror ZFS-on-root setup.

Many thanks, sorry for digging up this ancient thread, but it was the most recent one I could find that's really relevant.
 
Oliver Schonrock: I no longer have that system and I have forgotten some details of what I did. I suggest you set up a VirtualBox system with FreeBSD 10.2 and use it to experiment with what kinds of operations can be done. I do have a vague memory that multi-vdev root on ZFS is now supported, but I'm not 100% sure.
 
You shouldn't have any problems using a multi-vdev root. I personally would follow the "ZFS Madness" instructions on this forum and create a single sys/ROOT/default beadm-style dataset for the OS. Then create other datasets for your data/jails/etc. once the OS is installed and running.
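If you do go the beadm route, the tool itself is in ports as sysutils/beadm and works on the datasets under sys/ROOT; a quick sketch, with a made-up boot environment name:

Code:
# pkg install beadm
# beadm list
# beadm create pre-upgrade
# beadm activate pre-upgrade

Each boot environment is just another dataset under sys/ROOT, so snapshots and rollbacks stay cheap.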

If you want to use whole disks and have bootcode on an SSD, just do the following (assuming ada0 is the SSD and ada1-4 are the 4TB disks):

Partition and add ZFS bootcode to the SSD:
You can add other partitions later for swap/cache/whatever
Code:
# gpart create -s gpt ada0
# gpart add -t freebsd-boot -s 128k ada0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
Create the pool, add filesystems and set the boot fs
Code:
# zpool create -o cachefile=/tmp/zpool.cache sys mirror ada{1,2} mirror ada{3,4}
# zfs set mountpoint=none sys
# zfs set atime=off sys
# zfs set compression=lz4 sys (optional)
# zfs create sys/ROOT
# zfs create -o mountpoint=/mnt sys/ROOT/default
# zpool set bootfs=sys/ROOT/default sys
Then finish extracting the OS into /mnt as in the howto.
You don't need to add the vfs.root.mountfrom option in /boot/loader.conf any more.

I just tested this in VMware and it was pretty easy to get the system booting with one GPT-partitioned disk containing one boot partition, and 4 other disks in a whole-disk, RAID 10-style pool.

Edit: One thing I haven't mentioned here is 4k alignment. You'll probably want to run the following before creating the zpool, and possibly look at getting the partitions aligned correctly on the SSD.
Code:
sysctl vfs.zfs.min_auto_ashift=12
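If you want that setting to survive a reboot, and to check what ashift the pool actually ended up with, something along these lines should do it (sys being the pool name from above):

Code:
# echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
# zdb -C sys | grep ashift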
 
kpa: yeah, "it just works". I created 4 x 10GB SATA disks in VirtualBox and used the bsdinstall setup utility to install FreeBSD 10.2.

This is what I got:

Code:
root@zfsroot:~ # df -h
Filesystem  Size  Used  Avail Capacity  Mounted on
zroot/ROOT/default  17G  618M  16G  4%  /
devfs  1.0K  1.0K  0B  100%  /dev
zroot/tmp  16G  96K  16G  0%  /tmp
zroot/usr/home  16G  136K  16G  0%  /usr/home
zroot/usr/ports  17G  629M  16G  4%  /usr/ports
zroot/usr/src  16G  96K  16G  0%  /usr/src
zroot/var/audit  16G  96K  16G  0%  /var/audit
zroot/var/crash  16G  96K  16G  0%  /var/crash
zroot/var/log  16G  152K  16G  0%  /var/log
zroot/var/mail  16G  96K  16G  0%  /var/mail
zroot/var/tmp  16G  96K  16G  0%  /var/tmp
zroot  16G  96K  16G  0%  /zroot

Code:
root@zfsroot:~ # gpart show ada0
=>  34  20971453  ada0  GPT  (10G)
  34  6  - free -  (3.0K)
  40  1024  1  freebsd-boot  (512K)
  1064  984  - free -  (492K)
  2048  4194304  2  freebsd-swap  (2.0G)
  4196352  16773120  3  freebsd-zfs  (8.0G)
  20969472  2015  - free -  (1.0M)

root@zfsroot:~ # gpart show ada1
=>  34  20971453  ada1  GPT  (10G)
  34  6  - free -  (3.0K)
  40  1024  1  freebsd-boot  (512K)
  1064  984  - free -  (492K)
  2048  4194304  2  freebsd-swap  (2.0G)
  4196352  16773120  3  freebsd-zfs  (8.0G)
  20969472  2015  - free -  (1.0M)

Code:
root@zfsroot:~ # zpool status
  pool: zroot
state: ONLINE
  scan: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    zroot         ONLINE       0     0     0
      mirror-0    ONLINE       0     0     0
        ada0p3    ONLINE       0     0     0
        ada1p3    ONLINE       0     0     0

errors: No known data errors

Then I literally just did:

Code:
zpool add zroot mirror ada2 ada3

and got

Code:
root@zfsroot:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    zroot         ONLINE       0     0     0
      mirror-0    ONLINE       0     0     0
        ada0p3    ONLINE       0     0     0
        ada1p3    ONLINE       0     0     0
      mirror-1    ONLINE       0     0     0
        ada2      ONLINE       0     0     0
        ada3      ONLINE       0     0     0

errors: No known data errors

and now I have 17G (a bit less than half the total disk space, so that's right):

Code:
root@zfsroot:~ # df -h
Filesystem  Size  Used  Avail Capacity  Mounted on
zroot/ROOT/default  17G  618M  16G  4%  /
devfs  1.0K  1.0K  0B  100%  /dev
zroot/tmp  16G  96K  16G  0%  /tmp
zroot/usr/home  16G  136K  16G  0%  /usr/home
zroot/usr/ports  17G  629M  16G  4%  /usr/ports
zroot/usr/src  16G  96K  16G  0%  /usr/src
zroot/var/audit  16G  96K  16G  0%  /var/audit
zroot/var/crash  16G  96K  16G  0%  /var/crash
zroot/var/log  16G  152K  16G  0%  /var/log
zroot/var/mail  16G  96K  16G  0%  /var/mail
zroot/var/tmp  16G  96K  16G  0%  /var/tmp
zroot  16G  96K  16G  0%  /zroot

Perfect, thanks! No need for weird bootfs="" tricks.

I didn't bother splitting ada2 and ada3 into 3 partitions because I figured that the swap and bootcode are already mirrored once. That's right, yes?

Rebooted and it "just works".

Nice. Now need to shuffle data off the FreeNAS box and then nuke that thing!
 
You shouldn't have any problems using a multi-vdev root. I personally would follow the "ZFS Madness" instructions on this forum and create a single sys/ROOT/default beadm-style dataset for the OS. Then create other datasets for your data/jails/etc. once the OS is installed and running.

Thanks for those tips. I wasn't aware of beadm. Based on my df output above, it looks like I already have a "sys/ROOT/default beadm-style dataset for the OS"? This is just the default that bsdinstall(8) gave me:

Code:
[oliver@zfsroot ~]$ zfs list 
NAME  USED  AVAIL  REFER  MOUNTPOINT
zroot  1.22G  16.1G  96K  /zroot
zroot/ROOT  618M  16.1G  96K  none
zroot/ROOT/default  618M  16.1G  618M  /
zroot/tmp  68.5K  16.1G  68.5K  /tmp
zroot/usr  629M  16.1G  96K  /usr
zroot/usr/home  106K  16.1G  106K  /usr/home
zroot/usr/ports  629M  16.1G  629M  /usr/ports
zroot/usr/src  96K  16.1G  96K  /usr/src
zroot/var  577K  16.1G  96K  /var
zroot/var/audit  96K  16.1G  96K  /var/audit
zroot/var/crash  96K  16.1G  96K  /var/crash
zroot/var/log  104K  16.1G  104K  /var/log
zroot/var/mail  92.5K  16.1G  92.5K  /var/mail
zroot/var/tmp  92.5K  16.1G  92.5K  /var/tmp

With sensible properties like atime=off for all but /var/mail. Looks all good to me?

I will just add some more datasets with special properties, like an 8k recordsize for /var/db/mysql, when I build the real system.
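For the MySQL dataset, one way that avoids creating intermediate datasets under /var is to give it its own mountpoint; zroot/mysql below is just an example name:

Code:
# zfs create -o recordsize=8k -o mountpoint=/var/db/mysql zroot/mysql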

Will play with SSDs later, but that's very helpful to have documented too.

Thanks, guys.
 