
Adding a New Hard Disk to a Pool

Discussion in 'Storage' started by bo0t, Dec 1, 2012.

  1. bo0t (New Member)
    Hello, I am a new FreeBSD user, testing in VMware. My problem is adding a new disk to the zpool because the server needs more space.

    This is a capture of zfs list:

    [screenshot: zfs list output]

    These are my commands to create a partition on da1:

    Code:
    gpart create -s gpt da1
    gpart add -t freebsd-zfs -l disk2 da1
    and then I added the second disk to the pool, which is named "tank":

    Code:
    zpool get bootfs tank
    zpool set bootfs="" tank
    zpool get bootfs tank
    zpool add tank /dev/gpt/disk2
    zpool status tank
    zpool set bootfs=tank tank
    zpool get bootfs tank

    and then:
    Code:
    zpool list
    df -h
    
    It succeeded, but after a restart the server can't boot and displays a warning:
    Code:
    tank:/boot/kernel/kernel"
    
    Maybe I am using bad syntax; please correct me. Thanks in advance.
     
  2. SirDice (Moderator)
    Read your signup email, bo0t. When your post is held for moderation, don't post it again and again. Simply wait for a moderator to release it.
     
  3. gkontos (Member)
    You can't add a second drive to a ZFS-on-root system unless you are creating a mirror.
     
  4. bo0t (New Member)
    Sorry sir, I did not know about the moderation when posting.

    Oh okay, so I can't add a new HDD to enlarge the zpool capacity? Is there no other way to increase the size of the zpool?

    Thanks for the reply.
     
  5. Remington (New Member)
    You'll have to export the pool to different media as a backup, wipe your hard drive clean, and set up ZFS again with the additional hard drive as a mirror or raidz. After that, you can import the pool back.
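
    In outline, that migration might look like the sketch below. The snapshot name, the backup path and the disk names are assumptions, and with a root pool the recreate/restore steps would have to be done from a live or install environment.

    Code:
    # Snapshot everything recursively and stream it to separate media
    # (the backup path is an assumption)
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate > /backup/tank.zfs
    
    # Recreate the pool with the additional disk, e.g. as a mirror
    zpool destroy tank
    zpool create tank mirror da0 da1
    
    # Restore the datasets from the backup stream
    zfs receive -F tank < /backup/tank.zfs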
     
  6. bo0t (New Member)
    Thanks for the advice. Are there any reference links I should read to do it that way?
     
  7. kpa (Member)
    There is a way to add a single hard drive (or vdev) to an existing root-on-ZFS setup; it's just not well documented. You have to clear the bootfs property of the pool before adding the disk and set it back after the operation.

    I'm assuming the existing disk is ada0 and the new disk is ada1:

    # zpool set bootfs="" tank
    # zpool add tank /dev/ada1
    # zpool set bootfs="whatitwasbefore" tank

    The above would give you more storage but no redundancy, in other words a RAID-0 setup.

    The same could be done to create a mirror with redundancy; the command would be zpool attach:

    # zpool set bootfs="" tank
    # zpool attach tank /dev/ada0 /dev/ada1
    # zpool set bootfs="whatitwasbefore" tank

    RAID-Z vdevs cannot be created this way; the pool has to be recreated from scratch.
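
    For the record, recreating from scratch would look roughly like this (a three-disk layout and the disk names are assumptions, and the data has to be backed up and restored separately):

    Code:
    # Destroy the old pool and recreate it as a RAID-Z vdev; with a root pool
    # this has to be done from a live or install environment
    zpool destroy tank
    zpool create tank raidz ada0 ada1 ada2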
     
  9. bo0t (New Member)
    Thanks for the explanation. I followed the first way:

    # zpool set bootfs="" tank
    # zpool add tank /dev/ada1
    # zpool set bootfs="whatitwasbefore" tank

    This is my capture:

    [screenshot: output of the commands above]

    and then I restarted the server:

    [screenshot: console output after the restart]

    What should I do?

    Thank you for the reply.
     
  10. gkontos (Member)
    I never managed to boot off a striped ZFS system. Has it worked for you?
     
  11. kpa (Member)
    My fileserver has two mirror vdevs striped and it's a fully bootable Root on ZFS setup. It was initially two different mirrors, one for the system and the other for data, but I merged them into one. I didn't do anything special to merge them: I created a second vdev out of the other pair of disks, which I first cleared of any labels, and added it to the pool with zpool add.
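
    In rough terms that was something like the following (the second pair of disks being ada2 and ada3 is an assumption; the label names match the zpool status output below):

    Code:
    # Assumes the second pair of disks is ada2/ada3, already wiped of any old
    # partition tables and metadata
    glabel label wzdisk2 ada2
    glabel label wzdisk3 ada3
    
    # Add them to the existing pool as a second mirror vdev
    zpool add zwhitezone mirror label/wzdisk2 label/wzdisk3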

    Code:
    whitezone ~ % zpool status
      pool: zwhitezone
     state: ONLINE
    status: The pool is formatted using a legacy on-disk format.  The pool can
            still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
            pool will no longer be accessible on software that does not support feature
            flags.
      scan: scrub repaired 0 in 1h46m with 0 errors on Thu Nov 15 00:45:53 2012
    config:
    
            NAME               STATE     READ WRITE CKSUM
            zwhitezone         ONLINE       0     0     0
              mirror-0         ONLINE       0     0     0
                label/wzdisk0  ONLINE       0     0     0
                label/wzdisk1  ONLINE       0     0     0
              mirror-1         ONLINE       0     0     0
                label/wzdisk2  ONLINE       0     0     0
                label/wzdisk3  ONLINE       0     0     0
    
    errors: No known data errors
    whitezone ~ % 
    
    
     
  12. gkontos (Member)
    @kpa,

    This is very interesting. Where have you installed the bootcode?

    I suppose you can boot from wzdisk0 & wzdisk1 only?
     
  13. kpa (Member)
    The bootcode is on a separate 2 GB IDE-plug SSD; basically it's a GPT-partitioned disk with only one freebsd-boot partition. I did have a setup where I had separate freebsd-boot partitions on the first two data disks for the boot code, and it worked fine. I then got rid of partitions on the data disks altogether.

    I believe that if I had wanted, I could have had partitions for bootcode on all four disks and I would have been able to boot from any of them using the BIOS F12 boot menu.

    The current setup is the best compromise for me: there is no need to have partitions on the data disks and it's still a full Root on ZFS system.
     
  14. gkontos (Member)
    A few questions, because this is becoming more interesting.
    • Why did you decide to allocate 2GB for the bootcode?
    • Are you using the rest of the SSD space for swap?
    • Does the /boot directory reside in the whitezone pool?
    • Is there a particular reason why you use an older ZFS version?
    Thanks
     
  15. kpa (Member)
    The freebsd-boot partition is only 128k; the rest is used for swap.
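
    For reference, a layout like that could be created roughly as follows (ada4 as the SSD's device name is an assumption):

    Code:
    # Assumes the small SSD shows up as ada4
    gpart create -s gpt ada4
    gpart add -t freebsd-boot -s 128k ada4
    gpart add -t freebsd-swap ada4
    
    # Install the protective MBR and the ZFS-aware gptzfsboot loader
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4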

    There is no separate /boot partition or dataset, /boot is on the zwhitezone/ROOT/freebsd dataset that is the rootfs on the system.

    I don't know why it says the pool is using an older version of metadata; it's a version 28 pool. Maybe something is slightly broken on 9-STABLE at the moment... I only just noticed this after updating.
     
  16. usdmatt (Member)
    The ability to have the boot code on a separate disk or USB stick is quite nifty, as it saves you from having to put boot code on every disk in the root pool (you can even use 'whole disks' in the root pool). I did try it out myself a while back.

    The legacy warning in the above output mentions feature flags, which are a post-v28 feature. It looks like you actually have a newer version of ZFS (newer than v28) since your last upgrade.
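
    A quick way to check is something like this, using the pool name from the output above:

    Code:
    # Feature-flag pools report '-' for the legacy version property
    zpool get version zwhitezone
    
    # List the legacy versions and feature flags this zpool binary supports
    zpool upgrade -v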
     
  17. kpa (Member)
    Oh yes, you're right. My dmesg(8) says:

    Code:
    ...
    ZFS filesystem version: 5
    ZFS storage pool version: features support (5000)
    ...
    
    Does that mean that the numbering used by Sun is no longer used?
     
  18. usdmatt (Member)
    Well, the Sun version numbers are still there, but they've moved the SPA version from 28 to 5000 and it should stay there. This is so that if you try to import into Solaris (or any other system running a non-feature-flag-aware version), you get a graceful error telling you the version of the pool is not supported. I guess they're hoping Solaris ZFS never gets to pool version 5000.

    Before, any new feature that might have made the pool incompatible caused an increase of the pool version. It's a simple way of making sure a pool that *could* have feature X enabled never gets imported into a system that doesn't support feature X (even if you never actually used it).

    Now any new feature has a 'feature flag' and the pool version stays the same. The benefit is that a feature-flag-aware system can import any pool, even if it was created on a system with different or newer features, as long as those features are not used. It's also possible for a pool to be opened read-only, as long as any unsupported features in use only affect writing and haven't changed the on-disk format. (Basically, each feature carries either an 'any system importing this pool must support this feature' flag or an 'a system without this feature can read this pool but not write to it' flag.)
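
    The state of each feature can be inspected as pool properties, roughly like this (the grep is just a filter):

    Code:
    # Each feature shows up as a feature@<name> property whose value is
    # disabled, enabled or active
    zpool get all zwhitezone | grep feature@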