Adding a New Hard Disk into pool


Postby bo0t » 01 Dec 2012, 08:06

Hello, I am a new user on FreeBSD. I am testing using VMware. My problem is adding a new disk to the zpool because the server HDD needs more space :r

This is a capture of [FILE]zfs list[/FILE]:

[screenshot]

These are my commands to create a partition on [FILE]da1[/FILE]:

Code:
gpart create -s gpt da1
gpart add -t freebsd-zfs -l disk2 da1


and then I add the second HDD to the pool; the pool name is "tank":

Code:
zpool get bootfs tank
zpool set bootfs="" tank
zpool get bootfs tank
zpool add tank /dev/gpt/disk2
zpool status tank
zpool set bootfs=tank tank
zpool get bootfs tank



and then:
Code:
zpool list
df -h

It succeeds, but if the server restarts it can't boot and displays a warning:
Code:
tank:/boot/kernel/kernel"

Maybe I am using bad syntax; please correct me. Thanks in advance :D

Postby SirDice » 01 Dec 2012, 11:58

Read your signup email bo0t. When your post is held for moderation, don't post it again and again. Simply wait for a moderator to release it.

Postby gkontos » 01 Dec 2012, 12:12

You can't add a second drive to a ZFSonRoot system unless you are creating a mirror.
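
For completeness, turning the existing single disk into a mirror is done with [FILE]zpool attach[/FILE]. A rough sketch only: [FILE]gpt/disk1[/FILE] here stands for whatever device the pool is currently on, and [FILE]gpt/disk2[/FILE] for the new partition, so adjust it to your layout.

[CMD="#"]zpool attach tank gpt/disk1 gpt/disk2[/CMD]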

Postby bo0t » 02 Dec 2012, 03:45

SirDice wrote:Read your signup email bo0t. When your post is held for moderation, don't post it again and again. Simply wait for a moderator to release it.


Sorry sir, I did not know posts were held for moderation.

gkontos wrote:You can't add a second drive to a ZFSonRoot system unless you are creating a mirror.


Oh okay, so I can't add a new HDD to enlarge the zpool capacity? Is there any other way to increase the size of the zpool?

Thanks for the reply.

Postby Remington » 02 Dec 2012, 07:54

bo0t wrote:Oh okay, so I can't add a new HDD to enlarge the zpool capacity? Is there any other way to increase the size of the zpool?


You'll have to export the pool to different media as a backup, wipe your hard drive clean, and do the ZFS setup again with the additional hard drive as a mirror or raidz. After that, you can import the pool.
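
A rough sketch of the backup step with [FILE]zfs send[/FILE]/[FILE]zfs receive[/FILE]; the backup pool name and snapshot name here are just assumptions:

Code:
# snapshot the whole pool recursively
zfs snapshot -r tank@migrate
# copy every dataset and its properties to a pool on other media
zfs send -R tank@migrate | zfs receive -dF backup
# after recreating tank as a mirror or raidz, send the data back the same way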

Postby bo0t » 02 Dec 2012, 07:59

Remington wrote:You'll have to export the pool to different media as a backup, wipe your hard drive clean, and do the ZFS setup again with the additional hard drive as a mirror or raidz. After that, you can import the pool.


Thanks for the advice :r. Are there any reference links I should read to do it that way?

Postby kpa » 02 Dec 2012, 08:19

There is a way to add a single hard drive (or vdev) to an existing root on ZFS setup, it's just not documented well. You have to turn off the [FILE]bootfs[/FILE] property of the pool before adding the disk and turn it back on after the operation.

I'm assuming the existing disk is [FILE]ada0[/FILE] and the new disk is [FILE]ada1[/FILE]

[CMD="#"]zpool set bootfs="" tank[/CMD]
[CMD="#"]zpool add tank /dev/ada1[/CMD]
[CMD="#"]zpool set bootfs="whatitwasbefore" tank[/CMD]

The above would give you more storage but no redundancy, in other words a RAID-0 setup.

The same could be done to create a mirror with redundancy; the command would be [FILE]zpool attach[/FILE]:

[CMD="#"]zpool set bootfs="" tank[/CMD]
[CMD="#"]zpool attach tank /dev/ada0 /dev/ada1[/CMD]
[CMD="#"]zpool set bootfs="whatitwasbefore" tank[/CMD]

RAID-Z vdevs cannot be created this way; the pool has to be recreated from scratch.
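
If RAID-Z is what you're after, the pool has to be built with all of its member disks in one go, roughly like this (pool and disk names are only an example):

[CMD="#"]zpool create newtank raidz gpt/disk0 gpt/disk1 gpt/disk2[/CMD]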

Postby Remington » 02 Dec 2012, 08:48

bo0t wrote:Thanks for the advice :r. Are there any reference links I should read to do it that way?


Read this to understand how to export/import a pool.
http://docs.oracle.com/cd/E19253-01/819-5461/gbchy/index.html

If you want to set up your system as a mirror/raidz and then import the pool, use this script.
http://forums.freebsd.org/showthread.php?t=35947

Postby bo0t » 02 Dec 2012, 09:39

Thanks for the explanation, I followed the first way:

[CMD=" "]# zpool set bootfs="" tank[/CMD]
[CMD=" "]# zpool add tank /dev/ada1[/CMD]
[CMD=" "]# zpool set bootfs="whatitwasbefore" tank[/CMD]

This is my capture:

[screenshot]

and then I restart the server:

[screenshot]

What should I do?

Thank you for the reply :D

Postby gkontos » 02 Dec 2012, 10:45

kpa wrote:There is a way to add a single hard drive (or vdev) to an existing root on ZFS setup, it's just not documented well. You have to turn off the [FILE]bootfs[/FILE] property of the pool before adding the disk and turn it back on after the operation.


I never managed to boot off a striped ZFS system. Has it worked for you?

Postby kpa » 02 Dec 2012, 10:53

My fileserver has two mirror vdevs striped and it's a fully bootable Root on ZFS setup. It was initially two different mirrors, one for the system and the other one for data, but I merged them into one. I didn't do anything special to merge them: I created a second vdev out of the other pair of disks, which I first cleared of any labels, and [FILE]zpool add[/FILE]ed them to the pool.
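
In other words, after clearing the old labels from the second pair of disks, the merge boiled down to something like:

[CMD="#"]zpool add zwhitezone mirror label/wzdisk2 label/wzdisk3[/CMD]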

Code:
whitezone ~ % zpool status
  pool: zwhitezone
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: scrub repaired 0 in 1h46m with 0 errors on Thu Nov 15 00:45:53 2012
config:

        NAME               STATE     READ WRITE CKSUM
        zwhitezone         ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            label/wzdisk0  ONLINE       0     0     0
            label/wzdisk1  ONLINE       0     0     0
          mirror-1         ONLINE       0     0     0
            label/wzdisk2  ONLINE       0     0     0
            label/wzdisk3  ONLINE       0     0     0

errors: No known data errors
whitezone ~ %


Postby gkontos » 02 Dec 2012, 11:07

@kpa,

This is very interesting. Where have you installed the bootcode?

I suppose you can boot from [FILE]wzdisk0[/FILE] & [FILE]wzdisk1[/FILE] only?

Postby kpa » 02 Dec 2012, 11:14

The bootcode is on a separate 2GB IDE plug SSD, basically it's a GPT partitioned disk with only one [FILE]freebsd-boot[/FILE] partition. I did have a setup where I had separate [FILE]freebsd-boot[/FILE] partitions on the first two data disks for the boot code and it worked fine. I then got rid of partitions on the data disks altogether.
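
Setting up such a boot-only GPT disk is roughly the following; [FILE]ada2[/FILE] is just an example device name:

[CMD="#"]gpart create -s gpt ada2[/CMD]
[CMD="#"]gpart add -t freebsd-boot -s 128k ada2[/CMD]
[CMD="#"]gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2[/CMD]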

I believe if I had wanted I could have had partitions for bootcode on all four disks and I would have been able to boot from any of them using the BIOS F12 boot menu.

The current setup is the best compromise for me: there is no need to have partitions on the data disks and it's still a full Root on ZFS system.

Postby gkontos » 02 Dec 2012, 13:59

kpa wrote:The bootcode is on a separate 2GB IDE plug SSD, basically it's a GPT partitioned disk with only one [FILE]freebsd-boot[/FILE] partition.


A few questions because this is becoming more interesting.
  • Why did you decide to allocate 2GB for the [FILE]bootcode[/FILE]?
  • Are you using the rest of the SSD space for [FILE]SWAP[/FILE]?
  • Does the [FILE]/boot[/FILE] directory reside in the zwhitezone pool?
  • Is there a particular reason why you use an older ZFS version?

Thanks

Postby kpa » 02 Dec 2012, 14:07

The [FILE]freebsd-boot[/FILE] partition is only 128k; the rest is used for swap.

There is no separate [FILE]/boot[/FILE] partition or dataset; [FILE]/boot[/FILE] is on the [FILE]zwhitezone/ROOT/freebsd[/FILE] dataset, which is the rootfs of the system.
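
For reference, the loader is simply pointed at that dataset; in this kind of setup [FILE]/boot/loader.conf[/FILE] usually contains something like the following (reconstructed from memory, not copied verbatim from my system):

Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:zwhitezone/ROOT/freebsd"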

I don't know why it says the pool is using an older version of metadata, it's a version 28 pool. Maybe something is slightly broken on [FILE]9-STABLE[/FILE] at the moment... I only just noticed this after updating.

Postby usdmatt » 02 Dec 2012, 16:23

The ability to have the boot code on a separate disk or USB stick is quite nifty as it saves you from having to put boot code on every disk in the root pool (you can even use 'whole disks' in the root pool). I did try it out myself a while back.

The legacy warning in the above output mentions feature flags, which are a post-v28 feature. It looks like you actually have a newer version of ZFS (newer than v28) since your last upgrade.
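
If you want to double check, something like this should show what your ZFS tools support versus what the pool reports (pool name taken from your output above):

Code:
# list the pool versions/features this system's ZFS supports
zpool upgrade -v
# show the version property of the pool itself
zpool get version zwhitezone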

Postby kpa » 02 Dec 2012, 16:35

Oh yes, you're right. My [man=8]dmesg[/man] says:

Code:
...
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
...


Does that mean that the numbering used by Sun is no longer used?

Postby usdmatt » 03 Dec 2012, 09:35

Well, the Sun version numbers are still there, but they've moved the SPA version from 28 -> 5000 and it should stay there. This is so that if you try to import into Solaris (or any other system running a non feature-flag aware version), you get a graceful error telling you the version of the pool is not supported. I guess they're hoping Solaris ZFS never gets to pool version 5000.

Before, any new feature that may have made the pool incompatible caused an increase of the pool version. It's a simple way of making sure a pool that *could* have X feature enabled never gets imported into a system that doesn't support X feature (even if you never actually used it).

Now any new feature has a 'feature flag' and the pool version stays the same. The benefit is that a feature-flag aware system can import any pool, even if it was created on a system with different or newer features, as long as those features are not used. It's also possible for a pool to be opened read-only, as long as any unsupported features in use only affect writing and haven't changed the disk format. (Basically each feature has an 'any system importing this pool must support this feature' or 'a system without this feature can read this pool but not write' flag.)
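
On a feature-flag pool the individual features also show up as pool properties, so you can see what is enabled with something along the lines of:

Code:
zpool get all zwhitezone | grep feature@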

