How to resize a GPT disk?

Hello everybody!

I am building a backup server.
The disk array is based on an Areca ARC-1220 hardware SATA RAID controller.
Now I am faced with choosing a file system and a partitioning scheme.

MBR is not suitable, because the array is already 2 TB in size, so only GPT remains.

I experimented with MBR: after adding a disk to the array and growing the slice, all it takes is adjusting the partition with bsdlabel and so on.
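
For reference, a rough sketch of what I mean by that MBR workflow (device and partition names are only examples, and the exact steps depend on the layout):
Code:
# gpart resize -i 1 da1      # grow the MBR slice into the new space
# bsdlabel -e da1s1          # enlarge the last partition in the BSD label
# growfs /dev/da1s1d         # then grow the UFS file system into it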

But with GPT it is not that simple, because the partition table lives not only at the beginning of the disk but also at the end. If you grow the array, the secondary table ends up somewhere in the middle, and the system still sees only the originally allocated space. At boot it complains that the secondary GPT table is not valid and that the primary one will be used. That much I understand - but what can be done about it? :)

I could not get gpart to change the GPT. I tried to find something in the ports tree, but there are only gedit and gpte: one does not build on amd64, the other does not work...

Can anyone suggest a smart idea?
 
Of course, I will most likely use ZFS.
But these are two different things:
first the drive has to be partitioned, and only then are the file systems created.
 
[SOLVED] How to resize a GPT disk?

Hello again everyone,

First, I must apologize, because it is possible to simply use the raw disk for ZFS.
However, I do not like that option, so I got my way :)
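
For reference, the raw-disk variant would be just something like this (pool name is only an example, no partition table at all):
Code:
# zpool create test2 da1     # give ZFS the whole disk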

Experiment

1. Make a volume set and check its size
Code:
# camcontrol readcap da1 -h
Device Size: 954 M, Block Length: 512 bytes

2. Create the GPT scheme
Code:
# gpart create -s gpt da1
da1 created

3. Add a partition
Code:
# gpart add -t freebsd-zfs da1
da1p1 added

4. Look what happened
Code:
# gpart show da1
=>     34  1952701  da1  GPT  (954M)
       34  1952701    1  freebsd-zfs  (953M)

5. Create the zpool
Code:
# zpool create test2 da1p1
# df -h /test2
Filesystem    Size    Used   Avail Capacity  Mounted on
test2         912M     18K    912M     0%    /test2

6. Fill the pool
Code:
# time dd if=/dev/urandom of=/test2/dump bs=1m
dd: /test2/dump: No space left on device
912+0 records in
911+1 records out
955383808 bytes transferred in 22.368060 secs (42711965 bytes/sec)
dd if=/dev/urandom of=/test2/dump bs=1m  0.00s user 19.99s system 89% cpu 22.370 total

# df -h /test2
Filesystem    Size    Used   Avail Capacity  Mounted on
test2         912M    912M      0B   100%    /test2


7. Hash
Code:
# md5 /test2/dump
MD5 (/test2/dump) = e9cfb1762976345d87155d99c14319c0

8. Push the boundaries (grow the volume set)
Code:
areca-cli
CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 ARC-1220-VOL#00  Raid Set # 00   Raid0      1.0GB 00/00/01   Normal
  2 ST380817AS       Raid Set # 01   PassThr   80.0GB 00/00/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> set password=0000
GuiErrMsg<0x00>: Success.

CLI> vsf modify vol=1 capacity=2
GuiErrMsg<0x00>: Success.

CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 ARC-1220-VOL#00  Raid Set # 00   Raid0      2.0GB 00/00/01   Normal
  2 ST380817AS       Raid Set # 01   PassThr   80.0GB 00/00/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

# camcontrol readcap da1 -h
Device Size: 1.9 G, Block Length: 512 bytes

9. Look what happened
Code:
# gpart show da1
=>     34  1952701  da1  GPT  (954M)
       34  1952701    1  freebsd-zfs  (953M)

10. Now FreeBSD has to be made to see that there is more space.
After a reboot, or after certain operations on the drive, the system does recognize that the disk has changed. In my first experiment a rescan was enough; in the second I also had to reboot (I still need to work out why).
Code:
# camcontrol rescan all
Re-scan of bus 0 was successful
zavhoz# gpart show da1
=>     34  1952701  da1  GPT  (1.9G)
       34  1952701    1  freebsd-zfs  (953M)

11. Install sysutils/gdisk
On amd64, edit the port's Makefile: the port is marked as i386-only, but that restriction is not actually needed.
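
Roughly like this (my assumption is that the ONLY_FOR_ARCHS line is the restriction to relax):
Code:
# cd /usr/ports/sysutils/gdisk
# vi Makefile                # add amd64 to (or drop) the ONLY_FOR_ARCHS line
# make install clean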

12. Export the pool
The trouble is that while the pool is online, nothing can be done with the partition table.
In a hurry, and hoping the pool could be restored afterwards, I destroyed it. But destroy means destroy... The correct command is :)
Code:
# zpool export test2
# zpool status
no pools available

13. Fix the GPT (in short: press v, x, e, w, y)
Code:
# gdisk /dev/da1
GPT fdisk (gdisk) version 0.6.9

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): v

Problem: The secondary header's self-pointer indicates that it doesn't reside
at the end of the disk. If you've added a disk to a RAID array, use the 'e'
option on the experts' menu to adjust the secondary header's and partition
table's locations.

Identified 1 problems!

Command (? for help): x

Expert command (? for help): e
Relocating backup data structures to the end of the disk

Expert command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed, possibly destroying your data? (Y/N): y
OK; writing new GUID partition table (GPT).
Warning: The kernel may continue to use old or deleted partitions.
You should reboot or remove the drive.
The operation has completed successfully.

14. Look what happened
Code:
# gpart show da1
=>     34  3905981  da1  GPT  (1.9G)
       34  1952701    1  freebsd-zfs  (953M)
  1952735  1953280       - free -  (954M)

15. Resize partition
Code:
# gpart resize -i 1 da1
da1p1 resized

16. Again, look what happened.
Code:
# gpart show da1       
=>     34  3905981  da1  GPT  (1.9G)
       34  3905981    1  freebsd-zfs  (1.9G)

17. Import the pool
Code:
zavhoz# zpool import
  pool: test2
    id: 12254915794173548567
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        test2       ONLINE
          da1p1     ONLINE
zavhoz# zpool import test2

18. Check
Code:
# ls /test2 
dump
# df -h /test2
Filesystem     Size    Used   Avail Capacity  Mounted on
test2          1.8G    912M    952M    49%    /test2

# md5 /test2/dump 
MD5 (/test2/dump) = e9cfb1762976345d87155d99c14319c0

It worked! :)
P.S. I do not know how to change the status of the topic to 'resolved'.
 
A small addendum: it is better to export the ZFS pool before rescanning the disk; GEOM will then handle the resize after the rescan.
I.e. step 12 should come before step 10, or even before step 9.
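
In other words, roughly this order, reusing the same commands as in the walkthrough above (a sketch, not a full transcript):
Code:
# zpool export test2                # step 12 first: release the provider
CLI> vsf modify vol=1 capacity=2    # step 8: grow the volume set
# camcontrol rescan all             # step 10: GEOM re-tastes the disk with the pool offline
# gpart show da1                    # step 9: the new size should now be visible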
 
Pardon me for asking this silly question:
did you just resize a GPT partition with a zpool on it?

I think it's plain wrong and won't work if you have more data on it and have been using ZFS for a few months.
I'm not a ZFS developer, but I think the data would be spread over the entire pool, and if you resize the GPT partition you would lose data....

[that is my assumption, as I haven't done any testing]
 
The act of resizing a partition (by deleting it and recreating it larger) should not damage the original filesystem if you are careful. As long as the recreated partition starts at the same sector and ends at a sector equal to or later than it did before, there should be no problem.
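
A minimal sketch of that delete-and-recreate approach, assuming the partition is index 1 and starts at sector 34 (check your own layout first):
Code:
# gpart show da1                       # note the exact start sector and type
# gpart delete -i 1 da1                # removes only the table entry, not the data
# gpart add -b 34 -t freebsd-zfs da1   # recreate at the same start, spanning all free space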

I decided to test whether this was true or not with a ZFS setup and my findings are here.

In short, I made a raidz pool out of three 512MB freebsd-zfs partitions on different disks. I then resized each of the freebsd-zfs partitions to 1GB. Once all partitions had been grown, the zpool expanded itself to fill the space and it remained intact at every step.
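
For anyone who wants to reproduce that without spare disks, a rough sketch using md(4) memory disks (the device names, the pool name ztest, and the export/import expansion step are my assumptions and may differ between releases):
Code:
# mdconfig -a -t swap -s 1g -u 0       # repeat with -u 1 and -u 2
# gpart create -s gpt md0              # same for md1 and md2
# gpart add -t freebsd-zfs -s 512m md0
# zpool create ztest raidz md0p1 md1p1 md2p1
# zpool export ztest                   # release the providers before resizing
# gpart resize -i 1 md0                # grow each partition to the full 1 GB
# gpart resize -i 1 md1
# gpart resize -i 1 md2
# zpool import ztest                   # on re-import the pool should pick up the extra space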

I also tested growing a single freebsd-ufs partition from 512MB to 1GB. After the partition resize, the filesystem was undamaged but was still 512MB in size. growfs(8) then grew it to 896MB, 127MB short of the full 1GB. This may be due to a problem with growfs or limited UFS data structures set up when the filesystem was originally created.
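
For completeness, the UFS half of that test looks roughly like this (device name and index are examples):
Code:
# umount /dev/da1p1          # growfs on older releases needs the file system unmounted
# gpart resize -i 1 da1      # grow the freebsd-ufs partition first
# growfs /dev/da1p1          # then grow the file system into the new space
# fsck -t ufs /dev/da1p1     # sanity check before mounting again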
 
I also tested variants with UFS, but growfs produced too many warnings, so I am a bit uneasy about using it. The only thing I have not checked is what happens when the array is in active use and being rebuilt at the same time.
 
Any reason not to forget about the 'RAID' features of the Areca controller and simply build ZFS on top of JBOD or single-disk volumes?
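
For comparison, with the disks exported as pass-through/JBOD volumes the whole-disk setup is just something like this (device names are examples):
Code:
# zpool create backup raidz da1 da2 da3   # let ZFS handle redundancy instead of the controller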
 
Update for 9.1-RELEASE

[cmd=]gpart backup[/cmd]/[cmd=]gpart restore[/cmd] fixes the GPT; there is no need for gdisk any more
Code:
[root@fbsd32 ~]# gpart show da1
=>      34  16777149  da1  GPT  (16G) [CORRUPT]
        34  16777149    1  freebsd-ufs  (8G)

[root@fbsd32 ~]# df
Filesystem 1K-blocks    Used   Avail Capacity  Mounted on
/dev/da0p2   1928028  800612  973176    45%    /
devfs              1       1       0   100%    /dev
/dev/da1p1   8106680 1814244 5643904    24%    /usr/ports
[root@fbsd32 ~]# umount /usr/ports 
[root@fbsd32 ~]# gpart backup da1 > /tmp/1
[root@fbsd32 ~]# gpart restore -F da1 < /tmp/1 
[root@fbsd32 ~]# gpart show da1
=>      34  33554365  da1  GPT  (16G)
        34  16777149    1  freebsd-ufs  (8G)
  16777183  16777216       - free -  (8.0G)
(the topic is still relevant for VPS/VDS users)
 