Solved zpool max space barrier?

I'm baffled. Hence my post. Searching doesn't seem to return any information relevant to my problem.

New install, 12.0-RELEASE amd64. Installed base, kernel and lib32 sets.

I have a ciss0 hardware RAID controller with two logical drives defined: da0 at 500GB, and the remainder (about 5TB) as da1. dmesg reports both sizes accurately, and gpart list shows the da1 provider accurately, but zpool create only gives me a 1.9TB pool out of the 5TB of available space. See the outputs below.

Code:
root@warden:~ # gpart list da1
Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 10672312967
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da1s1
   Mediasize: 5464224219136 (5.0T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(1,GPT,454fc82d-6878-11e9-a6c9-e4115b1388c8,0x28,0x27c1e9260)
   rawuuid: 454fc82d-6878-11e9-a6c9-e4115b1388c8
   rawtype: 516e7cb4-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 5464224219136
   offset: 20480
   type: freebsd
   index: 1
   end: 10672312967
   start: 40
Consumers:
1. Name: da1
   Mediasize: 5464224260096 (5.0T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Code:
root@warden:~ # mount
ciss0d0 on / (zfs, local, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
ciss0d0/usr on /usr (zfs, local, nfsv4acls)
ciss0d0/var on /var (zfs, local, noexec, nfsv4acls)
ciss0d1 on /jails (zfs, local, nfsv4acls)

Code:
root@warden:~ # df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
ciss0d0        479G    428M    479G     0%    /
devfs          1.0K    1.0K      0B   100%    /dev
ciss0d0/usr    480G    1.0G    479G     0%    /usr
ciss0d0/var    479G    6.1M    479G     0%    /var
ciss0d1        1.9T     88K    1.9T     0%    /jails

I only used the simplest command: zpool create ciss0d1 /dev/da1s1a
 
You are using a hardware RAID controller. You might or might not know this: using a hardware RAID controller with ZFS is not the best way to do it. ZFS prefers to be as close to the hardware as possible, in other words the bare disks themselves, not a piece of hardware plus firmware that claims to know how to do RAID better than ZFS does.
Also, the output above shows gpart for da1, but the other commands use 'ciss0d0' and 'ciss0d1'. We don't know how the 'ciss*' names relate to 'da1', whether the 'ciss*' are disks, and, if they are, what size they are.
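For example (just a suggestion, adjust the device names if yours differ), something like this would show us the sizes the kernel actually sees:

Code:
# diskinfo -v da0 da1
# camcontrol devlist

diskinfo -v prints the media size in bytes and in sectors for each disk, and camcontrol devlist shows what the controller is presenting to the OS.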
 
Why did you use a legacy freebsd partition type as a ZFS provider? That's not right.

From gpart(8)
Code:
PARTITION TYPES
....
freebsd                    A FreeBSD partition subdivided into filesystems with a BSD
                                disklabel.  This is a legacy partition type and should not
                                be used for the APM or GPT schemes.  The scheme-specific
                                types are "!165" for MBR, "!FreeBSD" for APM, and
                                "!516e7cb4-6ecf-11d6-8ff8-00022d09712b" for GPT.

The right partition type would be freebsd-zfs.
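If the partition layout itself is fine and only the type is wrong, the type can be changed in place. A sketch, assuming index 1 on da1 as shown in your gpart list:

Code:
# gpart modify -i 1 -t freebsd-zfs da1

gpart modify only rewrites the type in the partition table; the pool sitting on top would still have to be destroyed and recreated to pick up any change.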
If you have further questions, please also post the output of
gpart show
zpool status
zfs list
and maybe...
camcontrol devlist
 
You are using a hardware RAID controller. You might or might not know this: using a hardware RAID controller with ZFS is not the best way to do it. ZFS prefers to be as close to the hardware as possible, in other words the bare disks themselves, not a piece of hardware plus firmware that claims to know how to do RAID better than ZFS does.
Also, the output above shows gpart for da1, but the other commands use 'ciss0d0' and 'ciss0d1'. We don't know how the 'ciss*' names relate to 'da1', whether the 'ciss*' are disks, and, if they are, what size they are.

I understand all of this, tingo, and they are good points, noted. I will still use the hardware RAID, though.

ciss0 is the controller that presents da0 (500GB) and da1 (about 5TB); I derived the pool names for both of those from it, ciss0d0 and ciss0d1. "tank" or "data" aren't descriptive enough for me when it comes time to troubleshoot later.
 
Why did you use a legacy freebsd partition type as a ZFS provider? That's not right.

From gpart(8)
Code:
PARTITION TYPES
....
freebsd                    A FreeBSD partition subdivided into filesystems with a BSD
                                disklabel.  This is a legacy partition type and should not
                                be used for the APM or GPT schemes.  The scheme-specific
                                types are "!165" for MBR, "!FreeBSD" for APM, and
                                "!516e7cb4-6ecf-11d6-8ff8-00022d09712b" for GPT.

The right partition type would be freebsd-zfs.
If you have further questions, please also post the output of
gpart show
zpool status
zfs list
and maybe...
camcontrol devlist

Thank you loads for this information, the manpage snippet, etc. All very helpful.

I followed MBR Root on ZFS as my basis for setting up da0 (pool named ciss0d0), and just borrowed enough of those directions to get a zpool onto da1 (pool named ciss0d1). With your snippet, I recreated the partition with the proper FreeBSD GUID type, but zpool still only creates a 1.9TB (should we just call it 2TB?) pool.

Find the requested command outputs below:

Code:
root@warden:~ # gpart show da1
=>         40  10672312928  da1  GPT  (5.0T)
           40  10672312928    1  freebsd  (5.0T)

Code:
root@warden:~ # zpool status
  pool: ciss0d0
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ciss0d0     ONLINE       0     0     0
          da0s1a    ONLINE       0     0     0

errors: No known data errors

  pool: ciss0d1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ciss0d1     ONLINE       0     0     0
          da1s1a    ONLINE       0     0     0

errors: No known data errors

Code:
root@warden:~ # zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
ciss0d0      1.47G   479G   428M  /
ciss0d0/usr  1.05G   479G  1.05G  /usr
ciss0d0/var  6.23M   479G  6.23M  /var
ciss0d1       508K  1.92T    88K  /jails

Code:
root@warden:~ # camcontrol devlist
<HP RAID 5 OK>                     at scbus0 target 0 lun 0 (pass0,da0)
<HP RAID 5 OK>                     at scbus0 target 1 lun 0 (pass1,da1)

Thanks in advance for any updated information.
 
Now it has become clearer to me what you were doing.
Your problem is that the legacy MBR/BSD-label style of partitioning doesn't support partitions bigger than 2TB (its sector counts are 32-bit).
I don't see the point of using a BSD partition as your ZFS provider anyway. Your computer obviously supports GPT, so why not use it?

You should destroy the jail pool and recreate the first partition as freebsd-zfs (pure GPT, no stone-age MBR).
I assume your disks don't hold any valuable data yet. (Don't blindly execute commands.)

Code:
# zpool destroy ciss0d1
# gpart delete -i 1 da1
# gpart add -t freebsd-zfs da1
# zpool create ciss0d1 /dev/da1p1

Now run zfs list and you'll see you have 5TB available.
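If the numbers still look wrong after that, post the output of these two as well (nothing destructive, just for comparison):

Code:
# gpart show da1
# zpool list ciss0d1

zpool list shows the SIZE the pool was created with, which is the figure that has been stuck near 2TB so far.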
 
Now it has become clearer to me what you were doing.
Your problem is that the legacy MBR/BSD-label style of partitioning doesn't support partitions bigger than 2TB (its sector counts are 32-bit).
I don't see the point of using a BSD partition as your ZFS provider anyway. Your computer obviously supports GPT, so why not use it?

You should destroy the jail pool and recreate the first partition as freebsd-zfs (pure GPT, no stone-age MBR).
I assume your disks don't hold any valuable data yet. (Don't blindly execute commands.)

Code:
# zpool destroy ciss0d1
# gpart delete -i 1 da1
# gpart add -t freebsd-zfs da1
# zpool create ciss0d1 /dev/da1p1

Now run zfs list and you'll see you have 5TB available.

If I'm understanding your post correctly: MBR-style partitioning maxes out just shy of 2TB, and if I replace the freebsd GUID type with the freebsd-zfs type, I should get the full 5TB.

Strike that. Commands below; still at 1.9TB with a freebsd-zfs type.


Code:
root@warden:~ # zfs unmount ciss0d1
root@warden:~ # zpool destroy ciss0d1
root@warden:~ # gpart delete -i 1 da1
da1s1 deleted
root@warden:~ # gpart add -t freebsd-zfs da1
da1p1 added
root@warden:~ # ls /dev/da1p*
/dev/da1p1      /dev/da1p1a
root@warden:~ # zpool create -fm /jails ciss0d1 /dev/da1p1a
root@warden:~ # zfs mount
ciss0d0                         /
ciss0d0/usr                     /usr
ciss0d0/var                     /var
ciss0d1                         /jails
root@warden:~ # df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
ciss0d0        479G    428M    479G     0%    /
devfs          1.0K    1.0K      0B   100%    /dev
ciss0d0/usr    480G    1.0G    479G     0%    /usr
ciss0d0/var    479G    6.2M    479G     0%    /var
ciss0d1        1.9T     88K    1.9T     0%    /jails
root@warden:~ # gpart show da1
=>         40  10672312928  da1  GPT  (5.0T)
           40  10672312928    1  freebsd-zfs  (5.0T)

You are correct, it's just a base install: no patches, no updates, and no data, on the new 12.0-RELEASE.
I noticed (and remembered), while looking at gpart's output, that the deleted 'da1s1' was an MBR-style slice while 'da1p1' is a GPT partition. I still have no better progress, but I appreciate the response time.
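Could something still be tasting an old label inside the new partition? That would explain /dev/da1p1a appearing right after a fresh gpart add. I can check and post something like this (guessing at the right invocation) if it helps:

Code:
# gpart show da1p1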

Thanks again, looking forward to a reply.
 
root@warden:~ # zpool create -fm /jails ciss0d1 /dev/da1p1a
The provider must be da1p1, not da1p1a; that trailing 'a' is coming from a leftover BSD label inside the partition.
Please destroy the whole GPT partition table, to remove all traces of the old MBR/BSD label metadata.
Code:
# zpool destroy ciss0d1
# gpart destroy -F da1
# gpart create -s gpt da1
# gpart add -t freebsd-zfs da1
# zpool create ciss0d1 /dev/da1p1
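Afterwards, a quick check (nothing destructive) should show the full size. And if you want the pool mounted at /jails again, add -m /jails to the zpool create, as in your earlier command:

Code:
# gpart show da1
# zpool list ciss0d1
# zfs list ciss0d1

zpool list should now report a SIZE of roughly 5T instead of the ~1.9T you have been seeing.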
 