ZFS Imported Pool, cannot create snapshots : out of space

In the process of rebuilding a machine off of 9.3 and onto 11.2, I imported the remaining 64-GB member of the old machine's pool onto a running 11.1-RELEASE-p11 machine:
$ sudo zpool import -f -R /port19-mnt port19

When I try to snapshot it for send/recv to another pool for later reference, it fails with
$ sudo zfs snapshot -r port19@2018-07-15-final
cannot create snapshots : out of space
no snapshots were created


Admittedly, there's not a lot of space, but what's puzzling is that zpool get reports 750 MB free, while zfs list shows zero available on every dataset.

Is there something that I missed in the process that is preventing the creation of a snapshot and the "hard zero" for available space? That wasn't the case when this was the boot drive for the system; snapshots were "fine" on the running system with no out-of-space indications of any sort.
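For reference, the intended follow-up once the snapshot exists is just a recursive send into a pool on the new machine, roughly like this (the destination pool name here is only a placeholder):
Code:
$ sudo zfs send -R port19@2018-07-15-final | sudo zfs recv -d -u newpool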

Code:
$ zpool get all port19
NAME    PROPERTY                       VALUE                          SOURCE
port19  size                           27.5G                          -
port19  capacity                       97%                            -
port19  altroot                        /port19-mnt                    local
port19  health                         DEGRADED                       -
port19  guid                           11036724123318587500           default
port19  version                        -                              default
port19  bootfs                         port19/_root                   local
port19  delegation                     on                             default
port19  autoreplace                    off                            default
port19  cachefile                      none                           local
port19  failmode                       wait                           default
port19  listsnapshots                  off                            default
port19  autoexpand                     off                            default
port19  dedupditto                     0                              default
port19  dedupratio                     1.00x                          -
port19  free                           750M                           -
port19  allocated                      26.8G                          -
port19  readonly                       off                            -
port19  comment                        -                              default
port19  expandsize                     27.8G                          -
port19  freeing                        0                              default
port19  fragmentation                  -                              -
port19  leaked                         0                              default
port19  feature@async_destroy          enabled                        local
port19  feature@empty_bpobj            active                         local
port19  feature@lz4_compress           active                         local
port19  feature@multi_vdev_crash_dump  disabled                       local
port19  feature@spacemap_histogram     disabled                       local
port19  feature@enabled_txg            disabled                       local
port19  feature@hole_birth             disabled                       local
port19  feature@extensible_dataset     disabled                       local
port19  feature@embedded_data          disabled                       local
port19  feature@bookmarks              disabled                       local
port19  feature@filesystem_limits      disabled                       local
port19  feature@large_blocks           disabled                       local
port19  feature@sha512                 disabled                       local
port19  feature@skein                  disabled                       local

Code:
$ zfs list
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
port19                                        26.8G      0   144K  /port19-mnt/port19
port19/_root                                  2.38G      0  1.39G  /port19-mnt
port19/tmp                                     368M      0  54.6M  /port19-mnt/tmp
port19/usr                                    22.2G      0   214M  /port19-mnt/usr
port19/usr/home                               12.6G      0  12.6G  /port19-mnt/usr/home
port19/usr/local                              8.62G      0  1.25G  /port19-mnt/usr/local
port19/usr/obj                                 200K      0   188K  /port19-mnt/zfs-port19/usr/obj
port19/usr/ports                               600K      0   204K  /port19-mnt/zfs-port19/usr/ports
port19/usr/ports/distfiles                     192K      0   192K  /port19-mnt/zfs-port19/usr/ports/distfiles
port19/usr/ports/packages                      192K      0   192K  /port19-mnt/zfs-port19/usr/ports/packages
port19/usr/src                                 200K      0   188K  /port19-mnt/zfs-port19/usr/src
port19/var                                    1.76G      0  26.6M  /port19-mnt/var
port19/var/crash                               236K      0   152K  /port19-mnt/var/crash
port19/var/db                                 1.27G      0  21.7M  /port19-mnt/var/db
port19/var/db/pkg                              559M      0  44.5M  /port19-mnt/var/db/pkg
port19/var/empty                               144K      0   144K  /port19-mnt/var/empty
port19/var/log                                12.0M      0   664K  /port19-mnt/var/log
port19/var/mail                               4.74M      0  2.53M  /port19-mnt/var/mail
port19/var/run                                5.38M      0   308K  /port19-mnt/var/run
port19/var/spool                              18.3M      0  1.78M  /port19-mnt/var/spool
port19/var/tmp                                13.6M      0  10.4M  /port19-mnt/var/tmp
 
I'm not 100% sure off the top of my head whether this can affect it, but even so: a pool in a DEGRADED state can't be good either way.
 
Any zvols in there, like one for a swap volume? They need a lot (well, their size) of extra space reserved.
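A quick way to check (just a sketch):
Code:
$ zfs list -r -t volume port19
$ zfs get -r refreservation port19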
 
DEGRADED is "expected" as the former members of the mirror are long gone (they were USB sticks or card readers over the years). The pool was functioning "well" and passed a scrub on the new host:
Code:
$ zpool status port19
  pool: port19
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
    the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0h6m with 0 errors on Sun Jul 15 13:33:34 2018
config:

    NAME                      STATE     READ WRITE CKSUM
    port19                    DEGRADED     0     0     0
      mirror-0                DEGRADED     0     0     0
        15178359394392615998  UNAVAIL      0     0     0  was /dev/gpt/zrootdisk2
        label/zrootdisk-ssd   ONLINE       0     0     0
        4428486766055510360   UNAVAIL      0     0     0  was /dev/da0p3

errors: No known data errors
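Since those members are gone for good, I could presumably detach them by GUID to get the pool back to an ONLINE state (untested sketch, using the GUIDs from the status output above):
Code:
$ sudo zpool detach port19 15178359394392615998
$ sudo zpool detach port19 4428486766055510360
$ zpool status port19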

No zvols that I recall -- this dates back to the days of hand-crafting a ZFS pool. Not sure why the GPT shows "corrupted":
Code:
$ gpart show /dev/da0
=>       34  125045355  da0  GPT  (60G) [CORRUPT]
         34          6       - free -  (3.0K)
         40       1024    1  freebsd-boot  (512K)
       1064    8388608    2  freebsd-swap  (4.0G)
    8389672  116655712    3  freebsd-zfs  (56G)
  125045384          5       - free -  (2.5K)

Since I'm failing to get the 11.2-RELEASE installer to boot with the LSI RAID controller on the new build, I'm tempted to shove this drive back into the old box and see if I can bring it up there (after I at least grab a tar off it).
 
To free up some space you could remove things like /usr/src and /usr/ports; since you can download fresh ones easily, there's no point in keeping them. There's also some data locked up in /var/tmp that could be removed.
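Something along these lines should do it (a sketch; note the pool is mounted under the /port19-mnt altroot, and destroying a dataset is irreversible):
Code:
$ sudo zfs destroy -r port19/usr/src
$ sudo zfs destroy -r port19/usr/ports
$ sudo rm -rf /port19-mnt/var/tmp/*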
 
Still a mystery on this one: when I booted the old carcass with the drive, there was plenty of space (not the "hard 0" shown on the 11.1 system) and I was able to make the needed snapshot. Importing it back on the 11.1 system, it's again "0" for AVAIL on every filesystem in the pool.

On the bad GPT, looking at the console/logs, it is the secondary GPT that is corrupt, which I recall being annoyingly common in my early days of ZFS with FreeBSD.
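If it really is just the backup table, gpart should be able to rebuild it in place once I'm ready (untested sketch; it rewrites the secondary GPT at the end of the disk without touching the partitions themselves):
Code:
$ sudo gpart recover da0
$ gpart show da0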
 
On the bad GPT, looking at the console/logs, it is the secondary GPT that is corrupt, which I recall being annoyingly common in the early days of ZFS with FreeBSD.
This has nothing to do with ZFS. I suspect this pool was first created on smaller disks which, over time, got replaced by bigger disks but carried over the 'old' partition tables, so the backup GPT is no longer at the very end of the disk, which is what gpart flags as corrupt.
 
I don't have a solution and no concrete knowledge, but from what I know of how ZFS works, it reserves some space in each pool (the so-called slop space).

I've come across similar situations from time to time, and because of that I keep a dataset on my pools with a reservation and no data on it. So in case of a full pool, I can just lower the reservation a bit and have free space again.
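A minimal sketch of that safety net (pool and dataset names are just placeholders):
Code:
# Reserve 1 GB that ordinary datasets cannot consume
$ sudo zfs create -o reservation=1G -o mountpoint=none tank/reserved
# When the pool runs full, give the space back temporarily
$ sudo zfs set reservation=none tank/reserved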

So if that reserved percentage changed between 9.3 and 11.1, that would be an explanation.

You may read:
https://lists.freebsd.org/pipermail/freebsd-fs/2014-November/020428.html
https://lists.freebsd.org/pipermail/freebsd-fs/2014-December/020666.html
"
sysctl vfs.zfs.spa_slop_shift=6 would tune down the reserved space to
1/(2^6) (=1.5625%).
"
I cannot test this just now.
You might also read:
https://lists.freebsd.org/pipermail/freebsd-stable/2013-December/076163.html

edit:
Here is another one, with a similar problem:
https://forums.freebsd.org/threads/no-more-free-space.49095/

I did not look too closely at your numbers, but it looks similar: yours shows "port19  capacity  97%", and in the thread below the problem apparently appeared somewhere between 96.0% and 96.9% used:
https://lists.freebsd.org/pipermail/freebsd-fs/2014-November/020424.html
 