ZFS CKSUM errors replacing last drive in 4TB -> 10TB upgrade

Michael Bushey

New Member

Reaction score: 1
Messages: 14

I upgraded a 6-drive RAID-Z2 pool from 4TB drives to 10TB drives. This is the command I used to swap out the last drive:

# zpool replace logrus diskid/DISK-PK1334PBG6AWBSp1 diskid/DISK-2YJ7LSZD
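For context, the per-drive workflow I followed was roughly the sketch below (device names are the diskid labels from my pool; the autoexpand step at the end is my understanding of what lets the pool grow to the new capacity once the last drive is in, not something I have confirmed on this pool yet):

```shell
# Replace the old 4TB disk with the new 10TB disk (diskid naming as above).
zpool replace logrus diskid/DISK-PK1334PBG6AWBSp1 diskid/DISK-2YJ7LSZD

# Watch the resilver until it completes.
zpool status logrus

# After the final drive is replaced: allow the vdev to expand to 10TB.
zpool set autoexpand=on logrus
zpool online -e logrus diskid/DISK-2YJ7LSZD
```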

This is the result:
# zpool status logrus

  pool: logrus
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 3.47T in 0 days 14:02:22 with 281 errors on Wed Apr  3 06:44:32 2019

        NAME                        STATE     READ WRITE CKSUM
        logrus                      DEGRADED     0     0   555
          raidz2-0                  DEGRADED     0     0   562
            replacing-0             UNAVAIL      0     0     0
              9474383208536637920   UNAVAIL      0     0     0  was /dev/diskid/DISK-PK1334PBG6AWBSp1
              diskid/DISK-2YJ7LSZD  ONLINE       0     0     0
            diskid/DISK-JEGUHU5N    ONLINE       0     0     0
            diskid/DISK-JEGRVEUN    ONLINE       0     0     0
            diskid/DISK-JEGW17PN    ONLINE       0     0    16
            diskid/DISK-JEGW6HEN    ONLINE       0     0     0
            diskid/DISK-JEGVT0VN    ONLINE       0     0     0

errors: 275 data errors, use '-v' for a list
I'm really confused about what all the different error counts mean. None of the numbers seem to agree with each other, and I've never before seen checksum errors on the pool name line or the "raidz2-0" vdev line. Any ideas?

# uname -a
FreeBSD suhuy.local 12.0-RELEASE FreeBSD 12.0-RELEASE r341666 GENERIC amd64

Using the -v flag on zpool status:
errors: Permanent errors have been detected in the following files: