ZFS zpool status -v - Permanent errors in <0x6afd>:<0x2f10e>

That usually means the corruption is in metadata. If file data is corrupt, zpool status -v will list the filename. If metadata is corrupt, whether it's the metadata storing that file's details or ZFS metadata "higher up", it can't resolve a filename, so it just gives the location of the corrupted data as <dataset>:<object> in hex (not that that's much use on its own, as far as I'm aware).
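
If you want to try mapping that first hex value back to a dataset name, zdb can list datasets along with their object IDs. A minimal sketch, assuming the pool is called tank (substitute your own pool name), and assuming zdb -d on your system prints each dataset with an "ID nnn," field like it does on the systems I've used:

    # convert the dataset ID from the error report to decimal
    printf '%d\n' 0x6afd            # -> 27389
    # list the pool's datasets and look for that ID
    zdb -d tank | grep 'ID 27389,'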

Unfortunately, I'm not aware of any easy way to tell which part of the filesystem is corrupt so that you can remove it.
 
I may be wrong, but scrubbing won't actually remove anything. You can run zpool clear, which resets the error counters, but unless you somehow get rid of the corrupted data, the next scrub will just find and report it again.
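
The cycle looks something like this (again assuming a pool named tank):

    zpool clear tank        # reset the error counters
    zpool scrub tank        # re-read and verify everything in the pool
    zpool status -v tank    # any corruption still present shows up again here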
 
I know that the second hex value is the inode (the object ID); the first is probably the dataset. Usually ZFS is capable of identifying the dataset by name, though perhaps that info was lost here. If you can figure out the dataset, you could do a find <path> -inum ## -delete to make the corrupt file go away (see the sketch below). I found that often the reason it couldn't identify the filename was that the file had hard links and the original link had been deleted (that's the only name it tries to associate, and the one it had previously reported).
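
For example, with the object ID from the title, and assuming the dataset turns out to be mounted at /tank/data (a placeholder path), it would look something like this; I'd run it without -delete first to confirm what matches:

    # find expects the inode number in decimal
    printf '%d\n' 0x2f10e                         # -> 192782
    find /tank/data -xdev -inum 192782            # confirm the match first
    find /tank/data -xdev -inum 192782 -delete    # then remove it

The -xdev keeps find from descending into other datasets mounted underneath, since inode numbers are only unique within a single filesystem.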

When I was having this problem, it was a mirrored zpool for sysutils/backuppc using 2TB Seagate drives from early after the flood. I had numerous failures until they eventually exchanged them for a different model....with 3yr warranties, which was about how long they lasted. (And then I made the mistake of accepting WD Purple drives, which were slightly cheaper than WD Reds with seemingly identical specs -- read performance was horrible, and a workstation generally does more reads than writes.)

Oh yeah, things get tricky when the referenced inode is part of a snapshot: snapshots are read-only, so you can't delete the file out of them.
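
The only way I know to clear it then is to destroy every snapshot that still references the bad data (dataset and snapshot names below are placeholders):

    zfs list -t snapshot -r tank/data        # see which snapshots exist
    zfs destroy tank/data@some-old-snap      # destroy the one(s) holding the bad file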

The Dreamer
 