ZFS Failing drive with GELI encrypted partition needs replacing, do I have the procedure correct?

I typically run FreeBSD as virtual machines on VMware backed by hardware RAID, so this is my first experience with a personal storage box at home running FreeBSD on an older bare-metal server. Shortly after upgrading to FreeBSD 14 I started experiencing long read delays when accessing the server over NFS or Samba. Sometimes login takes a very long time, and ls can hang for several minutes in some cases.

After checking logs and looking at SMART data, one of the drives is definitely having trouble reading; I admit all of them are well past their advertised lifespan. The setup is a four-disk RAIDZ1, configured using the installer's Auto (ZFS) option with encryption. A scrub is currently running at a very slow pace and is encountering a lot of read errors.

I believe the best course of action is to stop the scrub, remove the failing device from the pool, and replace it, but I'm looking for confirmation that the steps I have are the correct procedure with GELI encryption. Unfortunately the machine doesn't have hot-swap bays, so I will need to remove the drive from the pool, power off, and then replace it. I am assuming that when I power off the server and replace the drive it will retain the same device identifier. Further below are additional details about the pool/device.

Another question I have is why ZFS hasn't offlined the drive or degraded the pool given the large number of errors. I'm not able to use the server for any of its intended purposes (mostly media streaming), and I figured ZFS would have kicked the drive out already, but maybe this particular type of read error doesn't meet the conditions.

Code:
zpool scrub -s zroot                     # stop the running scrub
gpart backup ada3 > ada3_part            # save the partition table
zpool offline zroot ada3p3.eli
# shut down, swap the drive, boot
gpart restore -l ada3 < ada3_part        # -l also restores the partition labels
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
geli init -g /dev/ada3p3                 # prompts for a passphrase; -g enables boot-time decryption
geli attach /dev/ada3p3                  # prompts for that passphrase
zpool replace zroot ada3p3.eli ada3p3.eli
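One thing I'd like to get right is initializing the new GELI provider with the same parameters as the surviving members. A rough sketch of how that could be checked first (the `geli_field` helper and the saved-dump filename are my own; the field names are the ones geli(8) prints for `geli dump`):

```shell
#!/bin/sh
# Save a healthy member's GELI metadata first, e.g.:
#   geli dump /dev/ada2p3 > ada2p3.geli
# Then pull individual fields out of that saved dump.
geli_field() {  # geli_field NAME FILE -> value of the "NAME:" line
    awk -v f="$1:" '$1 == f { print $2 }' "$2"
}
# The sector size could then feed the init of the new provider, e.g.:
#   geli init -g -s "$(geli_field sectorsize ada2p3.geli)" /dev/ada3p3
```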
Hundreds, maybe thousands of the following are logged:
Code:
(ada3:ata3:0:0:0): CAM status: Command timeout
(ada3:ata3:0:0:0): Retrying command, 0 more tries remain
(ada3:ata3:0:0:0): READ_DMA48. ACB: 25 00 08 71 01 40 80 00 00 00 e8 07
(ada3:ata3:0:0:0): CAM status: Command timeout
(ada3:ata3:0:0:0): Error 5, Retries exhausted
GEOM_ELI: g_eli_read_done() failed (error=5) ada3p3.eli[READ(offset=1082379079680, length=1036288)]
Code:
# zpool status
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub in progress since Thu Dec  7 09:37:42 2023
    3.30T / 10.6T scanned at 13.3M/s, 2.27T / 10.6T issued at 9.18M/s
    209M repaired, 21.42% done, no estimated completion time
config:

    NAME            STATE     READ WRITE CKSUM
    zroot           ONLINE       0     0     0
      raidz1-0      ONLINE       0     0     0
        ada1p3.eli  ONLINE       0     0     0
        ada2p3.eli  ONLINE       0     0     0
        ada3p3.eli  ONLINE   5.40K     0     0  (repairing)
        ada4p3.eli  ONLINE       0     0     0

errors: No known data errors
Code:
# smartctl -a /dev/ada3
smartctl 7.4 2023-08-01 r5530 [FreeBSD 14.0-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi/HGST Ultrastar 7K4000
Device Model:     HGST HUS724040ALA640
Serial Number:    PNV331P1GMPDPY
LU WWN Device Id: 5 000cca 22bc8f36b
Firmware Version: MFAOAA70
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5528
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun Dec 10 09:43:34 2023 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.

General SMART Values:
Offline data collection status:  (0x80)    Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (   24) seconds.
Offline data collection
capabilities:              (0x5b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 552) minutes.
SCT capabilities:            (0x003d)    SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   067   067   016    Pre-fail  Always       -       322503865
  2 Throughput_Performance  0x0005   138   138   054    Pre-fail  Offline      -       76
  3 Spin_Up_Time            0x0007   179   179   024    Pre-fail  Always       -       403 (Average 460)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       31
  5 Reallocated_Sector_Ct   0x0033   001   001   005    Pre-fail  Always   FAILING_NOW 1999
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   140   140   020    Pre-fail  Offline      -       26
  9 Power_On_Hours          0x0012   090   090   000    Old_age   Always       -       76011
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       31
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       121
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       121
194 Temperature_Celsius     0x0002   150   150   000    Old_age   Always       -       40 (Min/Max 22/48)
196 Reallocated_Event_Count 0x0032   001   001   000    Old_age   Always       -       2078
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged
 
There's no need for the first scrub. It will take a lot of time on a damaged disk, and you are going to replace this disk anyway.
The new disk may have a different start sector and a different sector size. It's better to take note of the layout with gpart show ada3 and create the same partitions manually on the new disk instead of using gpart restore.
 
Thanks, the scrub was started automatically from cron. I am replacing it with the same model of drive, but will take note of the starting sector.
 