Hello,
My 'storage' pool doesn't seem to be alive... Let me show the output:
Code:
# zfs list
cannot iterate filesystems: I/O error
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool       2.41G  91.6G    18K  none
rpool/root  72.2M  91.6G  72.2M  legacy
rpool/usr   1.35G  91.6G  1.35G  legacy
rpool/var    137M  91.6G   137M  legacy
temp        67.5K  1.69T    18K  /temp
Code:
# zpool status storage
  pool: storage
 state: FAULTED
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        storage     FAULTED      0     0     1  corrupted data
          raidz2    DEGRADED     0     0     6
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da11    ONLINE       0     0     0
            spare   DEGRADED     0     0     0
              da12  FAULTED      0     0     0  corrupted data
              da22  ONLINE       0     0     0
            da12    ONLINE       0     0     0
            da13    ONLINE       0     0     0
            da14    ONLINE       0     0     0
            da15    ONLINE       0     0     0
            da16    ONLINE       0     0     0
            da17    ONLINE       0     0     0
            da18    ONLINE       0     0     0
            da19    ONLINE       0     0     0
            da20    ONLINE       0     0     0
            da21    ONLINE       0     0     0
Code:
# zpool replace storage da12 da23
cannot open 'storage': pool is unavailable
Code:
# zpool scrub storage
cannot scrub 'storage': pool is currently unavailable
As you can see from the first output, zfs list fails with "cannot iterate filesystems: I/O error", and running zpool status on the pool in question shows a single faulted drive. I had earlier replaced the faulted drive with the hot spare, but as you can see from the output above, that fell apart after a reboot.
So! A few things worth mentioning: the system obviously had issues which were "fixed" for the time being, but it failed again after a reboot. I had to specify the shell path before I could even boot into the rpool, and start services manually. What steps can I take to bring this pool back to life so I can repair it?
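For completeness, the rough sequence I was planning to try next is below. I don't know whether an export/import cycle or a zpool clear is even safe on a pool in this state, so treat these as guesses on my part rather than anything I've verified:

Code:
# zpool export storage
# zpool import storage
# zpool clear storage

If the import refuses, I understand zpool import -f storage can force it, but I'd rather hear from someone who knows before forcing anything.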