Hi there,
I am running FreeBSD 8.1, which means I have ZFS version 3 (pool version 14). I have a RAIDZ2 pool of 15 drives (2 TB Samsung HD204UI). I'm aware this doesn't match the ZFS Best Practices recommendations; I was more interested in storage capacity than IOPS. One of the drives has failed.
As I understand it from my reading here and in other documentation, the correct procedure to replace a drive is:
1. Offline the drive (using zpool offline poolname device)
2. Power off, swap the drive, power on
3. Replace the drive (using zpool replace poolname device), assuming the old and new devices have the same name.
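Spelled out with a hypothetical pool name (tank) and device (da5), standing in for my actual names, I believe the commands for the steps above would look like:

```shell
# Step 1: take the failed disk out of the pool
# ("tank" and "da5" are placeholders for illustration)
zpool offline tank da5

# Step 2: power off, physically swap the drive, power back on

# Step 3: tell ZFS to rebuild onto the new disk; with a single
# device argument, zpool replace assumes the replacement disk
# appears under the same device name as the old one
zpool replace tank da5
```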
Initially I had some trouble offlining the disk: it gave me a "no valid replicas" message, even though the pool remained available when I physically unplugged the disk. After I ran a scrub, it resilvered ~500 MB and I was able to offline it.
My issue is that I used GEOM labels for the disks, so rather than my offlined disk appearing in the zpool as da0, it appears as label/disk1.
Do I need to label the replacement drive? As I understand it, the glabel metadata is written to the last sector (512 bytes) of the disk, so could this play havoc with the zpool if I don't label it?
Running glabel list with the drive unplugged still shows a glabel for da0 pointing to label/disk1 (my faulty disk). Also, with the drive unplugged, camcontrol devlist shows a da0 attached to the system. Does FreeBSD number its drives sequentially based on controller? If no other hardware in the system changes (I have pulled the disk previously located at /dev/da0 and replaced it with another disk), is it safe to assume that the replacement disk will also show up as /dev/da0?
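To double-check the device naming before and after the swap, my plan is to compare what CAM and GEOM each report (a sketch; exact output will vary with my hardware):

```shell
# List every disk the CAM layer currently sees; with the failed
# drive unplugged, its slot should be missing (or show the new disk
# at the same daN name once it's inserted)
camcontrol devlist

# Show which GEOM labels map to which providers; a stale entry for
# the unplugged disk may linger until its metadata is gone
glabel status
```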
If so, labelling the disk will be fairly straightforward, and I believe all I need to do to resolve my problem is:
Code:
# glabel label disk1 /dev/da0
# zpool replace poolname label/disk1
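Assuming that works, I'd then keep an eye on the rebuild with something like:

```shell
# Check pool health and resilver progress after the replace;
# "poolname" is my pool's actual name
zpool status -v poolname
```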
I am still reasonably green when it comes to FreeBSD, so if there's something you think I've missed or don't quite understand properly, please let me know. I'm anxious to get this resolved quickly so I can bring my file server back online.