ZFS Root Install

Can someone tell me what might be wrong with the following procedure?

Code:
zpool create -O mountpoint=none -o cachefile=/tmp/zpool.cache -o autoexpand=on pool1 md2.nop md3.nop md4.nop

zpool replace pool1 md2.nop gpt/disk0
zpool offline pool1 md2.nop
mdconfig -d -u 2
rm /tmp/tmpdsk0

zpool replace pool1 md3.nop gpt/disk1 
zpool offline pool1 md3.nop
mdconfig -d -u 3
rm /tmp/tmpdsk1

zpool replace pool1 md4.nop gpt/disk2 
zpool offline pool1 md4.nop
mdconfig -d -u 4
rm /tmp/tmpdsk2

After mounting all the correct directories and editing all the required conf files during the FreeBSD install, I reboot the system and get the following error:

Code:
Default: zroot:/boot/kernel/kernel
boot: ZFS: i/o error - all block copies unavailable

Is this because I replaced the gnop devices with real hard disks and ZFS didn't have a chance to resilver the drives? Does anyone know how I can fix this issue?
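
For reference, I believe the resilver state could have been checked before taking anything offline, with something like:

Code:
# zpool status -v pool1

If a replace has not finished resilvering, the status output should still show the old and new devices paired under a "replacing" entry.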
 
If you have a single pool with two disks (a mirror vdev), can you add a third raw disk to the pool that will not be associated with the mirror?
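
From what I can gather, as a sketch using the device names from above, zpool add refuses a vdev whose redundancy doesn't match the pool unless you force it:

Code:
# zpool add pool1 gpt/disk2
# zpool add -f pool1 gpt/disk2

I believe the first command is rejected with a "mismatched replication level" complaint, while the second forces the disk in as an unredundant single-disk vdev striped alongside the mirror. Is that right?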
 
jem said:
Did you copy /tmp/zpool.cache to /boot/zfs/zpool.cache on your pool before trying to boot the system?

Yes, I did that. I think the problem might be that I forced the replication level, like this:

1. Create a pool with a mirror vdev containing two disks.
2. Force-add a gnop device to the pool that is not associated with the mirror vdev, forcing the replication level (see the sketch below).
3. Take the gnop device offline and replace it with a single raw disk.
4. Proceed with the install steps.

I believe this is causing the boot problems. Maybe I should try adding the third disk after the OS install is completed.
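
In command form, step 2 was roughly the following (a sketch; md4.nop stands in for whichever gnop device was the odd one out):

Code:
# zpool add -f pool1 md4.nop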
 
@einthusan

You could boot up from an install-CD or memstick in live mode, and import the pool from there to see the extent of the damage.

Correct replacing order is:
# zpool offline pool1 md4.nop
# mdconfig -d -u 4
# rm /tmp/tmpdsk4
# zpool replace pool1 md4.nop gpt/disk2

I notice that the names of the files used for creating the md devices differ from the md device names in your first post. I always make sure to create files whose names match the md devices, so that you don't remove the files in the wrong order:
# mdconfig -a -t vnode -f /tmp/tmpdsk2 -u 2
# mdconfig -a -t vnode -f /tmp/tmpdsk3 -u 3
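For completeness, the .nop providers themselves would presumably have come from gnop(8), the usual trick for 4K sector alignment:
# gnop create -S 4096 md2
# gnop create -S 4096 md3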

And you never, ever force a single-drive vdev into a pool if you care about your data. The first two drives are mirrors of each other, so they're alright, but when something happens to the third one, you are *beep* out of luck.
Or, if you are that desperate to chase those few extra IOPS, why not build a striped pool (RAID0) and disregard redundancy altogether?
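As a sketch, reusing your GPT labels, a striped pool is simply all the disks listed as top-level vdevs:
# zpool create pool1 gpt/disk0 gpt/disk1 gpt/disk2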

/Sebulon
 
Sebulon said:
@einthusan
You could boot up from an install-CD or memstick in live mode, and import the pool from there to see the extent of the damage.
/Sebulon

I wish I knew how to import/export pools. I will try reading it up somewhere and see how that goes.
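
From what I have read so far, the basics seem to be something like this (a sketch; as I understand it, -R sets a temporary altroot so the pool's datasets mount under /mnt instead of over the running live system):

Code:
# zpool import
# zpool import -R /mnt pool1
# zpool export pool1

The bare zpool import should list the pools that are available for import.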

Sebulon said:
@einthusan
Correct replacing order is:
# zpool offline pool1 md4.nop
# mdconfig -d -u 4
# rm /tmp/tmpdsk4
# zpool replace pool1 md4.nop gpt/disk2
/Sebulon

You can't offline md4.nop because it complains about not having a spare, or something along those lines. Sorry, I forgot to save the exact error message.

Sebulon said:
@einthusan
I notice that the names of the files used for creating the md devices differ from the md device names in your first post. I always make sure to create files whose names match the md devices, so that you don't remove the files in the wrong order:
# mdconfig -a -t vnode -f /tmp/tmpdsk2 -u 2
# mdconfig -a -t vnode -f /tmp/tmpdsk3 -u 3
/Sebulon

I didn't know you had to remove the files in order! Thanks for letting me know.

Sebulon said:
@einthusan
And you never, ever force a single-drive vdev into a pool if you care about your data. The first two drives are mirrors of each other, so they're alright, but when something happens to the third one, you are *beep* out of luck.
Or, if you are that desperate to chase those few extra IOPS, why not build a striped pool (RAID0) and disregard redundancy altogether?
/Sebulon

Well, the drives I have are 2x 1TB and 1x 2TB, and I can't buy any more drives, so I tried to figure out the best way to make them work together. I was trying to do a RAID0, but the system wouldn't boot after a restart, so I tried a mirror vdev instead, which gave the same problem. I think it's because of the third drive I'm trying to add during the OS install. I'm going to try again with just two drives as a mirror vdev during the install (see the sketch below). I have done the OS install many times before, but that was using four drives (2x mirror vdevs).
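
A sketch of what I plan to run for the two-disk attempt, reusing the options and labels from my first post:

Code:
# zpool create -O mountpoint=none -o cachefile=/tmp/zpool.cache pool1 mirror gpt/disk0 gpt/disk1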

Thanks for your awesome help throughout the forum.
 