Old ZFS pool information will not erase...tried dd

Where or how do you erase old ZFS pool information on an HDD? Where does ZFS store pool information? Every time I run zpool import I see an inaccessible pool that was created during the testing phase, and I just want it gone at this point.

I have tried:
  • dd if=/dev/zero of=/dev/ada0 count=1 bs=512k
  • dd if=/dev/zero of=/dev/ada0
  • The Parted Magic internal drive erase software...nothing
Any ideas?

Thanks,

Tony
 
ZFS places four 256KB vdev labels on each disk, two at the beginning and two at the end. You'll probably need to erase the end of the disk as well.
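
If you just want to hit the label areas rather than zero the whole drive again, a rough recipe (FreeBSD; the device name and numbers below are only examples) is to look up the size in sectors and overwrite the last couple of megabytes:

# diskinfo ada0

The fourth field is the media size in sectors; say it reports 1953525168. Then:

# dd if=/dev/zero of=/dev/ada0 bs=512 seek=1953521072

1953521072 is just 1953525168 minus 4096, so dd starts 2 MB before the end and runs until it falls off the end of the device (it will complain when it does; that's expected).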

That said, your second command should have erased every block on the disk (and taken quite a long time).

Did you actually do a zpool destroy of the pool, or just erase the disks in the pool? If the latter, your /boot/zfs/zpool.cache file might have some stale information left in it.
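
Before wiping anything else, it may also be worth looking at what is actually on the disk. As far as I know,

# zdb -l /dev/ada0

(adjust the device name) dumps whatever vdev labels are still present without the pool having to be importable, which should show where the stale pool name is coming from.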
 
jem said:
ZFS places four 256KB vdev labels on each disk, two at the beginning and two at the end. You'll probably need to erase the end of the disk as well.

That said, your second command should have erased every block on the disk (and taken quite a long time).

Did you actually do a zpool destroy of the pool, or just erase the disks in the pool? If the latter, your /boot/zfs/zpool.cache file might have some stale information left in it.

Unfortunately, I never got the chance to run zpool destroy. I was testing by disconnecting the drive, and (maybe because it was a cheap motherboard) all I got after that were "pool unavailable" replies.
Since then I have moved over to a better board, but I still can't get rid of the pool info.

I have even reinstalled the OS...nothing.

This should do the trick as well:

# zpool labelclear ada0

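If it refuses because the device still looks like part of a pool, it can probably be forced (check zpool(8) on your release for the -f flag):

# zpool labelclear -f ada0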

I tried that as well...

I would really like to clean out the drives, but assuming that it is not going to happen, would there be any issues with just creating a new pool using these drives?
Will the old ZFS pool information left behind affect ZFS?
 
If you don't have any other zpools on your system, you could try deleting /boot/zfs/zpool.cache. That should start you from a blank slate.
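
In practice that would be something like this (only if nothing you care about is currently imported):

# rm /boot/zfs/zpool.cache
# reboot

If the phantom pool still shows up in zpool import after the reboot, the information is coming from the labels on the disks themselves rather than from the cache file.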
 
@mrtonyg

I have a tool you can use:
Fast and easy delete of partition and filesystem data

Copy the script and paste it into a new file named e.g.
/usr/local/bin/cleandrives
# ee /usr/local/bin/cleandrives
then
# chmod 755 /usr/local/bin/cleandrives
and then you can use it like:
# cleandrives ada0
or
# cleandrives ada0 ada1 ada2 ada3 da0 da1
will clean all drives, regardless of type or size.
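
The gist of what it does is to zero out the first and last few megabytes of every drive you give it, which is where the partition tables and the ZFS vdev labels live. A stripped-down sketch of that idea (FreeBSD-only since it uses diskinfo(8) for the size, unlike the real script; the names and the 4 MB figure are only examples):

#!/bin/sh
# Zero the first and last 4 MB of each named drive. This destroys the
# partition tables and ZFS vdev labels on those drives -- no undo.
for disk in "$@"; do
        dev="/dev/${disk}"
        # media size in bytes is the third field of diskinfo(8) output
        bytes=$(diskinfo "${dev}" | awk '{print $3}')
        mb=$((bytes / 1048576))
        echo "wiping ${dev} (${mb} MB)"
        dd if=/dev/zero of="${dev}" bs=1m count=4
        # start 4 MB before the end; dd stops (with a complaint) at the device end
        dd if=/dev/zero of="${dev}" bs=1m seek=$((mb - 4))
done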

Be careful though, there's no undo. ;)

/Sebulon
 
@Sebulon

I tried the script and unfortunately for me it didn't work.

Thanks anyways.



@break19

I don't have an issue with creating a pool; it's getting rid of the old pool info that's the problem.
 
@mrtonyg

I'm sorry to hear that. I should have told you that cleandrives uses dmesg to grep out the size of the disks, so you may need to reboot for the script to work properly. I made it that way so that you can use the script on both *BSD and Linux.
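
(That is, it relies on the size lines the kernel prints when the disks are probed; something along the lines of

# dmesg | grep -E '^(ada|da)[0-9]'

should show them if they are still in the message buffer. The exact pattern is only an example.)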

/Sebulon
 
jem said:
The output from zdb(8) might give some clues where this stale pool information is coming from.

I couldn't get this command to work; it sounds like the pool has to be accessible for it to work.

Thanks.
 