Solved: Zeroing out a disk in prep for warranty exchange

The new disk is now in place of the failed one, and I need to zero out the failed one before I exchange it.

Trying to mount the drive in my workstation box failed, which at first surprised me, but I suppose it's because it used to be part of a ZFS mirror.

So I used gpart to delete both partitions. Now I have to decide on the best way to finish the job.

Should I use gpart to create a full-disk swap partition and then run dd if=/dev/zero of=/dev/ada2 bs=100M, or will something else work better?

If this is okay, is there a source of 0xFF bytes comparable to /dev/zero? (Back in the day, we were told that real unrecoverability requires 10-100 iterations, alternating between 0x00 and 0xFF bytes.)
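For what it's worth, one way to get a stream of 0xFF bytes is to translate /dev/zero through tr. A rough sketch, assuming the failed drive is still ada2:

Code:
tr '\0' '\377' < /dev/zero | dd of=/dev/ada2 obs=1M

Here obs=1M tells dd to collect the pipe's short reads into full 1 MB writes before they go to the disk.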
 
Just zero the whole disk once. While it's theoretically possible to retrieve the previous data after it has been overwritten, it's going to be a very, very costly exercise. Unless you're in the top 10 of the world's most wanted criminals, "they" are not going to bother with it.
 

Fair enough -- it'll be quicker for sure :) Is gpart create -s GPT followed by gpart add -t freebsd-swap the best way to set up for the dd?
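For reference, the sequence being asked about would look roughly like this, assuming the failed drive is ada2 (adjust the device name as needed):

Code:
gpart create -s GPT ada2
gpart add -t freebsd-swap ada2
dd if=/dev/zero of=/dev/ada2 bs=100M

Note that dd writing to /dev/ada2 overwrites the whole device, partition table included, so the gpart steps aren't strictly required for the wipe itself.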
 
For modern SATA drives you can always use the secure erase feature; see the security -e command in camcontrol(8).

This usually takes somewhat less time than the dd approach, but it can still take several hours, depending on disk size and speed.
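Roughly, the sequence looks like this, assuming the failed drive is ada2 and using temppass as a throwaway password (check camcontrol(8) on your release for the exact options and any confirmation flags):

Code:
camcontrol security ada2                       # show the drive's security capabilities and state
camcontrol security ada2 -U user -s temppass   # set a temporary user password (required before erasing)
camcontrol security ada2 -U user -e temppass   # issue the ATA SECURITY ERASE UNIT command; the drive wipes itself internally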
 
Thanks for the tip about secure_erase. I'll give that a try next time (though I hope there won't be a next time, silly me).

I ran this dd on an AMD FX-8320 machine with 32 GB of RAM; zeroing out a 1 TB drive (a WD RE SATA) took 2.71 hours using a 100 MB buffer. Not too awful.
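(That works out to roughly 10^12 bytes / 9,750 seconds ≈ 100 MB/s sustained, which is in the ballpark of a 1 TB SATA drive's sequential write speed.)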
 
That will work, but /dev/random is terribly slow (it needs entropy). Using it to randomize a 1 TB drive can easily take a day to complete.
 
Increasing the block size will speed things up a bit.

Code:
 dd if=/dev/zero of=/dev/ada1 bs=10M
Any reason this would be less reliable than byte-by-byte zeroing?
 
No, it will give the same result at the sector level. With a very large sequential block size (multiple megabytes, longer than the physical track size), you should get about 90% of the raw write performance of the hard disk. Getting to 100% requires a multi-threaded or asynchronous (but still sequential) writing program: it makes sure the next write operation is already queued up on the disk when the previous one finishes, so that any turn-around delay through the kernel and the userspace writer doesn't cost the disk a revolution. Most people don't have access to such programs, though, and writing them isn't trivial.

Honestly, I think we're all over-analyzing this. In reality, a hard-disk vendor has no incentive to read a garden-variety defective disk that's returned under warranty. If they hook it up at all (which I doubt), they'll probably just read the internal error and performance statistics to determine why the drive failed, and then throw it in the trash.
 