ZFS wipe data

What is the safest way to securely wipe data from a ZFS filesystem? (beyond recovery by even expensive forensic tools)
 
Well, fwiw it's probably safer to try and burn the hard drive.

But I was looking for more software-based solutions?
 
If you're really paranoid: after 3 passes the platters should be OK, so you need to do something about the disk cache and controller electronics.
These are all mounted on a PCB on the exterior of the drive. So take a flat-tipped screwdriver to the PCB to remove and destroy it.
Then slam the drive against some concrete a couple of times until you hear rattling.
 
....

The DoD 5220.22-M wipe, a method that erases data by overwriting the zeros and ones stored on a device, has long been treated as a secure method to erase data from a hard disk that you want to use for another purpose. That final point is absolutely critical, as the DoD 5220.22-M wipe is not considered a secure way to erase data and adequately protect that information. While the DoD 5220.22-M wipe has emerged as a popular solution, it was originally created under the National Industrial Security Program and was last revised back in 2006. It is a dated method that, according to a Lifewire report, is no longer permitted by the DoD, Department of Energy, Central Intelligence Agency or Nuclear Regulatory Commission.
According to the news source, it is not only worth noting that the method is no longer in use, but also important to recognize that software-based data wipes in general are no longer considered acceptable by these agencies. Software wipes have been definitively rejected by government agencies that handle sensitive data, and with good reason.

Software erasure methods leave large quantities of data recoverable from hard disks, making them inadequate when businesses want to fully erase data. There may be some place for software wipes if organizations want to reuse a hard disk within an organization, but even then, there is plenty of risk as a user may be able to leverage access to the hard drive to access data that isn’t meant to be viewable based on the employee’s authorization level.
 
Remote secure data wipe: ...

the last one (no. 24) survived ....


.. which implies that a mirrored ZFS array, secure from remote attacks, should consist of at least 24 hard drives 😂


But I was looking for more software-based solutions?
there is none
 
I have not seen one example where bits were recovered off a properly wiped hard disk. I think this is a red herring.

I agree the article fully explains that no reuse is possible. Full destruction required.
Those agencies have much to hide and need to make sure 100 years in the future no one can tell their secrets.
 

......
Three Guardian staff members – Johnson, executive director Sheila Fitzsimons and computer expert David Blishen – carried out the demolition of the Guardian's hard drives. It was hot, sweaty work. On the instructions of GCHQ, the trio bought angle-grinders, dremels – a drill with a revolving bit – and masks. The spy agency provided one piece of hi-tech equipment, a "degausser", which destroys magnetic fields, and erases data. It took three hours to smash up the computers. The journalists then fed the pieces into the degausser.
...
 
Observe that the OP said "file system". At the level of a normal, unencrypted file system, the best you can do is delete the file, using rm. Trying to overwrite data within a file system is nearly hopeless, since many file systems, ZFS in particular, will not overwrite data in place.

If the file system is encrypted, and you know that the encryption is really good (how can you know that? tough question), then you just get rid of the encryption key, and destroy all copies of the key. This is actually the best way to securely wipe the data.
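To make the first point concrete on ZFS: rm only frees blocks, it never scrubs them, and any snapshot keeps them referenced and readable. A small illustration, with hypothetical pool, dataset and file names:

    zfs snapshot tank/secret@before                  # any existing snapshot pins the old blocks
    rm /tank/secret/key.bin                          # frees the blocks in the live dataset only
    ls /tank/secret/.zfs/snapshot/before/key.bin     # the "deleted" file is still fully readable here
    zfs destroy tank/secret@before                   # and even now the freed blocks are not overwritten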

All the other posts above talk about securely wiping the actual hardware, which is much harder than it seems; three passes of random data definitely don't do it. The industry-standard answer for spinning disks today is shredders; the platters are made out of glass, and in a shredder they tend to get cut into very tiny pieces. I don't know what technology is recommended for SSDs, but I would think that after a good shredder, it would require extraordinary technical means to recover anything.

How far you need to go there depends on who you are worried about: How sophisticated is the attacker? How valuable is the data on the drive? Are there easier avenues of attack? For example, in many cases, the best way to recover data from a hard disk is to use a garden hose: take the garden hose, and fill it with lead shot. Find the person who knows the secrets stored on the disk, tie him to a chair, and threaten to hit him with the garden hose if he doesn't divulge the secrets.

Seriously, the question you need to answer if you want more detail: Why do you want to do this, and what are you really worried about?
 
...
SSDs,...
..

SSDs use over-provisioning to provide better endurance and reliability. When an SSD is made, it has more flash memory chips than its advertised capacity. The extra memory, which can sometimes be as much as 20% of the advertised SSD capacity, is used to balance wear across different cells (so-called SSD wear-levelling) so that all memory cells degrade at roughly the same rate and no one cell fails much earlier than others. This over-provisioned space is not accessible via the normal interface (SATA, SAS, or whatever) and thus cannot be overwritten at will. If one disassembles the SSD, removes the flash memory chips, and reads them directly, some data may be obtained even after the SSD has had all its sectors zeroed. Exactly how much and what data is recovered is determined by the SSD controller's algorithms.

However, this is not a case of being able to recover overwritten data. Rather, it is a case of not being able to overwrite some part of the data.
.......
interesting copy & paste orgy ;-)
 
I don't have a Scanning Tunneling Microscope yet so I have to go by what others say!!!
But Ralph is on point in that it depends on what kind of target you are. For everyday Joe it would not be worth the time.
A foreign ambassador throws away her old notebook, and there you have justification and budget.
 
SSD over-provisioning used to be 3x in the early days of SSDs (when SSDs still cost $5000 apiece). Today, 20% sounds reasonable for consumer-grade SSDs. The problem is that flash cells (the little structures on the silicon that store data) can only be written so many times, and today the write endurance is about 10K cycles. So the way SSDs are built is: the controller tries to write, and if the write "doesn't stick", it disables that part of the chip and switches to another part. Together with over-provisioning and wear-leveling, this gives reasonable lifetimes; for most consumer and light enterprise usage, SSDs don't wear out in the first 5-7 years (although it is perfectly possible to run them into the ground within a few months too).

But for SSDs, there is a relatively secure way to wipe them: send them the special SCSI or SATA command to reformat themselves. At that point, any read request over the interface will not return the data that was there before the format. Now, someone with a lot of time can take the SSD apart and actually read the chips using a lot of electronics (the format command does not actually overwrite the data; that would be insane with the low write endurance). By the way, the same trick works reasonably well to somewhat securely wipe a hard disk, but on hard disks a format tends to run relatively long (it used to be many hours; I haven't manually reformatted a spinning disk in ~10 years, so I don't even know how long it takes today).
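On FreeBSD, the SATA version of that reformat-yourself command is the ATA security-erase feature, which camcontrol(8) can issue. A sketch, with a made-up device name and throwaway password; check camcontrol(8) and the drive's documentation before trusting it:

    camcontrol security ada1                       # show whether the drive supports the security feature set
    camcontrol security ada1 -U user -s Erase123   # set a temporary user password (enables security)
    camcontrol security ada1 -U user -e Erase123   # issue SECURITY ERASE UNIT; -h instead of -e requests the
                                                   # "enhanced" erase; may also need -y to confirm, see camcontrol(8)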

To secure your disk against everyday Joe, all it takes is overwriting the partition table, for example with gpart. Then if someone puts the disk into a computer, it will look like a blank disk, and most people don't know how to find partitions by hand. Against an experienced computer hacker or person with storage experience, this trick will not work, and forget about law enforcement or non-existing agencies. Again, non-existing agencies probably wouldn't need to actually look at the disk: they will instead look at their archives of what you downloaded from the internet and put on the disk, or they will just break your kneecaps until you sing like a bird. Or they might be even more brutal and inhumane: Their lawyer might send you a subpoena!
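For the record, wiping the partition table on FreeBSD is a single (destructive) command; the device name below is hypothetical:

    gpart destroy -F ada1   # -F forces destruction of the partitioning scheme even if partitions still exist
    gpart show ada1         # now fails with "No such geom", i.e. the disk looks blank to casual inspection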
 
...
... non-existing agencies probably wouldn't need to actually look at the disk: ...

I'm not even sure if Phishfry is with such an agency... I was just searching for the term STM, which he posted.
Maybe he saw that because he's logged in on my machine? Because he changed `STM` to Scanning Tunneling Microscope seconds later to help me understand his posting ...



😁
 
The only secure software method I know is using geli, and then forgetting the key.
 
I'm not even sure if Phishfry is with such an agency..
I work on CG/ACoE/NAVY/commercial boats. Believe me, I wish I had an air-conditioned job.
It's been a hot, muggy summer.
If in doubt about data security, please take the back side of an axe to your device. Scatter the pieces over a 50-square-mile area of ocean.
 
Look, I am usually the tin-foil-hat guy everybody is laughing at. This is not a threat in my eyes.
In this case I think only nation-level threat actors could recover an entire drive's worth of overwritten bytes. If at all.
Probably only a handful of countries. We sure don't hear much about it.
I really like the guy's real-world test: inquire at 'data recovery labs' and see what they offer.
Send out a zeroed drive and see what they get. That is the best you can do in a commercial capacity.
Assume the best talent is at play with three-letter agencies.
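For the zeroed-drive experiment, a single pass of zeros over the raw device is all the software side needs; a sketch with a hypothetical device name (triple-check it, this destroys everything on the disk):

    dd if=/dev/zero of=/dev/ada1 bs=1m   # one full pass of zeros over the whole disk
    # press Ctrl+T while it runs (SIGINFO) to see how far it has gotten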
 
The hackernews article cited above did have someone making some pretty innovative claims for 2009.
A 120GB disk will produce about 1TB of statistical data from the SEM process - which we can analyse.
You are talking about a week's processing on a 25-node cluster (100 cores).

The ZFS advantage
I suspect that a RAID would foil it. For a start we would need to program in the facility to rebuild the RAID
(and analyse based on chunks). I doubt it would work out.
We do quote a price for SEM RAID recovery but it is in the tens of thousands - a.k.a. no thanks :D
 
Here are two simple use cases:

1. You download a super secret key (say banking). You copy the key to a USB or SD card.

2. You work on super proprietary code downloaded from a secured repo.
You commit your changes back to the repo.

Now you want to securely wipe the local copy.
Seems like ZFS should offer a special API to allow this.

It looks like ralphbsz's suggestion is the way to go. It takes some effort.

 
ralphbsz thanks for being specific. This question was mainly about the ZFS filesystem. However, it is very interesting to see how the answers touch upon the general case. So thank you for the suggestions.

Although I don't agree with some of the destructive solutions provided, since they leave fragments of the data that might survive the damage (disposing of the fragments is another step in the chain which needs human intervention, even if successful).

Scenario (feel free to suggest alternatives)
Mainly looking for a solution in a scenario where the data (that resides on ZFS) has to be wiped out by some trigger (remote/local). ZFS is the filesystem to be used - unfortunately I'm not sure it provides the capability to delete something securely due to its very nature.

The adversary could either be a non-existent agency or an opposing agency from another country. The assumption is that the only weak link is the disk, assuming that the person(s) owning the disk have the capability to trigger a wipe-out. Looking for a technical solution - the human element shouldn't matter.

Crivens - Forgetting the key is not an option since the data is valuable. Looking for plausible deniability angles. Any other suggestions?

Phishfry, ucomp - from what I understand, the first 2 passes are used to ensure that the bit is flipped - what makes the third pass necessary, if we are to assume that the 3-pass technique works to securely wipe out the data?

unitrunker - a ZFS flag for secure-deletion functionality would be amazing, if possible. Agree.
 
Mainly looking for a solution in a scenario where the data (that resides on ZFS) has to be wiped out by some trigger (remote/local). ZFS is the filesystem to be used - unfortunately I'm not sure it provides the capability to delete something securely due to its very nature.
ZFS at that point makes no difference. It will neither help nor hinder the data destruction; you have to do it at the layer below.

And that would be an encrypted disk, where data loss can be deliberately caused by forgetting or destroying the key. For FreeBSD, I think the best solution is geli, as Crivens said. The only problem is that you would need to audit everything that happens to the key: You must make sure it is not stored or cached anywhere, because otherwise the attacker could retrieve the key from any place it is stored. The other possible solution is to use hardware, namely a self-encrypting drive. Those are sold by all major drive vendors. The problem is that integrating key management into drives is awfully hard, and operating systems tend to get all knotted up when faced with such a disk.
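For the geli route, here is a rough sketch of what an auditable setup could look like: a single keyfile kept on removable media, no metadata backup left behind on the system, and "wiping" reduced to destroying that one file plus the on-disk geli metadata. Device names, mount points and the keyfile path are made up; read geli(8) before copying any of this:

    dd if=/dev/random of=/mnt/usb/disk.key bs=64 count=1   # keyfile lives only on the removable stick
    geli init -B none -P -K /mnt/usb/disk.key /dev/ada1    # -B none: no metadata backup under /var/backups, -P: no passphrase
    geli attach -p -k /mnt/usb/disk.key /dev/ada1
    zpool create tank /dev/ada1.eli                        # ZFS sits on top of the encrypted provider

    # the "wipe": destroy every copy of the key material (export the pool first)
    geli kill ada1             # overwrites the geli metadata on the provider and detaches it
    rm /mnt/usb/disk.key       # and physically destroy the stick; rm alone proves little on flash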
Crivens - Forgetting the key is not an option since the data is valuable. Looking for plausible deniability angles.
Well, forgetting or destroying the key is the industry-standard solution to this problem. What do you mean by "forgetting is not an option"? What if the key is stored on a small security device, and you then trigger physical destruction of that device: is that an option?

For plausible deniability, it gets even harder. One option I've seen discussed is implementing harmless-looking content which is available when the correct key is not present. This gets quite close to steganography: when used normally, the device (disk + file system + security software) returns the correct content; when used incorrectly (for example when a "panic flag" has been triggered), it hides the correct content and instead returns sensible but harmless content. For example, the file system could pretend to contain complete copies of the FreeBSD manuals and install packages, or something like that. Implementing such a system would be hard.

Phishfry, ucomp - from what I understand, the first 2 passes are used to ensure that the bit is flipped - what makes the third pass necessary, if we are to assume that the 3-pass technique works to securely wipe out the data?
Reconstructing bits from magnetic hard disks comes down to statistics. Every time the head writes a sector, it takes a slightly different path and writes the bits in a slightly different place. That means that on the edge of the "correct" bits you can find some magnetic material that contains older copies. So what one does is read the whole surface of the disk using a magnetic microscope (which is where the STM comes in), find each track, and then search for older copies along the sides. The more times the data has been overwritten, the less likely it is that there is anything left on the side, because the effective width (the convex hull around it) grows with every write.
 
What will help here may be sysutils/pefs or one of the crypto FUSE filesystems. You can encrypt one folder, or many, and drop the key when unmounting it. Then delete the files that store your data. Even an STM will only recover encrypted data.
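The workflow is roughly as follows; this is a sketch from memory of sysutils/pefs, so verify the exact commands against pefs(8):

    kldload pefs                      # load the pefs kernel module
    mkdir ~/private
    pefs mount ~/private ~/private    # stack the encrypting layer over the directory
    pefs addkey ~/private             # asks for a passphrase; files written now are stored encrypted
    # ... work with the data ...
    pefs unmount ~/private            # drops the key; only ciphertext remains on the dataset below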
 