Solved: How to delete a file by overwriting it with random data.

As you know, the usual deletion of a file is done by removing the header, but the contents of the file still remain, which can sometimes cause problems. If a new file with the same name but different content is created, it will appear in the same place.

I once came across, and seem to remember, that FreeBSD had an option to delete files by overwriting them with zeros; however, I don't remember the syntax of the spell.
 
I don't think it's possible on ZFS. It would go against the copy-on-write principle.
I don't know about UFS.
 
As you know, the usual deletion of a file is done by removing the header, but the contents of the file still remain, which can sometimes cause problems. If a new file with the same name but different content is created, it will appear in the same place.

I once came across, and seem to remember, that FreeBSD had an option to delete files by overwriting them with zeros; however, I don't remember the syntax of the spell.
Most filesystems are "robust", meaning they work well out of the box.
The formulation "which can cause problems" is rather vague...
 
On ZFS that would be something interesting to try. Snapshots hold onto blocks from "time 0" up to the timestamp of the snapshot, so even if you could securely delete the current file, any snapshot referencing it would still hold the original data. In fact, if you rolled the dataset back to that snapshot you would wind up with the original file.
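
To illustrate, a rough sketch (tank/home and secret.txt are made-up names, substitute your own pool/dataset and file):

zfs snapshot tank/home@before
rm /tank/home/secret.txt
zfs rollback tank/home@before    # the "deleted" file is back, blocks and all

So any serious attempt at secure deletion on ZFS would also have to destroy every snapshot that references those blocks.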

There may be (probably are?) utilities in ports that would securely delete a file, but I would expect them to only work on UFS, not on ZFS, because of the way ZFS works.
 
Which can sometimes cause problems.
Could you elaborate on which problems you mean? Most likely confidentiality issues? If so, and if your scenario is an "offline attack" (physically accessing the disk), the easiest solution is full disk encryption.
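
For example, a minimal GELI sketch (assuming a spare provider at /dev/ada1 — substitute your own device; you will be creating a new filesystem, so anything already on that disk is lost):

geli init -s 4096 /dev/ada1      # set up encryption, prompts for a passphrase
geli attach /dev/ada1            # exposes the decrypted device as /dev/ada1.eli
newfs /dev/ada1.eli
mount /dev/ada1.eli /mnt

With that in place nothing reaches the disk unencrypted, so "secure deletion" reduces to not leaking the passphrase (or destroying the key material).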
 
On an SSD it is almost impossible to do a "secure delete" of a file, even with overwriting (gshred etc. does not work).

"Zeroing" a file (on zfs) does not works at all

So if the question is: how can I prevent the contents of a file from being recoverable (against, for example, carving)?
The answer is: very hard for solid state drives, very difficult for filesystems like ZFS, and very close to impossible for ZFS on a solid state drive.

If the question is: I just want to delete a file
The answer is: there is no need to do anything more than an rm
 
As you know, the usual deletion of a file is done by removing the header, but the contents of the file still remain, which can sometimes cause problems. If a new file with the same name but different content is created, it will appear in the same place.

I once came across, and seem to remember, that FreeBSD had an option to delete files by overwriting them with zeros; however, I don't remember the syntax of the spell.
Check the -P option of rm. Maybe that is what you were remembering?
But as stated, it is not useful on a ZFS filesystem.
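
Something like this (sensitive.txt is just a placeholder; check rm(1) on your release for exactly what -P does):

rm -P sensitive.txt              # overwrite the file's blocks, then unlink it
gshred -u -n 3 sensitive.txt     # alternative: GNU shred from sysutils/coreutils

Either way this only has a chance of helping on a filesystem that rewrites blocks in place (UFS); on ZFS the overwrite just allocates fresh blocks, and on an SSD the firmware may remap them anyway.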
 
You seem to be concerned about the security of deleting files. When thinking about security issues, you need to think about concrete attack vectors. What are you really trying to prevent here?

First off: If you are using snapshots in your file system (both UFS and ZFS support them), then a delete does not affect the snapshots. That's the intent.

As you know, the usual deletion of a file is done by removing the header, but the contents of the file still remain.
It's more complicated than that. Deletion of a file means that the file is not accessible through the file system afterwards (I'm deliberately glossing over files that have multiple hard links, or that are open while being deleted; let's restrict the discussion to the normal case). It is done by updating the metadata of the file system to reflect that the file no longer exists, and by marking the area on disk (the blocks) where it was stored as free, meaning it is likely to be overwritten with new files soon.

The content of the file may remain on the disk. That's virtually impossible to prevent, unless you go to the extreme of using encryption (either whole disk, or per-file).

Which can sometimes cause problems.
Like what? Nobody will ever access the content of the file through the file system again. They can find the raw data by accessing the disk underneath the file system. If they are root, they can do that through the OS. The sensible defense against that: Don't allow them to get on your computer if they have bad intentions, and make it impossible for them to become root. If they can take physical control of the disk, they can bypass all these protections, including the disk's internal ones (important for SSDs, which internally contain multiple copies of the data). The sensible defense against that: Don't allow people to take your disks.

In the old days, when file systems were simple, and disks were simple, there was "tribal knowledge" that overwriting your files with zeroes before deleting them would make the content unreadable. Like most forms of voodoo, it is partially based on correct science: Most of the time, an attacker that has access to the disk won't be easily able to read the bytes, they will get zeroes. Today, with modern disks being much more complex (in particular SSDs), and modern file systems having more interesting space allocation mechanisms (in particular ZFS, which does not overwrite data in place, it is log structured), this has become wrong. It is particularly wrong if you are worried about a sophisticated attacker, who can for example interact with the disk internals (using undocumented commands) or with the file system.
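
For completeness, the old spell looked roughly like this (a sketch, good enough for a smallish file; secret.txt is a stand-in name, and stat -f %z is the FreeBSD way to get the file size in bytes):

dd if=/dev/zero of=secret.txt bs=$(stat -f %z secret.txt) count=1 conv=notrunc
rm secret.txt

On a simple in-place filesystem that really did replace the data blocks with zeroes; on ZFS the zeroes land in newly allocated blocks, and on an SSD the old flash cells may survive remapping, which is exactly why the trick no longer holds.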

If a new file with the same name but different content is created, it will appear in the same place.
Nonsense. The name of a file is not correlated with where it will be stored. If someone creates a new file, it will be stored in an essentially arbitrary place on disk, which may happen to be where the old file was stored before it was deleted, but much more likely it is not.

I think the OP needs to think through (and perhaps tell us) what they are really worried about.
 