UFS Best way to recover (very) recently deleted files on UFS?

I have a situation where a user accidentally deleted all of the photos in a directory and its sub-directories, ironically to save some space in preparation for a backup. (Helpful hint: Wise Duplicate File Remover apparently removes the originals along with the duplicates). The photos in question are stored on a FreeBSD server that the user was accessing over Samba as a network drive from Windows.

The first thing I did was immediately unmount the drive, remount it read-only, and take a full image with dd. What do we know? There were probably somewhere around 2,000-5,000 pictures in the deleted directories. We don't have the exact file names, but most of them were taken on an iPhone, a Panasonic Lumix, or a Kyocera phone, so the filenames were in the formats IMG_xxxx.JPG, P10xxxxx.JPG, and KIMGxxxx.jpg respectively.
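For anyone finding this thread later, the imaging step can be sketched like this. The device and file names here are invented for illustration; a small file stands in for the real disk so the commands are safe to try:

```shell
# Imaging sketch: on the real server this would be something like
# dd if=/dev/ada1 of=/mnt/other/disk.img -- here a 1 MB file stands in.
truncate -s 1M fake_disk.img                      # stand-in for the raw device
# conv=noerror keeps going past read errors; conv=sync pads short blocks:
dd if=fake_disk.img of=disk_copy.img bs=64k conv=sync,noerror 2>/dev/null
cmp fake_disk.img disk_copy.img && echo "identical image"
```

All recovery experiments then run against the copy, never the original.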

So, right now I'm running a pass with recoverjpeg, which is in the ports collection. It's taking forever because it's a 1 TB disk and there was a lot of other stuff on there. It's a little over halfway done and has found over 77,000 files. I'm not sure how many are duplicates, but the developer of recoverjpeg also has a tool to deal with that. From a cursory glance it looks like most of the deleted files are being recovered, but with some issues. Most of the recovered images have very specific defects: either a horizontal line about a fourth of the way down the image, or multiple images weirdly mosaicked together. None of the images have their original filenames or attributes, but most still have their EXIF data.
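For the curious, the core of what carvers like recoverjpeg do is scan the raw disk for magic bytes. Here's a toy illustration using plain grep; the blob.bin file and its contents are invented for the demo:

```shell
# Toy version of what a carver scans for: the JPEG SOI marker ff d8 ff.
# Build a small "disk" with one embedded JPEG header at a known spot:
printf 'junkjunk\xff\xd8\xff\xe0(jpeg data would follow)' > blob.bin
# -a: treat binary as text, -b: print byte offset, -o: print only the match
grep -aboF $'\xff\xd8\xff' blob.bin | cut -d: -f1    # -> 8
```

A real carver then has to guess the file's length and hope the data is contiguous, which is where the corruption seen above comes from.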

[Attached images: image00128.jpg and image75541.jpg]

Tomorrow I'm going to take a crack at it with Autopsy from The Sleuth Kit, which is actually a digital forensics tool, but its PhotoRec Carver module looks promising. I'll report back here on how well that works (or doesn't).

If you have any ideas on how to best recover deleted files from a UFS filesystem, please do feel free to post them. It would be excellent if there were a way to recover just the files that were deleted from that one directory tree, and even better if the original attributes like filenames and modification dates could be preserved. Both free and paid software suggestions are welcome.
 
The photos in question are stored on a FreeBSD server that the user was accessing over Samba as a network drive from Windows.
There are two types of people in the world: those who diligently back up, and those who haven't lost anything yet. Besides recovery, I hope you're also planning some sort of backup strategy to prevent future disasters, because these things can and will happen.

Anyway, give sysutils/testdisk and photorec(8) a go. But don't get your hopes up.
 
sysutils/magicrescue
Code:
Magic Rescue scans a block device for file types it knows how to recover and
calls an external program to extract them. It looks at "magic bytes" in file
contents, so it can be used both as an undelete utility and for recovering a
corrupted drive or partition. As long as the file data is there, it will
find it.

It works on any file system, but on very fragmented file systems it can only
recover the first chunk of each file. Practical experience shows, however, that
chunks of 30-50MB are not uncommon.
 
I played around with it in Autopsy a little today and it managed to find some deleted files, though none of the ones the user was missing. Weirdly, it did find a ton of much older files, from around 2017. The ones I'm looking for are from 2019-2023. I'm not really familiar with this program yet, so I could be doing it wrong.

This evening I'll give the ones you guys suggested a try.

There are two types of people in the world: those who diligently back up, and those who haven't lost anything yet. Besides recovery, I hope you're also planning some sort of backup strategy to prevent future disasters, because these things can and will happen.
Oh lord, tell me about it. I'm sure if you've ever worked as a consultant you know users never listen to IT people. This particular customer thought routine backups were too expensive, but a used backup drive and a handful of tapes would have cost a lot less than what I'm going to have to charge them for this amount of work.
 
Weirdly, it did find a ton of much older files, from around 2017. The ones I'm looking for are from 2019-2023. I'm not really familiar with this program yet, so I could be doing it wrong.
Careful what you might uncover. Certain files could have been deleted for, ehm, dodgy reasons. And some deleted files have a tendency to linger a lot longer than you might expect. That's how a lot of criminals get convicted, too.

This particular customer thought routine backups were too expensive,
There's still no hardware capable of dealing with human stupidity ;)
 
There are two types of people in the world: those who diligently back up, and those who haven't lost anything yet. Besides recovery, I hope you're also planning some sort of backup strategy to prevent future disasters, because these things can and will happen.

Anyway, give sysutils/testdisk and photorec(8) a go. But don't get your hopes up.
Unlike on the FAT filesystem, one cannot "undelete" files on virtually any other filesystem, except maybe NTFS. The reason is that FAT maintains file allocation table pointers (I use the term "pointers" liberally), whereas UFS, ext* and ZFS keep the actual disk addresses in inode structures. Once a file's inode is gone, there is no undeleting it.

We've been spoiled by the simplicity of the FAT filesystem: deletion only clobbers the directory entry, which still records the file's starting cluster and size. Back in the MS-DOS days, if no writes had been performed, one could reestablish the directory entry and (assuming the file was stored contiguously) it would magically reappear. Again, UFS and other filesystems have no such simplistic construct. Once a file is deleted, it's gone.
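A toy model of that FAT mechanism, assuming only the well-known marker-byte behaviour (a deleted 8.3 directory entry gets its first byte overwritten with 0xE5; the file and offsets here are invented):

```shell
# Fake 32-byte FAT directory entry starting with the 8.3 name "PHOTO   JPG":
printf 'PHOTO   JPG' > entry.bin
dd if=/dev/zero bs=1 count=21 >> entry.bin 2>/dev/null   # pad to 32 bytes
# "Deleting" the file overwrites ONLY the first name byte with 0xE5:
printf '\xe5' | dd of=entry.bin bs=1 count=1 conv=notrunc 2>/dev/null
od -An -tx1 entry.bin | head -1   # first byte is e5; the rest survives intact
```

That intact remainder (name tail, starting cluster, size) is exactly what DOS-era undelete tools leaned on, and exactly what UFS does not leave behind.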

One might be able to visually identify bits and pieces of a file using some kind of disk-scanning tool, but what if you stitch the wrong extents back together? It's risky and likely to fail.

I have stitched files back together on IBM mainframe VTOC disks. In that case I restored a copy of the disk to free disk space and examined the differences, using those to stitch files back together. I did manage to recover three VSAM catalogues, but unfortunately could not stitch a database back together because I didn't know the internal structure of the database files. It's hit and miss.
 
one cannot "undelete" files on virtually any other filesystem, except maybe NTFS.
You can, if by "undelete" we mean attempting to find the inode and follow its structures to recover a file. The inode's metadata is removed, but the data itself stays on disk. Deep analysis/heuristics with some magic and luck are needed. It's not 100%, but it's better than nothing, and it's definitely not impossible.

I was saved by ext4magic many times before. The OP here was also able to recover some files.
 
You can, if by "undelete" we mean attempting to find the inode and follow its structures to recover a file. The inode's metadata is removed, but the data itself stays on disk. Deep analysis/heuristics with some magic and luck are needed. It's not 100%, but it's better than nothing, and it's definitely not impossible.

I was saved by ext4magic many times before. The OP here was also able to recover some files.
Good luck with that. As a mainframe kernel developer I once spent an all-nighter stitching files together with a binary editor (IMASPZAP: "super zap"). Modern tools that understand the underlying filesystem may help, but one needs a solid background to navigate that maze.
 
I can say that RS Office Recovery saved my life. 🙏
It found all my LibreOffice files, and not only their latest versions but previous ones too (I don't know how it does that, but it does; maybe something to do with how the SSD manages its blocks).
Drawback: it's proprietary and only runs on the usual mainstream OSes.
 
Good luck to you. Once upon a time I had to reverse engineer the allocation strategy of a certain camera after a relative had formatted the card by accident; the FAT was gone. I did it at the airport and on the flight back. When I sent her a DVD with all the pictures, including most of the deleted ones, all was good again. But I started with The Sleuth Kit and Autopsy too.
 
photorec is not good at all, and I say that from recent experience.
I accidentally deleted my home folder, and the first thing I did after that was shut the system down.
I removed the disk from the laptop and attached it as an external USB drive.
I mounted it read-only on another system and used photorec to recover the deleted files.
1. It takes forever because it gets stuck in a loop.
2. With "brute force" disabled it recovers many files, but most of them are corrupted, and I'm quite sure the files were perfectly fine before being deleted; I don't think it follows the inodes properly.
3. Recovered files keep getting repeated over and over.
4. It produced 100 MB JPEG files containing very small images, which is impossible.

The last time I used it I was able to recover maybe 20-30% of the deleted files (uncorrupted), but less than 5% of the files I really needed to recover. Very bad stats.
I think it's a useless tool unless you want to waste your time.

I'm not touching this filesystem until I find a good recovery tool, one that really works.
 
Many of these tools have no knowledge of the filesystem's internal structure, which is why they work on any filesystem.
They just scan the disk, check for a JPG or PNG (or whatever) header at the start of a physical block, try to work out the data size, and then copy enough blocks to cover it.
This works well if each file is a contiguous run of blocks, but that is often not the case. This is the source of the corruption, visual artifacts and so on.
Recovering the exact blocks a file held before deletion requires filesystem understanding, and even that isn't enough if the allocation data is destroyed on deletion
(as in the FAT/MS-DOS case). MS-DOS undelete programs just look at the first block in the directory entry and claim the right number of free blocks from there on.
That works well on "static"/fresh filesystems but badly on a busy/fragmented one.
NetWare used to keep deleted files around for a while; they could be recovered with "salvage" or discarded with "purge".
I have no idea what UFS does on deletion: whether the directory entry is just masked or cleared, whether the inode is totally or partially cleared,
and whether a (much) better tool could be written or not.
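The naive carving described above boils down to something like this (the toy disk.bin, the offset and the size are all invented for the demo):

```shell
# Naive carve: a header was found at byte offset O, the size S was guessed
# from it, so copy S bytes starting at O. If the file was fragmented, the
# copied run contains someone else's blocks -- hence the corruption.
printf 'xxxxHDR+payload+' > disk.bin     # toy disk, "header" at offset 4
dd if=disk.bin of=carved.bin bs=1 skip=4 count=12 2>/dev/null
cat carved.bin                           # -> HDR+payload+
```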
 
All these tools have another bad side:
They imply backups are not needed, because in a pinch you can use the 'magic' tool.
That belongs in the category "learning by burning" 😁
 
I myself never tried it (except quite simple old-school FAT[12|16] cases where all the files on the diskette[!] were NOT fragmented), but recovery would need to:
  • Stop the FS the moment the deletion happens and ensure no more writes occur, not even a single sector. This is the prerequisite for everything below, unless you're extremely lucky.
  • Look for a fingerprint (e.g., JFIF in a JPEG) in ALL marked-as-free blocks to determine the first sector of the file.
  • Check ALL marked-as-free blocks for plausible candidates to link after the first block; repeat until the size reaches the size recorded in the header area, then compute a checksum (depends on the file format).
  • If it matches, the file can be salvaged. If not, try the next candidate (including re-ordering already-tested blocks) until a matching combination is found.
  • If none matches, the salvage fails.
If the file is NOT structured data, e.g. plain text, you'll need to read and judge the candidates manually.
If the tool has 100% correct knowledge of the filesystem's structure, the filesystem metadata can help with the job, partially.

So... It depends.
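That checksum-validated search can be sketched minimally. The two toy "blocks" b1 and b2 and the recorded checksum are invented for the demo; a real tool would search thousands of free blocks and face a combinatorial explosion:

```shell
# Two candidate free "blocks" and a checksum supposedly recorded in the
# file's header area; try each ordering until the checksum matches.
printf 'AAAA' > b1; printf 'BBBB' > b2
want=$(printf 'AAAABBBB' | cksum | cut -d' ' -f1)   # checksum "from the header"
for order in "b1 b2" "b2 b1"; do
  got=$(cat $order | cksum | cut -d' ' -f1)
  [ "$got" = "$want" ] && echo "match: $order"      # -> match: b1 b2
done
```

With n candidate blocks there are n! orderings to test, which is why this only works for very small fragment counts or with filesystem metadata narrowing the search.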
 
All these tools have another bad side:
They imply backups are not needed, because in a pinch you can use the 'magic' tool.
That belongs in the category "learning by burning" 😁
I've read somewhere (I've lost track of where) that the first thing professional data recovery engineers do BEFORE they start working is to create 100% clones of all the drives the filesystem spans (in huge-scale RAIDs, a lot of drives!), keeping the original drives completely unwritten.
 
I've read somewhere (I've lost track of where) that the first thing professional data recovery engineers do BEFORE they start working is to create 100% clones of all the drives the filesystem spans (in huge-scale RAIDs, a lot of drives!), keeping the original drives completely unwritten.
Exactly. That's the first thing you start with. With ZFS it's a joy: make a snapshot, try various things, maybe roll back, try something else... and never touch the original disk after creating the image.
 
Exactly. That's the first thing you start with. With ZFS it's a joy: make a snapshot, try various things, maybe roll back, try something else... and never touch the original disk after creating the image.
ZFS snapshots are almost exactly what I dreamed of (even better!) back when I was using DOS (late '80s and early '90s).
At the time I wanted a three-stage filesystem consisting of a fixed (committed) area, a dynamically structured cache (demi-filesystem) area, and a fully journaled log area.
Data would be written to the journal along with metadata (describing how it should appear to the user); once re-reads exceeded a threshold, it would be written to the cache area to be read as a simple mem-cacheable filesystem (no replay of the journal required); and once I decided a file was essentially static, it would be committed to the fixed area.
Why was it never implemented? Simply because it was beyond me.
 
cases that all files in the diskette[!]
Me too. In 1980-something, MS-DOS 3.11, on 5.25" diskettes.
After examining one a bit with a hex editor I figured out that deleted files weren't really deleted; only the first byte of the filename was replaced with a marker (0xE5).
Knowing that, I didn't need the tool anymore and could un-delete files by myself (quite the hacker, at age 12 😂).
Back then I already learned:
the slightest write access to a disk drops your chances of recovering a file dramatically. And that was on diskettes, where it was much easier to prevent write access.
With HDDs (or SSDs) mounted in a running OS, with modern filesystems trying to clean up, reduce fragmentation etc., there is almost continuous write access (/var/), especially when shutting the system down. And who wants to "hot unplug" (brutally rip out) a powered HDD/SSD from a running system? Besides risking even more damage that way, most of the time it's too late anyway. I have no hope of recovering a deleted file that way anymore.

Maybe on an almost empty FAT-16 or FAT-32 drive, yes, there's a slight chance, as others here already said.
But I wouldn't get my hopes up, and especially not rely on it.
I rely on backups.
A solid system of backups and snapshots is the only reliable way, anyway.

BEFORE they start working is to create 100% clones of all the drives
That's standard procedure.
If I have a corrupted disk, or a friend or neighbour brings me one, the very first thing I do before anything else is produce an exact clone of that disk; it doesn't matter if dd takes days.
No write access to such disks at all!
All data-rescue attempts and experiments are done from, and on, the copies only.
 
And what made undelete on DOS even halfway promising was that it was a single-tasking "Disk" Operating System, and caches for diskettes were basically write-through and synchronous.
 
Best way to recover:
  • /sbin/restore
  • tar x
  • svn/git/fossil restore
Btw: what happened to the idea of having HAMMER2 on FreeBSD?
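A minimal round trip with the second option above, tar (the paths and file contents are invented):

```shell
# The backup that should have existed, and the recovery it makes trivial:
mkdir -p photos && echo 'not actually a jpeg' > photos/IMG_0001.JPG
tar cf photos.tar photos          # take the backup
rm -r photos                      # the "disaster"
tar xf photos.tar                 # recovery takes seconds, not days
cat photos/IMG_0001.JPG           # -> not actually a jpeg
```

Unlike carving, this preserves filenames, paths and timestamps, which is exactly what the OP said would be ideal.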
 
I use dump/restore for UFS backup and restore.

Additionally, all my systems have alternate boot partitions to help me recover from catastrophic failures; I'm a developer who sometimes gets into trouble testing patches. Though I do try to test them on the sandbox machine first, some commits to 15-CURRENT by others have broken my systems as well.

I've very rarely needed to recover from backup, as other measures (alternate boot partitions, alternate loaders and alternate kernels) have let me recover 99% of the time. But in the very worst case, when things are so messed up that nothing else works, the backup is quite handy.
 