UFS - How to recover a trashed file stored on a UFS2 partition (deleted with Thunar)

Hello,

I've accidentally trashed my two most important bhyve Linux virtual machines, and the recycle bin emptied automatically. How did it happen? I used Thunar to reach the folder where those images were located (on a disk formatted with the UFS2 file system), clicked on the files, and then Delete. I didn't use the rm command from the terminal. Maybe this choice makes the files easier to recover? So, what can I do to recover those files? Which tools and techniques can I use? Please help me. I put so much effort into those image files that I shudder to think I have lost them.
 
I have never used bhyve. I assume you just want to retrieve a .cow or .img type container file.
Thunar may have a size limit for trashing. I would start by booting into read-only mode so nothing gets overwritten, and then run something like testdisk. You can also search for .Trash-1000 folders if the files were on another drive.
 
In the past, Xfce moved files to a /another-drive/.Trash-userid folder if they were on a different drive, instead of the usual ~/.local/share/Trash on the main drive.
If you didn't have the necessary permissions, it may have offered a permanent delete instead of moving to trash. In that case you will need a recovery tool.
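A minimal sketch for checking the usual trash locations first, per the freedesktop.org trash layout; the /mnt mountpoint is an assumption, adjust it to wherever the UFS2 disk is mounted:

```shell
# Sketch, assuming a POSIX shell and that the UFS2 disk is mounted at /mnt.
# Thunar follows the freedesktop.org trash spec: ~/.local/share/Trash on the
# home filesystem, <mountpoint>/.Trash-<uid> or .Trash/<uid> elsewhere.
find_trash_dirs() {
    # $1 = mountpoint of the drive the deleted files lived on
    for dir in "$HOME/.local/share/Trash/files" \
               "$1/.Trash-$(id -u)/files" \
               "$1/.Trash/$(id -u)/files"; do
        [ -d "$dir" ] && echo "possible trash folder: $dir"
    done
    echo "trash check finished"
}

find_trash_dirs /mnt
```

If nothing turns up, stop writing to that disk, take a raw image of it (for example with dd onto another drive), and point testdisk or photorec at the image rather than the live disk.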
 
So, what can I do to recover those files?
There are two types of computer users: those who diligently back up everything and those who haven't lost files yet. Disks break. People make stupid mistakes. Files disappear. Backups are like insurance against fire: you hope you'll never need it, but when you do, you'll be glad you signed up for it.
 
There are two types of computer users: those who diligently back up everything and those who haven't lost files yet. Disks break. People make stupid mistakes. Files disappear. Backups are like insurance against fire: you hope you'll never need it, but when you do, you'll be glad you signed up for it.

I do back up my files, but I still get screwed because I don't do it on a regular basis. You know I'm a greedy experimenter. I run a lot of experiments, and after some time I remove the files from those experiments if I realize that wasn't the right way to accomplish the task. In this scenario, where my information is always in turmoil, it's not easy.
 
Quarantine files before deleting them. This is definitely not a technical problem; I was in that situation too, and I still make the same mistake. Disk space is cheap these days.
 
I do back up my files, but I still get screwed because I don't do it on a regular basis.
Just to give you an example: I had to update an important server today. I know we have full (daily) backups of it, but I still made a snapshot of the machine at the VMware level before running the updates. Better safe than sorry.
 
A little bit off-topic.
I have my own script which makes UFS snapshots, deletes old ones,
and keeps a few monthly, weekly and daily snapshots fully automatically.
Tune it and use it to protect yourself against accidental file deletion.
It is important to know that a lot of UFS snapshots may slow down disk I/O, especially write I/O.
Also, while a UFS snapshot exists, deleting a lot of files will not free space: the snapshot still holds the deleted blocks, and free space may even decrease.
Keep in mind that a snapshot is not a backup.

/root/bin/backup_mksnap_ffs.sh
Code:
#!/bin/sh
# Create a UFS snapshot named snap-<date>-<type> and prune old ones.
# $1 is the snapshot type: daily, weekly or monthly.

snapdir=/home/backup/snapshot
date_now=$(date "+%Y%m%d-%H%M")

#/etc/rc.d/jail stop

/sbin/mksnap_ffs ${snapdir}/snap-${date_now}-$1

#/etc/rc.d/jail start

echo "Total snapshots: $(ls ${snapdir}/snap-*-* | wc -l)"
echo ""

echo Snapshots:
ls ${snapdir}/snap-*-* | sort -r
echo ""
echo Old snapshots:
# Keep the 3 newest daily, 3 newest weekly and 2 newest monthly snapshots;
# everything older than that is listed here and removed below.
ls ${snapdir}/*-daily   | sort -r | /usr/bin/sed -n '4,$p'
ls ${snapdir}/*-weekly  | sort -r | /usr/bin/sed -n '4,$p'
ls ${snapdir}/*-monthly | sort -r | /usr/bin/sed -n '3,$p'
echo ""

echo Deleting old snapshots:
for oldsnap in $(ls ${snapdir}/*-daily | sort -r | /usr/bin/sed -n '4,$p')
do rm -vf ${oldsnap}
done

for oldsnap in $(ls ${snapdir}/*-weekly | sort -r | /usr/bin/sed -n '4,$p')
do rm -vf ${oldsnap}
done

for oldsnap in $(ls ${snapdir}/*-monthly | sort -r | /usr/bin/sed -n '3,$p')
do rm -vf ${oldsnap}
done

# crontab -u root -l | grep snap
Code:
22      2       *       *       1-6     /root/bin/backup_mksnap_ffs.sh daily
33      3       *       *       7       /root/bin/backup_mksnap_ffs.sh weekly
44      4       1       *       *       /root/bin/backup_mksnap_ffs.sh monthly
 
I keep my data on two USB disks, one 10 TB and the other 9 TB. I tried to format the 9 TB disk with the exFAT file system, but gparted and gnome-disks refuse to do it. And yes, I unmounted the disk and ran gparted as root. As the partition table I chose msdos, but again, it won't create the exFAT file system.
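Two likely culprits: an MBR ("msdos") partition table cannot address more than 2 TB, so a 9 TB disk needs GPT, and GParted only creates exFAT from version 1.1 onward, and only when the exfatprogs tools are installed. On FreeBSD the steps might look like the sketch below; the device name da0 and the sysutils/exfat-utils package (which provides mkexfatfs) are assumptions, and the block only prints the commands because they wipe the disk:

```shell
# Sketch only: prints the plan instead of running it, since these commands
# destroy the existing partition table. Assumed: disk is /dev/da0 and
# mkexfatfs is available (sysutils/exfat-utils from ports/pkg).
print_exfat_plan() {
    disk="$1"
    cat <<EOF
gpart destroy -F ${disk}            # wipe the old table (destructive!)
gpart create -s gpt ${disk}         # GPT; MBR ("msdos") tops out at 2 TB
gpart add -t ms-basic-data ${disk}  # one big partition -> ${disk}p1
mkexfatfs /dev/${disk}p1            # create the exFAT filesystem
EOF
}

print_exfat_plan da0
```

Run the printed commands by hand only after double-checking the device name with gpart show.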
 