About SSD with UFS

Hi, has anyone had problems with an SSD using UFS? I have had many: the filesystem gets corrupted at the slightest provocation, and it has happened to me more than once.

I use FreeBSD as a desktop, so I do not need the advanced features that ZFS gives me, but so far it is the only filesystem that runs stably on my SSD.

Does anyone know anything about this? Any similar experiences?

The SSD is a 240 GB "samsung Corsair Force GS".

Thanks
 
...
...
For internal drives, with gjournal you will only very rarely need to recheck the filesystem in single-user mode with "fsck". This can still happen after a severe system crash.

gjournal is a little slower than UFS soft updates journaling, but in my experience it is the only reliable solution.

Too many people on this forum complain about UFS corruption while neglecting to set up gjournal.
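For reference, a minimal sketch of setting up gjournal on FreeBSD, roughly following the gjournal(8) man page. The device names (ada0p2 for the data partition, ada0p3 for the journal) are examples only; substitute your own, and note that labeling destroys the contents of the journal provider:

```shell
# Load the geom_journal kernel module
gjournal load

# Create a journal for ada0p2, stored on ada0p3
# (DESTROYS existing data on the journal provider)
gjournal label ada0p2 ada0p3

# Create a gjournal-aware filesystem and mount it async
# (safe with gjournal, since the journal provides consistency)
newfs -J /dev/ada0p2.journal
mount -o async /dev/ada0p2.journal /mnt

# Load the module automatically at boot
echo 'geom_journal_load="YES"' >> /boot/loader.conf
```

If the journal is kept on the same provider as the data, `gjournal label ada0p2` alone works, at the cost of sharing bandwidth between data and journal writes.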

Thanks for the info. I recently had corruption on an SSD drive. Fortunately I had a "mirror" image on another SSD, so it was easy to fix. I was not running *any* kind of journaling, so it was my fault. However, I see conflicting information on this subject, including this forum post:

https://forums.freebsd.org/threads/54689/

I am tempted to give gjournal a try.
 
I had a weird thing happen, using UFS on an SSD:

  • On this particular SSD, I had originally set up an MBR and partition table, and created a newfs on /dev/da1s1. All was good for a while.
  • Then, I decided to redo the SSD, but I didn't use a partition table (just newfs on /dev/da1). All was good for a while.
  • I noticed that, while I had used newfs on /dev/da1, I could still see both /dev/da1 and /dev/da1s1 in /dev. No problem, right?
  • Wrong.
  • One day I accidentally tried to mount the SSD with /dev/da1s1. It was trashed. I couldn't recover it and had to start over with a fresh newfs (but I used dd to make sure I'd cleared out the first few megabytes before the newfs).
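The "clear the first few megs with dd" step described above can be sketched as follows. /dev/da1 is the example device from the story; this destroys everything on it:

```shell
# Zero the first 4 MB of the disk to wipe any stale MBR/GPT/filesystem
# metadata (DESTROYS DATA on /dev/da1)
dd if=/dev/zero of=/dev/da1 bs=1m count=4

# Alternatively, gpart can remove any recognized partition table:
#   gpart destroy -F da1

# Then create the fresh filesystem directly on the raw device
newfs /dev/da1
```

Zeroing the start of the disk also removes the old MBR slice table, which is what left the stale /dev/da1s1 node lying around in the scenario above.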
 
I get the part about dangerously dedicated file systems. But I've actually had fewer issues with storage media when using dangerously dedicated file systems. I think that's because when a disk is NOT dangerously dedicated, I tend to pass it around between various operating systems, so it's more likely to get clobbered simply because it moves around more, whereas my dangerously dedicated disks stay on one machine. The odd scenario I described is, as far as I can remember, the only time I've lost data on "dangerously dedicated" media.

What I'm curious about is the mechanics of what transpired (the dedicated disk versus the standard partition table and file system). I would think that newfs would clear the space to be sure there was nothing that could cause trouble later. But, I guess it does not. My dedicated device was never on a non-FreeBSD machine. A quote from somewhere:

It's "dangerous" because that partitioning format is rare outside of
BSD-based systems.

So, "passing it around" makes the other operating systems mangle it if it's dedicated. I mark all my dangerously dedicated disks with bright yellow fingernail polish so I don't plug them into the wrong system, and I always use them on the same machine. I actually prefer to work it that way.

Another snippet from a mailing list:
The reason it's called 'dangerously dedicated' is that other
systems - or even the same system months/years later, when you forget and
run the wrong tools - won't know there's a filesystem there

So it's clear it's not a recommended practice, although I've been doing it for years. A later version of the OS may also trash the data on a dedicated disk, and there is mention of the BIOS causing trouble. What I do when upgrading to a later version of the OS is transfer the data rather than access it directly. Newbies (or maybe anybody) should probably avoid this one though, so it was a good point made by Tingo. Caveat emptor.
 
I had a weird thing happen, using UFS on an SSD:

  • On this particular SSD, I had originally set up an MBR and partition table, and created a newfs on /dev/da1s1. All was good for a while.
  • Then, I decided to redo the SSD, but I didn't use a partition table (just newfs on /dev/da1). All was good for a while.
  • I noticed that, while I had used newfs on /dev/da1, I could still see both /dev/da1 and /dev/da1s1 in /dev. No problem, right?
  • Wrong.
  • One day I accidentally tried to mount the SSD with /dev/da1s1. It was trashed. I couldn't recover it and had to start over with a fresh newfs (but I used dd to make sure I'd cleared out the first few megabytes before the newfs).

If you were seeing /dev/da1s1 on your "dangerously dedicated" drive, you didn't properly remove the partition table. If you had, it wouldn't have shown up. In my experience, a "dangerously dedicated" disk ignores the partition table but doesn't actually use the space it is stored in.

Like you, I use "dangerously dedicated" filesystems on systems that are completely dedicated to FreeBSD (no dual boot, not movable to other systems). I've done this since the late 1990s. It has never caused a problem.

Even when I've run the wrong tool (like running fdisk), the tool spits out an error message about there being no current partition table.

The "dangerous" part is that other operating systems will think the drive is completely unused, and many of them will helpfully put a partition table on the drive.


Ironically, with ZFS, I've started creating partitions on my drives. Since drives with the same rated capacity are not consistently the exact same number of sectors, hiding a few hundred megabytes gives some buffer in case a drive fails and needs to be replaced by one that is a bit smaller.
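That approach can be sketched with gpart. The device (da2), label (zdisk0), pool name (tank), and the deliberately undersized partition size are all example values; pick a size a few hundred MB below your drive's actual capacity:

```shell
# Create a GPT scheme on the new drive
gpart create -s gpt da2

# Add a ZFS partition, 1 MB aligned, sized BELOW full capacity
# (e.g. 930g on a nominal 1 TB drive) to leave replacement slack
gpart add -t freebsd-zfs -a 1m -l zdisk0 -s 930g da2

# Later, when swapping in a (possibly slightly smaller) replacement:
#   zpool replace tank <failed-disk> gpt/zdisk0
```

The GPT label (`gpt/zdisk0`) also keeps pool membership stable even if the device renumbers between boots.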
 
If you were seeing /dev/da1s1 on your "dangerously dedicated" drive, you didn't properly remove the partition table. If you had, it wouldn't have shown up. In my experience, a "dangerously dedicated" disk ignores the partition table but doesn't actually use the space it is stored in. ...

Thanks - after the fact, that's obviously true for me. I usually wipe the first few megabytes of a drive before I create a dedicated disk, but this time I didn't. Intuitively, letting different operating systems' partitioning code (with their attendant bugs/idiosyncrasies) play with the very critical initial sectors of drives - the ones holding partition tables and boot loaders - seems likely to cause trouble eventually. That's especially true if (like me) a person tends to use a pretty wide range of systems. Hence my use of fingernail polish.

In recent years (with the addition of GPT and lots of code changes) I think we went through a period of transition where intermingling techniques was more likely to cause trouble (at least for me). Perhaps the official answer would be to use FreeBSD's normal partitioning and just dedicate that. But (like you) I've had excellent results with dedicated disks. It only works if one is very careful and can keep it all straight, with i's dotted and t's crossed, which my fingernail polish does for me. To each his own.

BTW: it's not always yellow. As a fellow who normally doesn't use nail polish, I find the array of different bright colors pretty impressive. The women standing in the same store aisle look at you funny when you pick up a large collection of these! I also use it on my keys and some other things. Very versatile. I put a big dot on each board or machine.
 
No issues with Samsung 850 pro evos; I'm running three and I beat the heck out of them. They just work. I set soft updates and TRIM, and leave journaling off. No issues so far.
 
I bet 99% of first-time FreeBSD users who walk away are scared to death by endless reboots:
by default, UFS on stable ships with TRIM disabled and SU+J enabled -> frequent fs corruption -> core dump -> FreeBSD enters an endless reboot cycle.
The user goes away.
When I enabled TRIM and disabled SU+J, the corruption stopped.
Lenovo Ideapad 700isk17
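For anyone wanting to try the same change, a sketch using tunefs(8); the device /dev/ada0p2 is an example, and the filesystem must be unmounted (e.g. from single-user mode) when you run these:

```shell
# Enable TRIM on the UFS filesystem (takes effect at next mount)
tunefs -t enable /dev/ada0p2

# Disable soft updates journaling (SU+J)...
tunefs -j disable /dev/ada0p2

# ...while keeping plain soft updates on
tunefs -n enable /dev/ada0p2

# Print the resulting flags to verify
tunefs -p /dev/ada0p2
```

`tunefs -p` shows the current state of all tunable flags, so you can confirm TRIM is on and journaling is off before rebooting.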
 
I bet 99% of first-time FreeBSD users who walk away are scared to death by endless reboots:
by default, UFS on stable ships with TRIM disabled and SU+J enabled -> frequent fs corruption -> core dump -> FreeBSD enters an endless reboot cycle.
The user goes away.
When I enabled TRIM and disabled SU+J, the corruption stopped.
Lenovo Ideapad 700isk17

Why would journaled soft updates help corrupt your fs when their purpose is the exact opposite? SU+UFS has saved me far more often after a crash or a sudden power interruption than journaling with EXT3/4 or ReiserFS, not to mention the old days of unjournaled FAT32 under Windows, when I had to pray my system would survive every reset after a BSOD. I'd say that in my experience the frequency of data loss with SU+UFS is around 2/3 of that with EXT4 or ReiserFS with journaling.

SU+J prevents you from ever needing to run a background fsck at boot in order to fix leaked blocks, wipe out garbage and free up space.

In my opinion and experience, UFS is a very good file system. It's simple, performant, well documented, made rock-solid over the years, doesn't require great resources, is easy to understand, and yet is tunable enough in FreeBSD. I have experience with many other filesystems, and the only ones I like better are HAMMER and Apple's new APFS (which does not use journaling either), although there are still things for which I prefer UFS over them.

Some people come to FreeBSD mainly because of ZFS... I stay with FreeBSD because of UFS, and obviously not only because of that.

Regarding TRIM, how would it ever be involved in fs/metadata corruption? Really, unless you keep writing and deleting a large number of big files for a long time, from my point of view having TRIM enabled or disabled doesn't affect performance noticeably (SSDs are always surprisingly fast). All the more so on a Unix system, where you'll see maybe a 3% fragmentation rate after 10 years or so.

Still, not having TRIM enabled can't account for system crashes, kernel panics, and sudden reboots. It must have been a local problem; you could open a thread explaining the situation in order to get support.
 