Due to my profession I do a lot of writing, which implies a lot of re-drafting, and sometimes I decide - from memory - that the second re-draft of the fifth re-draft was the best.
Suggestion: Use software that internally keeps versions. For example, if you write your documents using Google Docs (which works on the web and stores them in the cloud), it automatically keeps versions at some frequency. I don't know what that frequency is, nor how to adjust it, but that's a homework problem. I suspect that other editing programs (MS Word, LibreOffice, ...) have similar functionality. If you use something like emacs, you get an automatic backup in foo.txt~ (the file name with a twiddle attached); you could make it a habit to just rename all twiddle files to have a date in them (for example foo.txt#202403142106), and there is your manual versioning strategy. But using snapshots is a much better idea, easier, more efficient, and less error prone.
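If you want to automate the twiddle-file renaming, a short shell loop will do it; a minimal sketch, assuming your drafts live in the current directory (the timestamp format is just one possible choice):

    # rename every emacs backup file (*~) to a dated copy, e.g. foo.txt.20240314-2106
    for f in *~; do
        [ -e "$f" ] || continue            # skip if there are no *~ files at all
        mv "$f" "${f%~}.$(date +%Y%m%d-%H%M)"
    done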
Now, if I set the snapshots at, let us say, 15 minutes, I can go back to that version and not have to re-create it from memory. Is there any utility that would let me set the snapshot frequency on the fly? Meaning easy switching between, e.g., the times when I am writing and the rest of the time?
At the lowest level, snapshots are taken when you say so. A normal user (doesn't have to be root) can take a snapshot of a dataset at any time, if they have been given the specific permission (I think that is done with a "zfs allow ..." command, I don't remember the details).
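For the record, here is a minimal sketch of the delegation and of a manual snapshot; the dataset tank/home/alice and the user alice are made-up names for illustration, and you should check zfs-allow(8) for the exact permission names you need on your system:

    # as root, once: let user alice create (and mount) snapshots of her own dataset
    zfs allow alice snapshot,mount tank/home/alice

    # as alice, whenever you want a restore point: snapshot with a timestamp in the name
    zfs snapshot tank/home/alice@manual-$(date +%Y%m%d-%H%M)

    # see what snapshots exist for that dataset
    zfs list -t snapshot -r tank/home/alice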
There are also tools around that take snapshots automatically. If you are running root on ZFS, then any system upgrade takes a snapshot. I bet those tools are adjustable, but I don't use any: I take snapshots when "the spirit moves me" (because I know I'm about to take on a big task, which is likely to explode in my face). Because ZFS snapshots are relatively cheap (they use little disk space, only what was changed, and add minimal overhead), feel free to take snapshots by hand whenever you feel that you may have to go back to an older version.
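If you do want the every-15-minutes behaviour from the question above, the simplest adjustable mechanism is cron; a minimal sketch, again with the made-up dataset tank/home/alice (note that % has to be escaped as \% inside a crontab):

    # crontab entry: take a snapshot every 15 minutes, named after the current time
    */15 * * * * /sbin/zfs snapshot tank/home/alice@auto-$(date +\%Y\%m\%d-\%H\%M)

Switching the frequency "on the fly" is then just editing or commenting out that line, and pruning old auto-snapshots is the other half of the job; purpose-built tools (sanoid, zfs-auto-snapshot and similar) handle both the schedule and the cleanup if you would rather not roll your own.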
I know that it is partially off-topic, but in your post # 12, you wrote (emphasis supplied):
"if your backup solution is badly built, and most are"
could you please elaborate on the emphasized part?
Eric Borisch already mentioned it: A backup is useless (but also looks like it is perfect) until you have to restore from it. And that's usually when the fight starts. Restoring is difficult, and rarely well tested, documented, or practiced. As an example, I've written my own backup software for home use; it has some very intelligent features (which are hard to find in any free backup software, and not even common in commercial stuff), is highly efficient, and is nicely tailored to my needs. Except for one thing: I was too lazy to actually implement restore, meaning it would have to be done by hand. If you just copy files back from the backup file system, you'd probably miss 20% of all files; if you want to get those restored, you're going to spend hours doing database queries, writing awk and python scripts, and copying extra files. And while I've restored a handful of files here or there, I've never done a full restore; sadly, one has to expect that it won't really work, never having been tested.
Another horror story is my wife's company (she's not a computer person): they had a "professional" IT company managing their servers, which included configuring the storage and making nightly backups onto tape (an IT technician came in every night, took the old tape out, put it into a safe storage vault, and put a new blank tape in). Then one day ONE of their disks failed (just one). The server with all the engineering data and files went down hard. My first question was: "don't you guys use RAID for redundancy?" So my wife read up on RAID and asked the IT guy. He was very proud of the answer: "Yes, for efficiency we use RAID-0, that way we get the maximum capacity". He didn't even understand why it didn't work. Then there were the backups: they were paying a good amount of money for high-quality backup software (I think they used Legato), but it turned out the IT guy had configured it wrong, and every night it backed up NOTHING onto a fresh tape: the list of file systems to back up was empty, and had never actually been configured. No wonder the backups ran so fast! Ultimately, they ended up paying a hardware data recovery company a lot of money (tens of thousands), and they managed to save 90% of the files from the damaged disk drive. Plus they fired their IT service company (duh).
In the meantime, they needed to continue getting work done (the whole 20-person engineering department was completely stranded, not having any files). Fortunately, my wife often worked from home, and in the era before laptop computers and fast internet at home, she did that by copying files onto USB sticks; she had a giant key ring with two dozen USB sticks, one for each project. For several weeks, the files scraped off my wife's USB sticks and the files found on floppies on other engineers' desks were the only backup they had, until the data recovery company recovered the up-to-date copies. The whole thing was a nightmare.
This is why I keep preaching "do what I say, not what I do": don't build your own storage solution; instead, outsource it to competent people. And those competent people are not small fly-by-night IT service providers (see the horror example above); go to big companies such as HP, IBM, Oracle, Amazon, Google or Microsoft. Really, for the most reliable experience, edit your documents on the web in the cloud, and upload any files to a cloud provider. As an example, you can find utilities to connect from FreeBSD to Amazon S3. If you look at the pricing guides of the cloud companies, you'll find that they all have a "free tier": if you use only a little bit of storage/CPU/network/..., the price is (near) zero.
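As a concrete illustration of the "upload your files to a cloud provider" part, here is a minimal sketch using the standard AWS command line client (the bucket name my-backups is made up; you would install the client from packages and run "aws configure" once to store your credentials):

    # copy a single file up to S3
    aws s3 cp thesis-draft.txt s3://my-backups/thesis-draft.txt

    # or mirror a whole directory, transferring only what changed since last time
    aws s3 sync ~/documents s3://my-backups/documents

The free tier limits and per-gigabyte prices change, so check the provider's current pricing page rather than trusting any number you read in a forum post.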
Based on ralphbsz's writing, I gather that he is well versed in file systems, including reliability and protection. I am very interested in his ideas.
I've been working in storage and its durability for the last ~25 years; of the companies on the above list, three are former employers. In those 25 years, I had to go and officially tell a customer that we'd lost their data TWICE (and a third time, I had to tell them that their data was no longer redundant and at high risk of being lost at any moment, but fortunately we got super lucky and no hardware failed until we could put things back together correctly). Those are very uncomfortable experiences, and they were only tolerable because I had colleagues and managers who stood by me. Interestingly, the two customers whose data was indeed lost took it very well, didn't get mad, and understood that the humans who build systems and software are not perfect. The third customer (whose data was not actually lost) became really rude and acted like a complete asshole. Which proves that humans are also irrational and difficult to deal with.
Another example: Several big cloud storage companies claim that their data is reliable to the level of "11 nines", or, to put it mathematically: the probability that an individual file or object is damaged is less than 10^(-11) per year. In the cases where I was able to measure it, I can attest that the claim is truthful (and no, I cannot give details on that; it's confidential).
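To give a feeling for what 10^(-11) per year means (my own back-of-the-envelope arithmetic, not the providers' numbers): if you store a billion (10^9) objects, the expected number of damaged objects per year is about 10^9 x 10^(-11) = 0.01, i.e. roughly a one-in-a-hundred chance that even a single one of your billion objects is damaged in a given year.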