FreeBSD on UFS, yes? There are edge cases where the file system is not as it should be following an interruption (but let's not jump to any conclusions).
Worth noting: at least one of the bugs fixed by the commit above was known to affect the mysql user – see for example <https://www.google.com/search?q="pw:+user+'mysql'+disappeared+during+update"&tbs=li:1#unfucked>. (I don't know enough about MySQL to tell whether the corruption in your case was a consequence of the user disappearing during freebsd-update(8) with 11.3-RELEASE; I imagine not.)
At <https://www.freshports.org/databases/mysql57-server/#message> there's a hint to run mysql_upgrade – is it possible that something related failed (and caused corruption) in the absence of the mysql user? (Again, I'm not well-versed here, but I imagine not.)
At <https://cgit.freebsd.org/ports/tree/UPDATING>, nothing recent re: MySQL.
At <https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436365&product=Ports & Packages&query_format=advanced&resolution=---&short_desc=databases/mysql57-server&short_desc_type=allwordssubstr> for 5.7, at a glance I don't see anything matching.
At <https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436364&product=Ports & Packages&query_format=advanced&short_desc=databases/mysql56-server&short_desc_type=allwordssubstr> (all closed) for 5.6, no mention of corruption on the page.
Yes, FreeBSD on UFS. Frankly, I don't understand ZFS (especially the snapshot stuff), and those servers were on UFS before ZFS existed.
I really don't know what happened to mysql. I used the word corruption because the server wouldn't start and couldn't read any of the dbs. I changed back to 5.6 and it still wouldn't start. I don't understand mysql well enough to overcome a problem like that. Someone else may have been able to restore the dbs, but the folks responsible for the server didn't want to spend the money for a pro to fix it.
Historically, when a mysql instance went sideways, I simply wiped it, reinstalled it, and recreated the dbs; mysql was then able to read the files on the hard drive, and everything was back to normal. That didn't work this time, and I have no idea why. The files are still there, but if they won't load, I assume something got corrupted.
I did something really stupid with my backups (since corrected). As I mentioned earlier, I wrote a script that writes tar.gz files to the /var/backup partition and also uploads a copy to Dropbox (using dropbox-uploader.sh), but the script deleted the previous day's file both on the hard drive and on Dropbox. So, when the system crapped out and the backup script ran – poof, all gone.
That has since been corrected. The local file is deleted each time the script runs, but the Dropbox duplicates are kept for seven days. That at least gives me a chance to gather my thoughts and preserve good copies before it's too late. Live and learn. The reason for the daily file deletion was space on the hard drives, but space isn't a problem on Dropbox. I should have thought of that, but I didn't.
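For anyone wanting to copy the corrected rotation, it amounts to something like the sketch below: prune only local copies each run, and on the Dropbox side upload first and delete only copies older than seven days. The paths here are demo stand-ins created with mktemp (on the real box they'd be /var/backup and the MySQL data directory), and the dropbox_uploader.sh lines are commented out since they need credentials.

```shell
#!/bin/sh
set -eu

# Demo paths so the sketch is runnable as-is; substitute the real ones.
SRC_DIR=$(mktemp -d)      # stands in for the data directory being backed up
BACKUP_DIR=$(mktemp -d)   # stands in for /var/backup
KEEP_DAYS=7

echo "demo data" > "$SRC_DIR/ibdata1"
# Simulate a leftover archive from an earlier run.
touch "$BACKUP_DIR/backup-1999-12-31.tar.gz"

today=$(date +%Y-%m-%d)
archive="$BACKUP_DIR/backup-$today.tar.gz"

# 1. Write today's date-stamped archive.
tar -czf "$archive" -C "$SRC_DIR" .

# 2. Disk space is tight locally, so keep only today's file on the box.
find "$BACKUP_DIR" -name 'backup-*.tar.gz' ! -name "backup-$today.tar.gz" -delete

# 3. On Dropbox, upload first and only then prune copies older than
#    KEEP_DAYS, so a crash on backup day can never wipe the last good copy.
#    (Commented out: assumes dropbox_uploader.sh and configured credentials.)
# dropbox_uploader.sh upload "$archive" "backups/"
# old=$(date -v -"$KEEP_DAYS"d +%Y-%m-%d)   # FreeBSD date(1) syntax
# dropbox_uploader.sh delete "backups/backup-$old.tar.gz" || true

ls "$BACKUP_DIR"
```

The point of the date-stamped names is that "keep seven days" becomes a simple matter of deleting one predictable old filename, instead of the delete-then-upload ordering that lost everything.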
It's all water under the bridge now. The folks that "owned" that server clearly didn't care that much, because it's been down for five months and they've not done anything to correct the problem or even to start over from scratch.