I need help understanding pkg upgrade

… tired of wrestling with ports. …

I understand, but it need not be a fight.

ports-mgmt/poudriere-devel is our friend; "… most people will find it useful to bulk build ports …".

Fears of mixing packages and ports are often unjustified. Here:

Code:
     -b name  Specify the name of the binary package branch to use to prefetch
              packages.  Should be "latest", "quarterly", "release_*", or url.
              With this option poudriere will first try to fetch from the
              binary package repository …

– that's mixture by design, and it's not heresy. It works.
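
For anyone who wants to try that mixed approach, here's a minimal sketch (the jail name, tree name, and list file are only examples; adjust for your own setup):

Code:
     # one-time setup: a build jail and a ports tree
     poudriere jail -c -j 13amd64 -v 13.0-RELEASE
     poudriere ports -c -p default

     # build only the ports named in the list file, letting poudriere
     # prefetch everything else from the official "latest" repository
     poudriere bulk -j 13amd64 -p default -b latest -f /usr/local/etc/poudriere.d/pkglist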

… 11.4-RELEASE on both servers. I had a bad experience upgrading to 13.x-RELEASE on another server, so I've opted to stay with 11.4-RELEASE for now.

Is there a record of the experience and if so, can you share a link? Thanks.

After 11.4-RELEASE dies: the longer you defer an upgrade, the greater the risk of you encountering a wrestler.
 
I don't use portmaster (never have) but can't it be used to pull in packages in lieu of building from source to satisfy dependencies?
When you start a build with ports-mgmt/portmaster it checks to see what dependencies will have to be pulled in for the build and displays a list for you to peruse before you start the build.

You can choose not to, and it will give you a message telling you that if you don't want to update everything it lists, you can use portmaster -i. That will go through the list interactively and ask whether or not you want to update each particular port.

When you've gone through every port in the list and issued the command to begin the build it will update only the ports you've chosen.
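
If it helps to see it, a rough sketch of both behaviours (the port origin is just an example):

Code:
     # ask, port by port, which of the listed dependencies to update
     portmaster -i www/firefox-esr

     # or, to answer the question above: use prebuilt packages where
     # available instead of building dependencies from source
     portmaster -P www/firefox-esr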
 
Is there a record of the experience and if so, can you share a link? Thanks.

After 11.4-RELEASE dies: the longer you defer an upgrade, the greater the risk of you encountering a wrestler.
No, there's no record of it, except in my head, and the site is down (likely permanently), so there's no url either.

First, I have done multiple OS upgrades with no problems encountered. But the upgrade to 13.x-RELEASE was a PITA.
1) After rebooting, I could not login. Turns out the /etc/passwd db had gotten corrupted somehow. The hosting party had to fix it and then create a new password for me, after which I could login and change my password.
2) Updating ports was a major disaster. I tried to update mysql 5.6 to mysql 5.7 and the dbs were corrupted. By the time I realized what was going on, my backups were all corrupted as well. That site is now down and has been for over five months. (Long story - the length of the outage is not my fault. The parties involved are still trying to decide what to do.)

Unfortunately, I did not backup the dbs immediately before the upgrade. I assumed my backups would be fine. They were not. (Yes, I know about assuming things.) The entire system was db-based, so it's all gone now.
 
Thanks,

… /etc/passwd …

If it was the bug with which I'm familiar – and if you encountered it when upgrading from FreeBSD 11.3 or less:
  • the bug was not specific to FreeBSD 13.0-RELEASE
  • there was a sense of randomness, like, it was difficult to understand why one upgrade from x.z to y.y failed when an apparently equal upgrade from x.z to y.y succeeded
  • the bug should not recur for you with any upgrade from 11.4-RELEASE.
<https://cgit.freebsd.org/src/commit/?id=2ca137b4306dea2dbe1db31c44102060caedb19a&h=releng/11.4> committed to releng/11.4 2021-02-24.
 
… tried to update mysql 5.6 to mysql 5.7 and the dbs were corrupted. …

FreeBSD on UFS, yes? There are edge cases where the file system is not as it should be following an interruption (but let's not jump to any conclusion).
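
If you ever want to rule that out, the usual first step is a forced check of the affected file system (the device name below is only a placeholder):

Code:
     # from single-user mode, with the file system unmounted or read-only
     fsck -fy /dev/ada0p2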

Worth noting: at least one of the bugs that were fixed by the commit above was known to affect the mysql user – see for example <https://www.google.com/search?q="pw:+user+'mysql'+disappeared+during+update"&tbs=li:1#unfucked>. (I don't know enough about MySQL to tell whether corruption, in your case, was a consequence of disappearance of the user during freebsd-update(8) with 11.3-RELEASE; I imagine not.)

<https://www.freshports.org/databases/mysql57-server/#message> there's the hint to run mysql_upgrade, is it possible that something related failed (and caused corruption) in the absence of the mysql user? (Again, I'm not educated but I imagine not.)
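
For reference, the step that FreshPorts message points at is normally just the following, run after the new server binaries are installed and while the service is up (sketch only; I can't say whether it would have behaved differently in your case):

Code:
     service mysql-server start
     mysql_upgrade -u root -p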

<https://cgit.freebsd.org/ports/tree/UPDATING> nothing recent re: MySQL.

<https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436365&product=Ports & Packages&query_format=advanced&resolution=---&short_desc=databases/mysql57-server&short_desc_type=allwordssubstr> for 5.7, at a glance I don't see anything matching.

<https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436364&product=Ports & Packages&query_format=advanced&short_desc=databases/mysql56-server&short_desc_type=allwordssubstr> (all closed) for 5.6, no mention of corrupt on the page.
 
FreeBSD on UFS, with MySQL, and started back in the FreeBSD 7.x days (can't remember the MySQL version - might have been 4.x in those days).

So far I've never encountered upgrade corruption issues (as soon as I post this, I will probably be punished!). I've seen the missing-user error; it seems to be addressed in 13.x. I don't think I ever encountered it in production, just when setting up development servers or pushing things or testing upgrades.

Your story is definitely a reminder to have backups, rotational backups, offsite backups, etc. And to back up before upgrades. And to have test environments that you can trash, rebuild, and re-test on. I understand these things weren't applicable in your case/environment and don't help you now; I'm just saying the above for people who might read this thread.
 
Thanks,



If it was the bug with which I'm familiar – and if you encountered it when upgrading from FreeBSD 11.3 or less:
  • the bug was not specific to FreeBSD 13.0-RELEASE
  • there was a sense of randomness, like, it was difficult to understand why one upgrade from x.z to y.y failed when an apparently equal upgrade from x.z to y.y succeeded
  • the bug should not recur for you with any upgrade from 11.4-RELEASE.
<https://cgit.freebsd.org/src/commit/?id=2ca137b4306dea2dbe1db31c44102060caedb19a&h=releng/11.4> committed to releng/11.4 2021-02-24.
Well, that's comforting. The system I upgraded was 11.3, but both of the ones we've been discussing are 11.4.
 
FreeBSD on UFS, yes? There are edge cases where the file system is not as it should be following an interruption (but let's not jump to any conclusion).

Worth noting: at least one of the bugs that were fixed by the commit above was known to affect the mysql user – see for example <https://www.google.com/search?q="pw:+user+'mysql'+disappeared+during+update"&tbs=li:1#unfucked>. (I don't know enough about MySQL to tell whether corruption, in your case, was a consequence of disappearance of the user during freebsd-update(8) with 11.3-RELEASE; I imagine not.)

<https://www.freshports.org/databases/mysql57-server/#message> there's the hint to run mysql_upgrade, is it possible that something related failed (and caused corruption) in the absence of the mysql user? (Again, I'm not educated but I imagine not.)

<https://cgit.freebsd.org/ports/tree/UPDATING> nothing recent re: MySQL.

<https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436365&product=Ports & Packages&query_format=advanced&resolution=---&short_desc=databases/mysql57-server&short_desc_type=allwordssubstr> for 5.7, at a glance I don't see anything matching.

<https://bugs.freebsd.org/bugzilla/buglist.cgi?component=Individual Port(s)&list_id=436364&product=Ports & Packages&query_format=advanced&short_desc=databases/mysql56-server&short_desc_type=allwordssubstr> (all closed) for 5.6, no mention of corrupt on the page.
Yes, FreeBSD on UFS. Frankly, I don't understand ZFS (especially the snapshot stuff), and those servers were UFS before there was a ZFS.

I really don't know what happened to mysql. I used the word corruption because the server wouldn't start and couldn't read any of the dbs. I changed back to 5.6 and it still wouldn't start. I don't understand mysql well enough to overcome a problem like that. Someone else may have been able to restore the dbs, but the folks responsible for the server didn't want to spend the money for a pro to fix it.

Historically, when a mysql instance went sideways, I simply wiped it, reinstalled it, and recreated the dbs, after which mysql was able to read the files on the hard drive and everything was back to normal. That didn't work this time, and I have no idea why. The files are still there. But, if they won't load, I assume something got corrupted.

I did something really stupid with my backups (since corrected). As I mentioned earlier, I wrote a script that writes tar.gz files to the /var/backup partition and also uploads a copy to Dropbox (using dropbox-uploader.sh), but the script deleted the previous day's file both on the hard drive and on dropbox. So, when the system crapped out, and the backup script ran, poof. All gone.

That has since been corrected. The local file is deleted each time the script runs, but the Dropbox duplicates are kept for seven days. That at least gives me a chance to gather my thoughts and preserve good copies before it's too late. Live and learn. The reason for the daily file deletion was space on the hard drives, but space isn't a problem on Dropbox. I should have thought of that, but I didn't.
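
In case it's useful to anyone, a minimal sketch of that corrected arrangement (the paths, file names, and uploader invocation are assumptions, not the actual script):

Code:
     #!/bin/sh
     # nightly backup: keep one local copy, a rolling week on Dropbox
     DATE=$(date +%Y%m%d)
     ARCHIVE=/var/backup/site-${DATE}.tar.gz

     # paths below are examples
     tar -czf "${ARCHIVE}" /var/db/mysql /usr/local/www
     /usr/local/bin/dropbox-uploader.sh upload "${ARCHIVE}" "backups/site-${DATE}.tar.gz"

     # free local disk space, but keep seven days of remote copies
     find /var/backup -name 'site-*.tar.gz' -mtime +1 -delete
     OLD=$(date -v-7d +%Y%m%d)
     /usr/local/bin/dropbox-uploader.sh delete "backups/site-${OLD}.tar.gz"

Worth adding: a mysqldump taken before the tar step would make the database part of the backup more trustworthy than copying the live data files.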

It's all water under the bridge now. The folks that "owned" that server clearly didn't care that much, because it's been down for five months and they've not done anything to correct the problem or even to start over from scratch.
 
It's all water under the bridge now. The folks that "owned" that server clearly didn't care that much, because it's been down for five months and they've not done anything to correct the problem or even to start over from scratch.
Lost count of the number of times I've fretted about something and the people who should have cared didn't. But like you say, learn what you can and move on. When you've got restricted resources, just have to do the best you can and sometimes you lose.
 
Lost count of the number of times I've fretted about something and the people who should have cared didn't. But like you say, learn what you can and move on. When you've got restricted resources, just have to do the best you can and sometimes you lose.
Sometimes, in big organizations, it's possible to lose track of servers you actually 'own'. And then they get forgotten about - until it's time to either upgrade or they're so broken it's easier to just get rid of them. I've seen quite a few madhouses over that - but after doing some thorough homework, I just sit at my desk, drink tea, and watch a burning dumpster barge float past me (figuratively, of course).
 
Trihexagonal, I try not to disrupt service operations any more than I have to, so I update infrequently. But there are some ports that take forever to update. llvm80, llvm90, and cmake are three that make me groan every time they show up on the list (which is often.) There are some others, but those three always take a very long time to build.
I ran portmaster -a on my Thinkpad T400 yesterday morning and it returned 60 ports to update.

This is the machine I used last time in the 2nd Round of the Online Turing Test. I had to keep a Firefox interface for my bot up on it for 24 hours straight or be disqualified. That was 3 months ago, and it could still be up and running now if I had stayed connected; my W520 .mp3 player is at over 4 months.

But I wanted to stay current with system patches and keep my programs up to date for stability, so I went ahead and started it running yesterday morning. It finished this morning; I issued # rehash, updated security/rkhunter and my ports tree, and ran portmaster -a again this morning. Overnight lang/rust and www/firefox-esr got an update, and both take a long time to compile.
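
For anyone following along, that sequence works out to roughly this (a sketch; exact commands depend on how you keep your ports tree up to date):

Code:
     # base system patches
     freebsd-update fetch install

     # refresh the ports tree, then rebuild whatever is out of date
     portsnap fetch update
     portmaster -a

     # csh/tcsh: rebuild the shell's command hash table afterwards
     rehash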

But the Test date isn't till next Saturday. So it's toiling away with the only OPOLAR gaming fan I have keeping it cool, and it will be done in a matter of a few hours. That fan was well worth the $30 or so it cost, and I'm going to get another one, or I'd be doing this T61 along with the T400.
 