Port Firefox installation taking a very very long time

I'm in the "no swap will cause the OOM-killer" camp, so it will be interesting to see what actually happens if you try the no-swap scenario. But it might be a long wait if it takes many hours to grind through.
 
Either that will fix my problem, or it will hopelessly break my system beyond repair. Either way, I'm very tempted to tinker with it, just because I like to cause problems with my own system. In all seriousness, I should probably learn more about what I'm doing before I try anything else.
Removing all swap won't break your system beyond repair. It'll just guarantee that any memory-hungry app will never run. And if those memory-hungry apps are anything like a DBMS, your database could become corrupted.

The answer is to add RAM or reduce the number of concurrent jobs.
 
I'm in the "no swap will cause the OOM-killer" camp, so it will be interesting to see what actually happens if you try the no-swap scenario. But it might be a long wait if it takes many hours to grind through.
There's no sense playing with fire to see if one will get burnt.

But the OP should be able to see how much swap it's using just by running top. Subtract most of the ZFS ARC and UFS buffer cache; that should tell you. However, with a tiny ARC and buffer cache the disk will be hammered, because reads won't come from cache, they'll come from disk. Whether it's paging I/O or I/O because files or portions of files are not in cache, it's still I/O, and it will be slower than if there was sufficient RAM.
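On FreeBSD, swapinfo(8) reports per-device swap use; as a small sketch, the awk pipeline below boils that output down to a single percentage. The sample figures are made up so the pipeline can be seen end to end; on a real system you would feed it `swapinfo -k` instead.

```shell
# Sample swapinfo(8)-style output (made-up figures); on FreeBSD you would
# pipe the real thing: swapinfo -k | awk '...'
swapinfo_sample='Device          1K-blocks     Used    Avail Capacity
/dev/ada0p3       4194304  1048576  3145728    25%'

# Skip the header line, sum Used and total blocks across all devices,
# and print the overall percentage of swap in use.
printf '%s\n' "$swapinfo_sample" | awk 'NR > 1 { used += $3; total += $2 }
    END { printf "swap in use: %.0f%%\n", 100 * used / total }'
# prints: swap in use: 25%
```

The same idea works interactively too: top's header shows Swap totals directly, so the awk step only matters if you want the number in a script.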

Yeah, I used to do performance analysis and tuning on the IBM mainframe. The rule of thumb there was that if paging uses more than 5% of system resources, you need more RAM. IMO the same rule applies to FreeBSD. RAM is the best investment one can make in a computer.
 
Is your laptop an amd64? Just in case. And my laptop is over 14 years old. Installing Firefox listed over 70 other ports to build before it. Use pkg.
 
Either that will fix my problem, or it will hopelessly break my system beyond repair. Either way, I'm very tempted to tinker with it, just because I like to cause problems with my own system.
You will see whether the process goes faster and you can wait for it to finish, or you can break it off in the first 5-10 minutes. What can be broken? Is your system in a better condition now, with an unfinished Firefox compilation? Many programs allocate more RAM if there is free RAM (including swap). IMO the compiler will work normally with 8 GB RAM (I hope there are at least 4 GB of free RAM when the PC is idle; you can verify this).
 
I hope it will display an "Out of memory" error and stop if RAM is full (without an OS crash or a broken file system).
 
Removing swap space to "avoid paging" is an illusion.

You remove the system's ability to page one kind of VM page, but not the others. Specifically, you still allow paging of read-only mapped regions, because those are not moved to swap space; they are just dropped from RAM and, when needed later, loaded from the filesystem again. Obviously, by removing the ability to page one kind of page while keeping the workload the same, you increase the rate of paging for the other kind.

Read-only mapped regions notably include the code segments mapped from ELF executables. So if you have a compilation going on with a compiler written in C or a similar language, you constantly drop the actual compiler itself from RAM.
 
Even for things like rust and llvm? I’m not going to try, just curious!

Compilers will work just fine with 8 GB RAM; the question is how many of them you can run at the same time. That's why it is critical to discuss RAM requirements while knowing how many CPU cores are in play.

BTW, the real RAM killer is linking. When building LLVM with debug on, the linker processes, of which there can be many in parallel, take up 4 GB resident each. (Ports does not turn debug on for LLVM builds.)
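A back-of-the-envelope way to pick a job count is to divide usable RAM by an assumed per-job peak. All three figures below (8 GB total, 1.5 GB reserved, 2 GB per job) are illustrative guesses, not measurements:

```shell
# Rough sizing sketch: how many build jobs fit in RAM at once?
# All three figures are assumptions; adjust them to your machine and workload.
ram_mb=8192        # total RAM
reserved_mb=1536   # leave room for the OS, ARC/buffer cache, etc.
per_job_mb=2048    # assumed peak per compile/link job (debug LLVM links need more)

jobs=$(( (ram_mb - reserved_mb) / per_job_mb ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "parallel jobs that fit: $jobs"
# prints: parallel jobs that fit: 3
```

With 4 GB resident per debug LLVM linker, the same arithmetic would allow only one job on an 8 GB machine, which matches the point above.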
 
Is your laptop a amd64? Just in case. And my laptop is over 14 year old. Installing firefox listed over 70 more before it. Use pkg.
Below is the information regarding my laptop, so I guess it is an older AMD. Well, I will be more careful in the future.

Code:
steven@RB_HP_FreeBSD_2:~ $ sysctl hw.model hw.machine hw.ncpu
hw.model: AMD A6-7310 APU with AMD Radeon R4 Graphics   
hw.machine: amd64
hw.ncpu: 4
 
I've used poudriere so long that I forget how things work when run natively and without modification. Does the ports tree define -j with the number of detected cores? If not, some performance could be gained there. Limiting it on big tasks like rust would be good, to keep swap use out of the picture.
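For native ports builds, parallelism can be capped in /etc/make.conf; `MAKE_JOBS_NUMBER` is the ports framework's knob for this. A minimal sketch (the values are illustrative, not recommendations):

```
# /etc/make.conf -- illustrative values, adjust for your hardware
MAKE_JOBS_NUMBER=2   # cap the -j level the ports framework passes to make
```

Setting this below the core count trades build time for a lower peak memory footprint, which is the relevant trade-off on an 8 GB machine.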

Not using swap on FreeBSD and running into low-memory scenarios can lead to bad performance and odd issues that require a reboot to resolve. I also thought that the kernel handles some memory organization/maintenance only with swap, and that it can't do that properly without it.

If you just want to play with building your own software, you can also limit the use of compiler optimizations, link-time optimization, etc., but any change to how things are built always opens up a chance that you find ways things break that others don't see. If you have access to another machine, you may be able to gain performance by using distcc to offload some work to it.

A weird performance balancing act is tuning ZFS compression and record size so more data fits in cache, while trying not to starve other CPU tasks by spending too much CPU on compression. How much it matters also depends on the speed of the drive.

If using Poudriere, you can build multiple ports at a time and set restrictions, in general or for specific ports, on how many CPU threads they get and how much RAM a RAM-backed filesystem may use. You can also set some ports to put less in a RAM filesystem (or not use one at all). Parallel port building does mean resources are needed for each of them, but RAM filesystems can help get through many smaller ports much quicker. Poudriere and Synth also perform the build in a clean environment and give you a set of packages to install afterwards; building from the ports tree directly means you have many things installed that the build may not need, and sometimes they alter how things build (it's a bug in the port if so, but it does happen).
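As a sketch of those knobs, a poudriere.conf might cap things like this (the variable names are real poudriere options, but the values are purely illustrative):

```
# /usr/local/etc/poudriere.conf -- illustrative values
PARALLEL_JOBS=2     # build at most two ports simultaneously
USE_TMPFS=wrkdir    # RAM-backed work directories for speed
TMPFS_LIMIT=4       # cap each builder's tmpfs size (GiB)
MAX_MEMORY=6        # cap memory available to each build (GiB)
```

On a RAM-constrained machine, lowering PARALLEL_JOBS and TMPFS_LIMIT (or turning USE_TMPFS off) is usually the first lever to pull.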

If you want to upgrade without a clean build environment, you will want to look into portupgrade or portmaster to take on that task. For remaining debris, you should make sure there are no "work" directories left in any port's directory in the tree.
 
As mentioned earlier, I have some limited experience with Linux, but I'm very new to FreeBSD, so I probably don't have a good understanding of everything that is occurring on my screen.

I decided that I wanted to use the Ports method to install some drivers for my graphics hardware, but before I did that, I decided to test out how the Ports process works by installing Firefox as a test.

I started out by issuing the following command as su: # pkg install git

Next I used the command below to download the necessary files: # git clone https://git.FreeBSD.org/ports.git /usr/ports

At this point everything seemed to be proceeding correctly, so next I entered the command below to install Firefox:
cd /usr/ports/www/firefox/ && make install clean

Well, it has been almost eighteen hours since I invoked the above command to install Firefox, and right now the information displayed in my terminal doesn't seem to indicate that this process is going to end anytime soon. I'm assuming this can't be normal, so I tried using Ctrl+C to stop it, but that didn't have any visible effect.

Should I allow this to continue for a few more hours? What would happen if I simply close the terminal before this process is complete? If I interrupt this process will my system become unbootable? Would running fsck possibly fix it afterwards?
Web rendering engines and cross-compilers take the longest to build. Especially if you don't have all the dependencies installed via pkg. It takes me 13 hours to build qt-webengine on my system.
 
`make clean` cleans up recursively for dependencies, while `make clean -DNOCLEANDEPENDS` skips the dependencies. If you've lost track of where extracted data may be, from working with different ports, changing options for the current port and its dependencies before cleaning (dependencies change with some port options), etc., then `find -x /usr/ports/ -type d -path "*/*/work" ! -path "*/distfiles/*" ! -path "*/packages/*"` can locate those directories, and appending `-delete` to that same command can remove them. If others know better optimizations for find, I'd be interested. If you have redefined where work directories go, then find and the other manual steps need to be reworked accordingly, and you probably have a work-directory-only area where you can safely delete everything without searching.
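Before aiming a find expression like that at /usr/ports, it can be rehearsed against a throwaway mock tree; the directory names below are invented purely for the demonstration:

```shell
# Build a disposable mock ports tree with two work dirs and a decoy
# under distfiles, then run the same pattern intended for the real tree.
tree=$(mktemp -d)
mkdir -p "$tree/www/firefox/work" "$tree/lang/rust/work" "$tree/distfiles/work"

# List category/port/work directories, skipping the distfiles area.
# Only once this output looks right would -delete be appended at the end.
find "$tree" -type d -path "*/*/work" ! -path "*/distfiles/*" | sort

rm -rf "$tree"
```

The listing prints the firefox and rust work directories but not the distfiles decoy, confirming the exclusion works before any deletion happens.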

Running `make clean` in /usr/ports is likely undesired, as it will enter each port's directory and clean it plus its dependencies recursively; any port that is depended on by other ports will be cleaned once for itself plus once for each dependent (or at least it did last time I accidentally ran it). It didn't define NOCLEANDEPENDS for some reason, so the run was very slow and wasteful.

I'm not a fan of running an unknown delete command from the internet, which is why you should evaluate the output first or take some step to make it interactive. If manually typing the `-delete`, be careful where you put it: find evaluates its expression left to right, so placing `-delete` earlier causes the deletion to happen before the later tests are applied, which is very likely not what you intend. Similarly, I usually run an `ls` command in place of an `rm` command, and after it runs in an acceptable way I edit the `ls` (and its parameters) into `rm` from the command history. I forget how, but shells usually have a way to rerun a new command with a previous command's arguments from history, which is more efficient.

If using ZFS and noticing slower seek activity, consider making a copy (block cloning and dedup need to be disabled during the copy) or removing and re-extracting/re-cloning the tree. This has more of an impact on magnetic drives, and after several updates to the tree have taken place over time. For git checkouts, I find that running `git gc` after several pulls have made a lot of changes seems helpful in keeping up performance; running it regularly (such as on every update) seemed less beneficial, but I haven't done proper measurements.

As a reminder, if you will not continue to build the port on your system in the future, you can run `pkg autoremove` to remove dependencies that were installed to build it but are not needed to run it. Running that periodically will also remove older unused dependencies if they change over time with updates.
 
Below is the information regarding my laptop, so I guess it is an older AMD. Well, I will be more careful in the future.

Code:
steven@RB_HP_FreeBSD_2:~ $ sysctl hw.model hw.machine hw.ncpu
hw.model: AMD A6-7310 APU with AMD Radeon R4 Graphics  
hw.machine: amd64
hw.ncpu: 4
Thanks. I had trouble with `pkg install firefox`. By chance I deleted w3m. That solved the problem.
 