Thanks. I guess that I must build with X11MON=on; however, I suspect that it will not work as expected …
Mark BROKEN with X11MON: the required libfam is not linked, which breaks the installation
- X11MON option is no longer broken; fixed in the Makefile.in patch
I'd like to counter that... Today the approach to building a 'small' (in terms of functionality) program is sadly more often than not: "let's use this framework, which needs that ecosystem with this interpreter and drags in those few hundred libraries and dependencies and needs exactly *this* one version of that graphical framework and exactly *that* version of this obscure library someone abandoned in 2005"
I'll agree with both of these and posit they are actually almost the same thing. From what I have seen, it is down to middleware.
That is a good point. Abstraction layers over abstraction layers is quite wasteful (and messy) and yet seems to be very common. Another fun thing about frameworks that I've run into? Writing abstraction layers around and over them.
How is it that hardware has become so sophisticated, yet software has grown so large that it requires all of that capability?
To put it into perspective, the worst machine I can find on Amazon right now has 2 GB of RAM. That's 64× the 32 MB the PS2 had.
> That is a good point. Abstraction layers over abstraction layers is quite wasteful (and messy) and yet seems to be very common.

That's probably because programs are written mostly for people to read. So sacrificing performance for the sake of much better maintainability in the future could be worth it.
In fact, one example is Bjarne Stroustrup in his book, where he writes a weird, incomplete abstraction layer over FLTK rather than using it directly (or safely).
> Even FreeBSD's own Ports tree?

Of course not, that's my point exactly.
> Of course not, that's my point exactly.

Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?
The OS should be managing the packages, that's its job. The ports tree should be the only package manager. What I dislike is when I install node.js via the ports, and then start using npm to manage a complete marketplace of scripts the OS knows nothing about.
This opens a big security hole for all kinds of malicious code, by the way.
> Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?

Good point! Still, doesn't any other lib do the same? Take x11/libinput as an example. It installs its own headers into /usr/local/include, shared objects into /usr/local/lib, etc.
> Portage in Gentoo, for example, has the concept of "slots". So for PHP you have a 7 slot and an 8 slot, and you can have multiple versions installed at the same time. I find this neat. You then need a means of switching between them at runtime.

FreeBSD does allow for different versions of Python to coexist - py27, py36, py37, py38, py39... And they have something similar for Ruby (ruby27, ruby30). Dunno if they have something similar for PHP.
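For readers unfamiliar with slots, a rough sketch of what this looks like on Gentoo (the exact package atoms and slot numbers here are illustrative assumptions, not taken from the thread):

```
# Install two PHP slots side by side via Portage's slot syntax
emerge dev-lang/php:7.4 dev-lang/php:8.1

# Switch which version the `php` CLI points at, per SAPI,
# using the eselect php module
eselect php set cli php8.1
```

Both versions stay installed; only the active symlink changes, which is the runtime switching mechanism mentioned above.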
Hardware is cheaper; software is expensive.
> I would point out a contradiction...

Depends on where you look. SSDs and RAM are actually dropping in price, while Adobe Creative Cloud is getting more and more expensive.
This is not actually true: making hardware is more expensive than writing software. It requires more workers, professionals, energy, materials, etc., whereas software can be written even on very old hardware.
However, bloated software helps sell newer hardware, so the deal is made. The real problem, especially with closed software, is that because programming time costs more than computing time, commercial software is burdened by legacy code swept under the rug where end users cannot see it (though they can feel the side effects).
> Depends on where you look. SSDs and RAM are actually dropping in price, while Adobe Creative Cloud is getting more and more expensive.

Depends on how you look at it. Making one piece of hardware is terribly expensive, so it must be mass-produced in the millions of units just for manufacturers to recover their cost of goods sold.
> FreeBSD does allow for different versions of Python to coexist - py27, py36, py37, py38, py39... And they have something similar for Ruby (ruby27, ruby30). Dunno if they have something similar for PHP.

Yes, but by means of putting the version number inside the package name. I don't like the idea of versioning the same product in multiple packages like that.
… monstrosity: when products start baking their versioning schema into the package names. For instance: php6, php7, php8,
… like nextcloud-php7-mysql105 … other package that uses a different mysql version and I can't have both simultaneously. …
> The problem I see with everything having its own package manager isn't so much the package manager itself. It is more that the various package managers don't limit themselves to just their stuff. A prime example would be pip/pypi. On pip you can easily install meson, cmake and ninja. Those 3 programs aren't Python libraries, nor the bindings, but the build agents themselves. That by itself wouldn't be so bad, but the package manager defaults to wanting to install system wide, overwriting what the system had installed. I know some upstreams are resorting to using pip and other language package managers over the system one, more because it's the only way to have a common base between various platforms/distros.

Yeah... Shouldn't the FreeBSD version of pip/pypi check for the presence of stuff like ninja, cmake and meson on the system? I would think that the ports system (not pkg) would be smart enough to pull them in as deps, instead of letting pip/pypi pull that in from Python repos. Sometimes, those language package managers really reinvent the wheel.
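A common mitigation for pip clobbering system-wide binaries is to only install inside a virtual environment. A minimal sketch (the function name is mine, not from the thread) that detects whether the current interpreter is isolated:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment,
    # while sys.base_prefix still points at the system Python.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if in_virtualenv():
    print("pip installs into:", sys.prefix)
else:
    print("warning: pip would install into the system prefix:", sys.prefix)
```

Some build tools run this kind of check and refuse to `pip install` outside a venv, precisely to avoid overwriting what the OS package manager installed.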
> "sometimes"...

Indeed. Their train of thought is always the same.
Old Fart C Programmer may have a point here (here's looking at you, Geezer).

> Indeed. Their train of thought is always the same.
1) I have a nifty, easy to use language
2) Ah, to do anything useful I need to call into a native binary
3) Ah, calling into native binaries isn't easy because my language doesn't have the ability to parse header files unlike C, C++ and Obj-C
4) I'll create a binding generator
5) Ah, turns out this is a big faff to use as part of a workflow
6) I'll create some tooling that fetches pre-generated bindings as dependencies
7) Holy shite, there are millions of GB worth of these things! Turns out my language really just glues together C libraries.
8) I'll develop a package manager to deal with them all.
Rust is doomed to fail in the same way unless they bolt on a small C compiler frontend and properly solve the issue around #3.
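Step 3 above - a language that can't parse C headers and therefore needs hand-written glue - can be illustrated with Python's ctypes, where every function signature has to be restated manually (the library-lookup pattern shown is a standard idiom, not something from the thread):

```python
import ctypes
import ctypes.util

# Locate and load the platform C library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# The signature must be declared by hand -- this is exactly the
# header information a binding generator would extract for us.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello, world"))  # length of a 12-byte string
```

Multiply this by every function in every C library a program touches, and the pull toward pre-generated bindings and a package manager to distribute them becomes obvious.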