(Suggestion) Rating System

Debian and Devuan have a Popularity Contest, and there too we can see which DE or browser is most popular. Still, we start out using one thing and then settle on another. Instead of "rating this blue or that blue", discussing pros and cons, configuration, maintainer attention to issues, PRs, etc. would be more useful for decision making. Choosing a database, a language, and the like needs more in-depth study and consultation anyway.
 
BSD OSes, ports, and packages are geared towards research-oriented users. Such a feature is not a priority while there are not yet enough hands to keep everything up to date against the growing need for new functionality.


It could, however, be implemented as a stats package like the one that gathers the number of (Free)BSD OS users.
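For reference, opting in to that existing package is already a two-liner; a minimal sketch, assuming sysutils/bsdstats and the periodic(8) knob as I recall it from the port's install message (verify the knob name locally):

    # Install the BSDstats reporting script and enable its monthly periodic(8) run
    # (knob name from memory of the port's pkg-message -- check after installing):
    pkg install bsdstats
    echo 'monthly_statistics_enable="YES"' >> /etc/periodic.conf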
 
  • "Rating the software": useless. Rating systems work in commercial markets, with all the drawbacks that were already discussed. They won't ever work for opensource software. Usecases are different, and users are free to try out whatever might solve their problem best and quickly decide. Upstream authors normally don't have "as many users as possible" as their motivation (unlike with commercial software), so there's no incentive in ratings either.
  • "Rating the ports": useless. FreeBSD ports already have high quality standards, the tools to check them and committers with the necessary mindset, including reviews for many non-trivial changes. This can't avoid quality problems 100%, but if there are any, bugzilla is the way to go and much more useful than some rating system.
  • "Download counts": useless. It would only ever measure downloads of binary packages. Some ports can't be built as such, they'd never see a "download". Some people would dislike it (avoid unnecessary data collection) and either avoid it (e.g. by building their own packages) or just move away. Some packages might be installed a lot, but removed a lot as well, as many people find it's not what they expected -- you will never know. Some packages are only useful to very few people, so will have a very low download count, but for them, they are extremely valuable. The list probably goes on, just don't do such a thing.
 
Some ports can't be built as such, they'd never see a "download". Some people would dislike it (avoid unnecessary data collection) and either avoid it (e.g. by building their own packages) or just move away.

So?

Some packages might be installed a lot, but removed a lot as well, as many people find it's not what they expected -- you will never know.

Do we even care? I don't.

The list probably goes on, just don't do such a thing.

Yeah, better use absolutely worthless bsdstats.
 
  • "Rating the software": useless. Rating systems work in commercial markets, with all the drawbacks that were already discussed. They won't ever work for opensource software. Usecases are different, and users are free to try out whatever might solve their problem best and quickly decide. Upstream authors normally don't have "as many users as possible" as their motivation (unlike with commercial software), so there's no incentive in ratings either.
  • "Rating the ports": useless. FreeBSD ports already have high quality standards, the tools to check them and committers with the necessary mindset, including reviews for many non-trivial changes. This can't avoid quality problems 100%, but if there are any, bugzilla is the way to go and much more useful than some rating system.
  • "Download counts": useless. It would only ever measure downloads of binary packages. Some ports can't be built as such, they'd never see a "download". Some people would dislike it (avoid unnecessary data collection) and either avoid it (e.g. by building their own packages) or just move away. Some packages might be installed a lot, but removed a lot as well, as many people find it's not what they expected -- you will never know. Some packages are only useful to very few people, so will have a very low download count, but for them, they are extremely valuable. The list probably goes on, just don't do such a thing.

How about statistics of manually installed ports (excluding pre-installed) reported by pkg query --all?
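Something along these lines, as a minimal sketch -- pkg flags dependency-only installs as "automatic" (%a), so the manually requested ones can be filtered out:

    # List only packages the user explicitly asked for, skipping automatic
    # dependency installs; %a is 1 for automatic installs, 0 for manual ones.
    pkg query -e '%a == 0' '%n-%v'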
 
  • "Rating the software": useless. Rating systems work in commercial markets, with all the drawbacks that were already discussed. They won't ever work for opensource software. Usecases are different, and users are free to try out whatever might solve their problem best and quickly decide. Upstream authors normally don't have "as many users as possible" as their motivation (unlike with commercial software), so there's no incentive in ratings either.
  • "Rating the ports": useless. FreeBSD ports already have high quality standards, the tools to check them and committers with the necessary mindset, including reviews for many non-trivial changes. This can't avoid quality problems 100%, but if there are any, bugzilla is the way to go and much more useful than some rating system.
  • "Download counts": useless. It would only ever measure downloads of binary packages. Some ports can't be built as such, they'd never see a "download". Some people would dislike it (avoid unnecessary data collection) and either avoid it (e.g. by building their own packages) or just move away. Some packages might be installed a lot, but removed a lot as well, as many people find it's not what they expected -- you will never know. Some packages are only useful to very few people, so will have a very low download count, but for them, they are extremely valuable. The list probably goes on, just don't do such a thing.
Novice users find themselves in a thick fog when confronted with BSD's ports tree, whose categorization is, well, sub-optimal -- to say it politely. PC-BSD/TrueOS added a more user-friendly layer on top of that, but that's history now.
While there is some truth in your arguments, ratings and usage statistics are useful for novice users if they are accompanied by short, commonly accepted notes and reviews, e.g.
  • The Mate desktop is a complete, mature, and nearly full-featured desktop environment for UNIX-like systems. It is a fork of GNOME 2.x (don't nail me on the exact numbers here, I'm using KDE)
  • LXQt is a new attempt to provide a lightweight desktop for UNIX-like systems. It is neither complete nor rich in features yet, due to its young age.
With such guidance, a user can make his (*) own decision, where ratings & download/usage numbers are one component among others like use cases, the number of open security issues, and so on. Maybe a good solution would be to have a rating for such short notes & reviews?

When confronted with a new task and the decision to select a piece of software to help with it, even an experienced user becomes a novice. You have some advantage, but as you said yourself, it's trial & error. If we had some rating system accompanied by notes & reviews, tightly coupled to the ports infrastructure, that would be a really useful enhancement IMHO.
* "A user" is grammatically male (?), so I'm using "he" but that includes female/trans/unspecified users, as well.
 
What you suggest as "short reviews" here should be in the pkg-descr of the ports, see for example x11/mate. If you think the text there doesn't give enough "guidance", you can always report that on bugzilla. Apart from that, you'll find loads of in-depth reviews for more widespread software anywhere on the Web, and I don't see any sane way to integrate that with FreeBSD. Use cases differ, and there are a lot of opinions involved. The mature user can do his own research and come to an informed decision, based on his requirements.
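For reading those descriptions without leaving the terminal, a minimal sketch, assuming a checked-out ports tree (and, for the second command, that the package name matches the port):

    # The "short review" of a port is its pkg-descr file in the ports tree:
    cat /usr/ports/x11/mate/pkg-descr
    # For an already-installed package, pkg stores the same text; %e prints it:
    pkg query '%e' mate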

As for download counts / statistics of any kind, I still fail to see what additional value they would give. Apart from the problems of collecting them reliably, a lot of valid arguments for how they can be harmful were already made in this thread.

* "A user" is grammatically male (?), so I'm using "he" but that includes female users, as well.
*giggle*
 
How about statistics of manually installed ports (excluding pre-installed) reported by pkg query --all?
That would be almost an equivalent of the Debian "Popularity Contest". It's used there to decide which packages should be pre-installed. FreeBSD does not pre-install packages.

I don't think the result of this would have any meaning. It would only if a sysadmin were as likely to share the output from a server as, say, a KDE home user -- and that won't be the case. The one doesn't want to share any information about the system (and therefore doesn't participate), and the other participates just because it's cool to be there. Different groups will participate at different rates -- the result would be distorted.

What should happen if a port has only a few installations? Can it be removed? Is it less important? I think not.

And if the result is "the masses use KDE"… is KDE that great, or did the masses just look at what the masses used so far? I don't believe in "swarm intelligence" (anymore?).

And finally: the software I developed does exactly what I want. If a survey told me that users want XYZ to be different... why should I do that? I offer my software because it might be useful to others -- but if it isn't, I don't care: I programmed it for myself, for a good reason, without any users. But if my intention were "fishing for compliments", then a survey would be great. This is a mistake made by many Linuxers: the belief that "we" face down companies like Microsoft together. But this collective of developers with exactly this common goal does not exist; the mass of open-source developers simply don't care about Windows. There is no common strategy for the future; rather, whatever is needed or desired gets developed. And no statistic influences this. So it would be a statistic without a use… why create it then?
 
While a developer might start writing a piece of software solely for his own needs, s/he might also be happy to adjust it to the similar, but slightly different, needs of others. And then a rating system is a way to provide feedback, which in turn is known to be an important part of the development life cycle. The existence of bug-tracking infrastructure and the history of Open Source software both show that many developers do care about the quality & acceptance of their work.

Second, many Open Source products became so good & widely deployed (some are de-facto industry standards) that their creators spun off commercial support companies. Then the bare usage numbers do have some importance.
 
And then a rating system is a way to provide feedback, which in turn is known to be an important part of the development life cycle. The existence of bug-tracking infrastructure and the history of Open Source software both show that many developers do care about the quality & acceptance of their work.
I agree, partially. The thing is, a rating system will almost never give you what you're really interested in. You don't want to see "4 of 5 stars", because what will you do with that? You want to see written comments, to learn how your tool works for others, in which scenarios it is used, and what the issues with it might be. Seriously, how would you improve anything based on some numeric score people give, without knowing the reasons?
 
[...] Seriously, how would you improve anything based on some numeric score people give, without knowing the reasons?
That is the crux! As I plan to become the maintainer of GNU Prolog (it's not in the ports anymore), I suggest: let's build an AI for that! With automated reasoning, this AI gives you suggestions based on your use case, hardware, other requirements, preferences (e.g. you already have a gtk vs. qt desktop), and so on. You ask "which port does xyz", and the AI answers "ports A, B, and C, where B seems to fit your requirements, preferences & environment best of all, as it's the only one to integrate with Kerberos". Seriously, bare numbers are the basis of any heuristic.
 
While a developer might start writing a piece of software solely for his own needs, s/he might also be happy to adjust that to similar, but slightly different needs of others. And then a rating system is a way to provide feedback.
Of course I phrased that too harshly, but the type of feedback you describe can only happen in a dialogue -- you have to look closely at what the user really wants, what the problem behind it is. If the feedback is just tossed off, I get a line for my garbage dump that says "XYZ does it this way and that way, why doesn't your software?". Yes, because I was dissatisfied with XYZ and deliberately wanted exactly that to be different… And anyway: such a statistic is less about details and much more about the broad direction. And that doesn't fit our ports system.
 
I tried to describe one problem with the ports tree above: a novice is nearly helpless with the question "how do I find the software I want/need?", because
  • the ports tree is restricted to one level of categories - IMHO it would be a huge step forward to have a subdir for libraries in each category
  • there are no attributes like e.g. CLI vs. GUI software, small helper tool/full-featured software-system,...
  • you have to grep (and guess) inter-operability from the runtime depends (an experienced user can, but not a novice; the search sketch after this list shows the current state of the art)
  • you are slain by the sheer number of ports in each category
  • you search for monitor but the port's description uses the term screen
  • developers are interested in libraries/helper tools, users are not
  • other issues, for which I do not find the right words now
A good rating system could help. Of course, most of us agree that neither bare download or usage counts nor a popularity contest alone is a good solution to this problem. But they can be part of it.
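As for the current state of the art: the ports framework does ship a crude search already; a minimal sketch of what a novice can do today, assuming a ports tree under /usr/ports:

    # Fetch the pre-built package index once, then search it; "key" matches
    # name, comment, dependencies and more, "name" matches the name only.
    cd /usr/ports && make fetchindex
    make search key='image viewer' | less
    make search name=nomacs

It illustrates the problem nicely: you mostly find what you can already name.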
 
You mean this problem: How is somebody supposed to find out that "nomacs" could be the wanted picture viewer…

You're right, that's a problem. But a rating system isn't really powerful for this; I think other approaches make much more sense. In my opinion, a successful example is wiki.ubuntuusers.de (which unfortunately only exists in German). There is e.g. a page for graphics, which lists all common or available tools in sub-groups like viewing, editing, scanning etc., each with a short description and often a link to a corresponding detail page for the program, including screenshots and common problems. That helps. But if I search with "pkg search image" I get a lot of garbage, but no nomacs. In the directory /usr/ports/graphics there are over a thousand entries, and I will hardly work my way through to a "nomacs" manually.
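To be fair, pkg can look beyond package names; a minimal sketch (the -c and -D switches match the comment and description fields, which is where a phrase like "image viewer" is more likely to appear):

    # Search the one-line comments instead of package names:
    pkg search -c 'image viewer'
    # Or the long descriptions (noisier, but catches more candidates):
    pkg search -D 'image viewer'

That has a better chance of surfacing nomacs than a name search, though the novice still has to guess the right phrase.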

Creating a site like wiki.ubuntuusers.de is a lot of work. And sure, one could automate a little of the preparation with content from /usr/ports/*/*/Makefile and /usr/ports/*/*/pkg-descr, but without further indicators (and screenshots) no whole website of the above level is possible ("editor is for terminal/X11, relevant for editors, devel and www, is an end-user program", etc.). Indicators instead of directories might be more useful for searching for ports -- an inflexible directory structure hardly does justice to today's software.
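Such a semi-automated first pass might look like this; a rough sketch, using graphics as the example category and assuming a ports tree under /usr/ports:

    # Emit one line per port: origin, one-line comment, first line of pkg-descr.
    for d in /usr/ports/graphics/*/; do
      origin=${d#/usr/ports/}; origin=${origin%/}
      comment=$(make -C "$d" -V COMMENT 2>/dev/null)
      descr=$(head -n 1 "$d/pkg-descr" 2>/dev/null)
      printf '%s\t%s\t%s\n' "$origin" "$comment" "$descr"
    done

That gets you a raw catalogue; the grouping, screenshots, and "common problems" pages are exactly the manual work that remains.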

The handbook also tries to be a little "solution-oriented": a page for text editors exists, as does one for web browsers, and programs are listed. However, help rather than criticism would be more advisable here…
 
Obviously a rating system would be controversial. Another downside is that new and little-used -- but good -- programs would get pushed to the bottom of any list and struggle to gain traction. No one is going to be happy with anything anyone comes up with.

I suggest someone put together a web site to do this. Categories with top five installed packages. The rest. And "new and upcoming" or "most talked about". This would require work.
 
Unless the data from the rating system is actionable, it simply doesn't seem worth it.

That said, I do find the debian popcon data fairly interesting. For example the following graph: https://qa.debian.org/popcon.php?package=nautilus

[I advise you to stop reading now. Pointless (and weak) analysis of some data follows].

The purple line shows a great increase until around 2013 (when Debian removed Gnome and replaced it with a new desktop environment called Gnome 3). This line then shows a fairly obvious decline from then on! The data also matches up because of the steep rise of the "extension" packages from 2013 which came with Gnome 3 (to make up for many of its design defects).

You can also see Pluma (Part of the Gnome 2 fork) started to rise around 2013 when it was created (https://qa.debian.org/popcon.php?package=pluma).
  • Gnome 3 has 26%
  • Xfce 4 has 15%
  • KDE 4 has 11%
  • Mate has 6%
Before Gnome was broken, I imagine there was a *much* larger gap between it and Xfce. I would love to see the historic data, as in what percentage popularity Gnome 2 had before it was killed. What about KDE 3.5, when it used to be competitive (i.e. before it was killed and replaced with something called KDE 4)?
 
[...] However, help instead of criticism would here be more advisable…
Before you can help, criticism is the way to find out the weaknesses of the current status. Even when it's not constructive criticism, it is helpful for finding out what we want. Then we work out what we want in a discussion like this. IMHO that's perfectly valid. Besides, much of the criticism in this thread is constructive. BTW, I was not only joking when I mentioned an AI for this. Let humans fill in attributes for software they reviewed, integrate a good natural-language-processing tool (available e.g. for Python and for Prolog), and then AI techniques can act as an expert guide on top of those human presets.

Obviously a rating system would be controversial. Another downside is that new and little-used -- but good -- programs would get pushed to the bottom of any list and struggle to gain traction. No one is going to be happy with anything anyone comes up with.
Yes, this effect exists. Nevertheless, it has not prevented numerous young software projects from becoming very popular in a very short period of time. We should not underestimate human curiosity.
I suggest someone put together a web site to do this. Categories with top five installed packages. The rest. And "new and upcoming" or "most talked about". This would require work.
Cognitive science tells us that the number of items humans can easily keep track of is 5-9 (7±2).
I like this approach, it's similar to the ARC, which is a good method. When it's a wiki, this work is shared by many.
 
Unless the data from the rating system is actionable, it simply doesn't seem worth it.

Exactly.

That said, I do find the debian popcon data fairly interesting. For example the following graph: https://qa.debian.org/popcon.php?package=nautilus

I've had a look at the graphs of a few other packages and they all look the same: a nice rise, a more or less long plateau, and the beginning of a slow fall.

I'm not really surprised, as all of them reflect the attractiveness of Debian-based distributions: with the increasing adoption of cloud technologies, applications run more and more often in containers, a use case for which Alpine is generally preferred to distributions such as Debian or CentOS.

Furthermore, for the popcon stats to be meaningful, graphs should not reflect the installations of a single package and its dependencies, but the installations of all packages serving the same purpose (e.g. all text editors, all RDBMS, etc). After all, if you don't show the relative scores of ALL competitors, you cannot call this a popularity CONTEST. ;) Hence the importance of the package taxonomy, as outlined earlier by mjollnir.

An additional issue is the representativeness of the sample. Unless the installation counter is installed and active by default, the reliability of such statistics is questionable. For instance, I use Void Linux, which has a similar application called PopCorn. When you go to its statistics page, you see only approx. 150 unique installations, a figure completely inconsistent with Void's increasing popularity. There's no significant difference between making decisions based on such a sample and making them from the appearance of a fish's entrails or the flight of crows, as the Romans did in antiquity.
 
After all, if you don't show the relative scores of ALL competitors, you cannot call this a popularity CONTEST. ;) Hence the importance of the package taxonomy, as outlined earlier by mjollnir.

Heh, I did warn it was a "weak" analysis ;).

My reasoning was that if I searched for packages that come as part of a desktop environment, that would be fairly representative (basically, I couldn't find meta-packages). So I compared pluma (Mate text editor), dolphin (KDE file manager), Thunar (Xfce FM), and Nautilus (Gnome 3 FM). These are all guaranteed to be installed if their respective DE is installed (because each is part of its environment).

Void may have a smaller collection of data, but I imagine it will show a similar order (and percentage) of desktop environment popularity. If it doesn't, I would assume it is because some Void users are more technically savvy and go for a custom environment.

But other than curiosity, this data isn't particularly useful. Debian's main reason for having it is to decide on which DVD they should put the most popular packages, which is admirable. However, this isn't relevant for almost all other distributions (they are very much tied to the internet regardless).

Though I do feel that Linux distributions should pool together as much data as they can to calculate an estimation of users. It might help when negotiating driver support (or even source access) with manufacturers.
 
..., I suggest: let's build an AI for that! With automated reasoning, this AI gives you some suggestions based on your use-case, hardware, other ...
You still believe in AI, seriously? This buzzword regularly pops up when people try to hide behind algorithms, creating a black-box digital dominion after all.

Also, to enable "automated reasoning" you first need to give away your data and metadata.

The fin de siècle is likely to reach its final stage when people become so comfortable that they are no longer able or willing to meet their needs without any automated assistance.
 
A rating system only works for Android phones, and to influence users, who sooner or later stop thinking and no longer choose what they actually want.

Here it's the same: sooner or later the user stops doing things like searching for, using, and testing programs to find what works for him. To put it simply: "this port is number 1", "don't think", "trust me", "don't search", "if you don't like number 1, go for number 2".

Something like this happens when I search on Google for an alternative to, for example, the Terminator terminal: the first results are advertising, and most of the pages are crap based on "user ratings".
 
when I search on Google ... and most of the pages are crap based on "user ratings"
I remember the term "crap in, crap out", based on big numbers of not-so-bright individuals.
BTW, I do not use Google anymore because I do not want to feed the black-box algorithms of a global monopolist.
 