FreeBSD is well situated

Every generation thinks it will be the last one because we humans have a difficult time coping with our own mortality.
Interesting thought. This may explain the hysteria instead of dealing reasonably with the problems.
However, I didn't say it was the end of humankind. I am convinced many will survive, and build something new, whatever it may look like. I said it's the end of modern civilization as we know it. That's a difference.
The facts currently show that we're running toward this doomsday scenario.
There is still some hope left; we may stop it. But hope alone ain't enough.
And looking at current politics doesn't get my hopes up.

Anyway, today I just read that it will get really hard from 2100 on. By then I won't be alive anymore, and I don't have any children. Which doesn't mean I accelerate climate change; on the contrary, I try my best to reduce my already small footprint even more, and to keep the issue in people's minds. On the other hand I see so many people, day by day, who have children and don't give a shit. So why should I bother?
 
So I've been using and working with UNIX and Linux for a long time...

I guess what I find confusing is that no one is stopping anyone from doing research. The original "UNIX teams" were all doing research projects - the Palo Alto Research Center (PARC), Bell Laboratories, even UC Berkeley with their Computer Systems Research Group (CSRG).

I imagined myself back in the 1990s, logged in using BSD 4.3 running on a DEC VAX, reading USENET with C News or B News in the "comp.os.bsd" newsgroup. Suddenly a new USENET post arrives from "someone" I had never heard of before, named "Linus Torvalds", who writes a USENET posting that reads:

My name is Linus Torvalds. I am going to write LINUX, and this is going to happen whether you like it or not! Deal with it!

I think I would have been like "... well .. okay! ... go for it dude !" And then pressed the "next" button to proceed to my next USENET news posting.

Not really sure where today's "antics" on research projects come from. In the past we "just did it" when it came to research and development, and we didn't make a big deal out of it. If you succeeded - great! Here's a gold star for you. In the meantime, stop dreaming about all the money you plan to make, what food is going to be served at the "release party", and what your photo op is going to look like.
 
Not really sure where today's "antics" on research projects come from. In the past we "just did it" when it came to research and development, and we didn't make a big deal out of it.

Funding. Profits. Today's research is not research; today's research is just step 0 towards profit.

Look at OpenAI: they called themselves "Open" so they could get the funding and scrape the open resources, and then they closed down into a proprietary system.

Basically, where I and the AI people part ways is that I don't believe in computer science research which barely touches current applied computer science. The people who work on AI do not know how computers work. Ask your LLM engineer or data scientist what network sockets are, how exactly they work, or what PCI Express is. They slap their shit onto the already standing technology foundation and then claim they're going to change the foundation itself, and when they're unable to do so (it is like a child saying he will build a space rocket), they move the goalposts towards changing the software platform, changing the hardware platform, changing everything, because "then our stuff will work".
 
we didn't make a big deal out of it
Me neither.
What bothers me is when something new is developed and people flap around hyping it. Annoying. In German we say: "Nothing is eaten as hot as it's cooked." But above all, telling others that everything we have now is obsolete garbage and that everyone now has to use this new thing is a nuisance; it's missionary, sometimes even religious.

Just like I let others be, I don't want to be urged into following the swarm just because everybody else is doing it now. I decide for myself whether something is useful to me, whether I need it, or want it.
That's why I use FreeBSD, and not Windows, or any turn-key Linux distro.
 
Not really sure where today's "antics" on research projects come from. In the past we "just did it"
Nowadays people are only concerned about the money as in profits. The guys who made Unix were just trying to make a better OS for the company's sake. No outside investors. No thoughts of profit/loss when it was sold. No concern with how fast they got it done. Just get it done right.

When I worked for a medical company, their top-selling product was based on old hardware. They asked me to design a machine on new hardware. They didn't give me a deadline to get it done. They never asked how things were going. In fact, the one time they did ask how long it would take, they doubled that timeline, which gave me plenty of time to... think!
 
Imagine a pocket calculator. Imagine you get a model that you know makes mistakes. ...
Q: What percentage of errors are you willing to accept and still use this calculator for your work?
That's a good question, and we have some data on that. I know of three cases where good-quality reliable computers gave wrong answers. The best known one is the Pentium FDIV bug, where certain floating point division operations gave wrong answers (I think they were slightly wrong). Then there was a problem in the FPU of the VAX 11/780, and I don't remember the details and can't find them on the web; I think the square root operation was occasionally completely wrong.

The last one was never published, and it is very funny. A former colleague was working at a big university computer center as a systems programmer and was asked to implement an accounting package for the newly released IBM VM/CMS (because early versions of VM had no functioning batch system, much less accounting for it). So what he did was write a system program (what today we would call kernel code) that interrupted the CPU a few times per second, all the time, and recorded which program was running. When his code interrupted the running program, it saved all the CPU registers, then did its memory accesses and calculations, and then restored all the CPU registers (like on a stack, except that the IBM 360/370 architecture doesn't use stacks). The problem was that he forgot to save/restore the floating point registers, and modified them! So any floating point calculation in flight had a chance of going wrong a few times a second. But a big computer does hundreds of thousands of floating point operations per second, and many don't keep intermediate results in the registers for long. He found his bug a long time later, and it was never disclosed to the users (and, I think, only to his manager much later). So for maybe half a year or a year, a small fraction of all floating-point calculations were totally wrong. AND NOBODY EVER NOTICED!
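To make the failure mode concrete, here is a minimal sketch in C rather than 360/370 assembler, with made-up names and a simulated machine state: the accounting sampler saves and restores the general-purpose registers but forgets the floating-point registers, just like in the story above.

Code:
#include <stdio.h>

/* Hypothetical, much-simplified machine state: a few general-purpose
   registers and a few floating-point registers. */
struct cpu_state {
    long   gpr[4];
    double fpr[4];
};

static struct cpu_state cpu;      /* state of the interrupted user program */

/* The accounting sampler, fired "a few times per second".
   It saves and restores the GPRs correctly ... */
static void accounting_tick(void)
{
    long saved_gpr[4];
    for (int i = 0; i < 4; i++)
        saved_gpr[i] = cpu.gpr[i];

    cpu.fpr[0] = 0.0;             /* BUG: the bookkeeping clobbers an FPR
                                     that was never saved */

    for (int i = 0; i < 4; i++)
        cpu.gpr[i] = saved_gpr[i];
    /* ... but cpu.fpr[] is never restored. */
}

int main(void)
{
    /* The user program is holding an intermediate result in fpr[0]. */
    cpu.fpr[0] = 3.141592653589793;

    accounting_tick();            /* the interrupt happens to land here */

    /* The user program carries on calculating with a corrupted value. */
    printf("intermediate result is now %f\n", cpu.fpr[0]);
    return 0;
}

Run it and the pi the "user program" was holding comes back as 0.000000 - exactly the kind of silent corruption described above, only here it happens once instead of a few times a second.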
 
The best known one is the Pentium FDIV bug, where certain floating point division operations gave wrong answers (I think they were slightly wrong).
That one was very famous, I remember. If I recall correctly, it was only slightly wrong. But as you know, errors add up quickly in larger calculations.

😂 This story is indeed very funny. And it shows how even errors can go unnoticed.
The interesting question is why it stayed unnoticed.
All I recall is that some FPU operations were not used; people programmed their own, because the ones provided didn't do what was needed.
 
That one was very famous, I remember. If I recall correctly, it was only slightly wrong. But as you know, errors add up quickly in larger calculations.

😂 This story is indeed very funny. And it shows how even errors can go unnoticed.
The interesting question is why it stayed unnoticed.
All I recall is that some FPU operations were not used; people programmed their own, because the ones provided didn't do what was needed.
If I know it makes mistakes I use it accordingly.
Creators are fixing the models constantly, and they are getting better, much better, through model-tuning methods and/or through agentic processes, i.e. self-testing feedback.
When you train a model very specifically, e.g. on OS instructions, it will be more precise there.

However, it seems the hallucinations etc. are inherent. Despite the hype, it is not a thinking machine; it is a mimicking machine.

So let's use it on the non-critical stuff. Like the UI - that is basically what an LLM is: a translator from human to computer. Fuzzy, but good enough for most cases.

And that was the scope of my original post - an agentic UI.

When the LLM has to translate fuzzy human input into code, it seems to me that it will make fewer mistakes on a fully POSIX-compliant system like FreeBSD than when trying to create a meter-long PowerShell script just to read a processor type. On Linux it will make mistakes because of the multiverse of ways distributions are glued together. And the Mac will have no public training data.
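For what it's worth, the "read the processor type" case really is tiny on FreeBSD: sysctl -n hw.model on the command line, or a few lines of C through the documented sysctl(3) interface. A minimal sketch (FreeBSD-specific, error handling kept to the bare minimum):

Code:
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

/* Print the CPU model string - the same value that
   "sysctl -n hw.model" prints in a FreeBSD shell. */
int main(void)
{
    char   model[256];
    size_t len = sizeof(model);

    if (sysctlbyname("hw.model", model, &len, NULL, 0) == -1) {
        perror("sysctlbyname");
        return 1;
    }
    printf("%s\n", model);
    return 0;
}

Whether an LLM actually makes fewer mistakes on such a target is another question, but the target itself is certainly small and stable.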
 
Like the UI - that is basically what an LLM is: a translator from human to computer.
I've seen and had way more than enough of this "computers trying to anticipate/foresee/think for me, helping me not to make mistakes, knowing better than me what I'd better do, making suggestions for me" BS.

Doesn't matter if it was "Clippy" (one of the worst), some telephone bot "helping" customers get the quickest, best service, with or without speech recognition, "support" chatbots, auto word completion, auto correction, auto BS creating "my favorite lists" for me, reminders about things I'd better remove from the desktop, or just FAQtnaea (frequently asked questions that nobody actually ever asked).
Now we get this crap in cars, too:
Everything is flashing, blinking, beeping, chiming, ringing, honking, burping, farting... for every bit of BS you do or don't do. This drives me nuts! I need silence in the car; I need to concentrate on traffic and the road to drive relaxed, to avoid accidents, and not to get stressed by alarm drills every few seconds over any meaningless BS.
Recently my wife and I were driving a rented car on a road I've driven a thousand times, and we almost had three accidents in those 100 km - because of the "accident avoidance systems"!
Imagine a road through the mountains, full of bends. Oncoming cars cut the corners, coming into my lane. I want to steer further right to avoid a head-on collision, but this shit of a lane-keeping assistant counteracts it, simply increasing the steering force and not letting me steer a bit more to the right. So I needed brute force to turn the wheel to avoid an accident, and then almost collided with the crash barrier on the right. Three times in 100 km! Where is the point in building in systems I have to pay extra for, which I can neither refuse nor switch off, and which increase the chance of an accident?
While on this tour my wife and I were also trying to talk to each other. Some people do this, you know. Not just staring numbly, zombie-like, at their tamagotchis all the time, but having a conversation - more than exchanging two monosyllabic words about basic bodily needs. Every now and then this shit of an on-board computer butted in, interrupting the adults' conversation with some unasked-for, useless BS, like whether we want to register now, or whether we know there's a McDonald's restaurant coming up and whether we want to stop there for a break. "NO!! STF-UP you [P'§%$O($&'F''*§S'§%$/]!!" 🤬

Since my neighbor got his new car, he cannot back into his garage parking spot anymore like he did for twenty years. Because this shit not only warns him with lots of flashy and noisy alarms about the post he has to pass within 40 cm of. No, this fancy new car actually stops every time, and then he has to wait ten seconds before his car finally lets him park.
That's not a feature. That's terror.

But back to computers:
In forty years I have deleted a zillion files from storage drives. In those forty years there were not even a dozen situations where I afterwards realized I had deleted a file by mistake. I never fished them out of any "trash can", because I can't: I always empty those immediately when I "delete" a file. So on my computers deleted files were always completely gone, because when I want to delete a file, I want to delete that file, not move it to another directory. If I wanted that, I'd say "move", not "delete".
To protect me from maybe mistakenly deleting a single file, I have to double-approve millions of file deletions?
I am the master over my slave the machine.
When I say "delete this file/directory/filesystem" I want it to be deleted, not to have a discussion about it, particularly not with a machine.
Even if I say rm -Rf /usr/* that has to be done - unquestioned.

Since I don't use any DE anymore, but simply a WM, I am finally released from this useless shit.
This was one of the reasons for me to dump DEs and use simple WMs: I not only don't want a trash can, I want there to be none at all, and I couldn't remove it from the DEs I used.
I do backups and have backups to recover from. That's what backups are for. And when 90% of all computer users don't do backups, or don't even know what they are, then that's their problem - not mine. Why do I also have to be put in a rubber room because 90% feel more secure in one, or don't know any better?

I don't know, but it seems to me that >90% of all people must be such incredibly stupid imbeciles that they need to be guarded and protected from themselves continuously, 24/7, or they'll binge-drink toilet cleaner if there is nobody to tell them not to.
But what I know is that I don't drink toilet cleaner. And I don't need anybody to tell me that again every five minutes. I was told so once, when I was four. I cannot speak for everybody, but for me that was enough to keep me from drinking toilet cleaner for the rest of my life. And I also don't need anybody's fancy idea of telling everybody every five minutes not to drink that stuff, because otherwise one of 8 billion might actually drink it.
You see, I am an adult, not yet suffering from dementia, so I am fully capable not only of making my own decisions, but of taking full responsibility for my own actions - neither any software company nor any car manufacturer takes it for anybody anyway.
I am no child who permanently needs somebody by my side to guard me from all the stupid things imaginable that I might do. Especially not a machine, which is less than any human, so it not only cannot read my mind, it cannot even think.

That's why I'm using FreeBSD, and not Windows anymore.

If you like to make sashimi, you don't want a speaker continuously telling you to watch out not to hurt yourself, and especially not somebody taking away that knife of yours because it's sharp and giving you some blunt plastic toy knife instead.
You don't have to make sashimi yourself. There are pros who will do it for you.
Cutting your fingers is part of using sharp knives. That's why I keep plasters ready to hand.
If you worry about cutting yourself, that's OK. Then don't use a sharp knife.
But there are others who can, who want to, and who are not afraid of cutting themselves.
Just let them keep their knives as sharp as they are, please.
 
Now this is getting interesting. I wonder how my handsome version is doing in the universe where I'm handsome (if such a universe exists).

See? That's why you won't become part of this man's future: he says "multiverse", and of course it exists - haven't you seen "Rick & Morty"?
My theory is that the OP came from one of those parallel universes, a more advanced one of course.
Seriously... 🤦‍♂️
But it is funny to read things like this post.
 
If I know it makes mistakes I use it accordingly.
Creators are fixing the models constantly, and they are getting better, much better, through model-tuning methods and/or through agentic processes, i.e. self-testing feedback.
When you train a model very specifically, e.g. on OS instructions, it will be more precise there.

However, it seems the hallucinations etc. are inherent. Despite the hype, it is not a thinking machine; it is a mimicking machine.

So let's use it on the non-critical stuff. Like the UI - that is basically what an LLM is: a translator from human to computer. Fuzzy, but good enough for most cases.

And that was the scope of my original post - an agentic UI.

When the LLM has to translate fuzzy human input into code, it seems to me that it will make fewer mistakes on a fully POSIX-compliant system like FreeBSD than when trying to create a meter-long PowerShell script just to read a processor type. On Linux it will make mistakes because of the multiverse of ways distributions are glued together. And the Mac will have no public training data.

So take away the drugs from these advanced machines, and the whisky too, just in case.
 
We need a minimum age limit for computer use... I'm thinking 40?
You need to have seen this at least once on bare metal to pass an experience check :cool:

[Attached image: Microsoft_Scandisk_(Windows_98).png]
 