Depressed

I think it's a mistake to think of them as "boss robots." It is, I believe, more the case that the "boss" is in many instances becoming obsolete. No boss will be needed. The systems infrastructure will organically accomplish all HR goals, as well as many others.
Does that include handling of sexual harassment complaints? Or responding to labor laws set by local government? Or catching embezzlement by senior management? Or enforcing a code of conduct for rank-and-file employees?
 
Does that include handling of sexual harassment complaints? Or responding to labor laws set by local government? Or catching embezzlement by senior management? Or enforcing a code of conduct for rank-and-file employees?
Yes, I'm curious to know what difficulty you think an autogenerated program will have with any of these things.
 
Yes, I'm curious to know what difficulty you think an autogenerated program will have with any of these things.
I'm more concerned about the blind trust, the ability and willingness to learn from mistakes, and what can be done to rectify the errors that a machine makes. An AI has no capability of saying, "Sorry, I messed up; I will fix the error and pay back the damages."
 
Autogenerated software can most certainly do all of those things many times more efficiently than any person, especially the slimy type that tends to end up in that position. There is nothing remarkable about those people or what they do. I can think of few things more programmable than "corpospeak."

More importantly, though, autogenerated software will be able to deploy solutions that bypass the points where those problems emerge to begin with.

You are also making a mistake in thinking about it as a question of trust.

It's a question of money.

Who will pay a hundred million dollars, or a thousand million, for something they can get done, and done better, for ten thousand?

The answer is nobody. Except maybe government agencies, and maybe universities, which are a kind of pseudo-government agency.
 
People experienced the same feeling of shock you are experiencing now when it slowly crept up on the world of mathematics that statistics is a many times more powerful field than deterministic arithmetic.

Shoot, Russians still don't like it.

It goes back to ancient philosophical debates about whether the world is an idea you can figure out, or a work of God beyond your comprehension. If #1, deterministic arithmetic and best practices. If #2, statistical analysis and Amazon bots.
 
The very first thing that a good statistics course will teach you is that correlation does NOT imply causation.

You can correlate the number of tomatoes grown in Australia to how much money your boss in Germany will save by hiring a local teenager who can "talk" to a robot. And your corporate types will blindly swallow that as the ultimate truth, so holy and not to be questioned, just something that's convenient to program into the local cloud that their robots drink from and make decisions afterwards.
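
To make it concrete, here's a toy Python sketch (every number in it is invented): two series that have nothing to do with each other still come out strongly correlated, just because both happen to trend over time.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(30)

# Two quantities with no causal link whatsoever; the only thing they share
# is an upward drift over time. Units and coefficients are pure fiction.
tomatoes_au = 50 + 2.0 * years + rng.normal(0, 5, size=30)   # "tonnes grown in Australia"
boss_savings = 10 + 0.5 * years + rng.normal(0, 2, size=30)  # "money saved in Germany"

r = np.corrcoef(tomatoes_au, boss_savings)[0, 1]
print(f"Pearson r = {r:.2f}")  # lands above 0.9: strong correlation, zero causation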

Even money is ultimately about trust. There's a reason that a bank needs to be THE place mediating a loan, especially a large one. Also, one gets hired with the expectation of trusting the boss to pay up for the labor provided...

Besides, even statistics is built on basic, deterministic arithmetic. If you don't know that 2+2=4, most of statistics is not gonna make any sense even if it hits you on the nose.
 
Who cares about causation? Correlation is all you need. Statistics doesn't ever suggest causation; it just attempts to predict correlation on the basis of past results. This has turned out to be infinitely more precise than trying to determine causation, which in the final analysis might be a superstitious myth, and is the reason quantum mechanics is the unquestioned status quo of cutting-edge science.

An exec doesn't care about causation or correlation. He can just read a balance sheet.

About the trust for payments: if I could tell you the number of times I got stiffed by managers. Whereas when I was working for more automated enterprises, my faith in correct payment schedules was absolute, and in every case warranted. It's just a computer program that autopays you; what more can you trust?

About 2+2=4: quantum mechanics and statistical analysis would rather pose it as what is likeliest given 2+2. It's not so much about discrete units as about ranges. Everything is a range; the point doesn't exist. Which, if you think about it, is closer to reality. It even jibes with the way computers think, as anybody who has had floating point problems can attest, or who has tried to program an operation that gives you a square root. The answer is: approx. this. The approx. always bears out.
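
If you want to see it for yourself, a few lines of Python (standard floating point behavior, nothing exotic):

import math

print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False

# Newton's method for sqrt(2): every step narrows the range around the answer,
# but you stop at "close enough", never at an exact point.
x = 1.0
for _ in range(6):
    x = (x + 2.0 / x) / 2.0

print(x, x * x)                  # 1.4142135623..., and x*x is approx. 2, not 2
print(math.isclose(x * x, 2.0))  # True: "approx." is the honest answer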
 
Somehow that reminds me of the joke (using the first version I found with a web search, from reddit):
... they are walking through the woods when they spot a deer in a clearing. The physicist calculates the distance to the target, the velocity and drop of the bullet, adjusts his rifle and fires, missing the deer 5 feet to the left.

The engineer rolls his eyes. 'You forgot to account for wind. Give it here.' He snatches the rifle, licks his finger, estimates the speed and direction of the wind, and fires, missing the deer 5 feet to the right.

Suddenly, the statistician claps his hands and yells "We got him!"
 
(Can we get the forum to stop trying to infinipush notifications?)

It is even more radical than you both seem to understand. Numbers, units, the ones that make each 2 in 2+2, are remnants of a deterministic tradition. They aren't actually necessary for a statistical calculation.

Explicitly, we have no notation for this. Implicitly, when an autogenerated program shifts some bits 'ere and a few yonder, it is making statistical calculations that at no point involve a number or any unit, any discrete thing.

In reality, both the physicist and the engineer would be sitting there writing out calculations for their shot while the statistician already went, "...I don't know, about here," and killed the deer with a margin of deviation X from the center of the deer's heart.
 
People shouldn't use ChatGPT or Gemini or whatever AI if they are totally "blind" on the subject.
From my professional point of view, these AI helpers are just guides to make you remember how to do something or to give you a direction, but not the solution.
In some cases it may point to (generate) the solution, but you must be fully aware when it is just giving you the possibility of reaching nowhere, and believe me, it can take some hours of your work until you realize that.
I lost 2 hours with Gemini and then ChatGPT trying to figure out how to do something; it helped me with several tools I had never heard about, working on files and so on, but in the end I had to resolve it myself.
I now know how to use those tools, even though the exercise was totally useless. :)
 
You don't know how to use those tools lol. They know how to use you.

In 5 years, those tools will be set to run on some mainframe and your job won't exist. Autogenerated program engineers will have the last laugh.
 
Yeah, the danger of blind faith in AI is flabbergasting.

So much falls by the wayside:

Being decent to other human beings in your immediate vicinity.

Solid education that allows you to see when someone is spouting nonsense. Said nonsense can come from the AI or the user.

Awareness of surroundings. Yeah, you can get to the Eiffel Tower, but no amount of AI on your phone will save you from a pickpocket, who will probably steal your phone as well as your cash. An AI won't tell you that you can't afford the trip to Paris to begin with. An AI won't tell you that correlating tomatoes in Australia to how much an Amazon warehouse manager can save per worker is completely ridiculous. And it won't apologize to you for giving bad advice that ultimately lands you in deep shit.

Oh, and AI won't tell you you've gone off-topic in a conversation. The original intent of this conversation was to offer some encouragement to OP...
 
Being decent to other human beings in your immediate vicinity.

Solid education that allows you to see when someone is spouting nonsense. Said nonsense can come from the AI or the user.
Hey, I'm just as horrified by it as you are. I don't touch anything autogenerated with a ten-foot pole. Maybe because of, rather than in spite of, my actually understanding what it is. If you have to lie to yourself to feel comfortable, maybe you are the fertile field in which imbecility grows.

Awareness of surroundings. Yeah, you can get to the Eiffel Tower, but no amount of AI on your phone will save you from a pickpocket, who will probably steal your phone as well as your cash. An AI won't tell you that you can't afford the trip to Paris to begin with
You're missing the point. The problem with autogenerated programs is not that they can't do any of those things. The problem is that they can do all of them. Better than any idiot who would need to use them, that's for sure.

An AI won't tell you that correlating tomatoes in Australia to how much an Amazon warehouse manager can save per worker is completely ridiculous.
Some would say that making sweeping declarations about datapoints on which you have performed no analysis is what is completely ridiculous. What autogenerated program engineers know is that there is almost no way to know in advance what data will and what data won't be useful. So they spend a lot of time trying to perfect data refinement, how to determine scopes. It used to be that they considered volume of data to outweigh content of data 100% of the time. With that paradigm they created the autogenerated music that destroyed the authored music industry. That was 10-20 years ago. They have refined their models since then.

And it won't apologize to you for giving bad advice that ultimately lands you in deep shit.

I frankly find the value you put on apologies humorous. I'd rather someone not cut my arm off than apologize for it later, thx.

Oh, and AI won't tell you you've gone off-topic in a conversation. The original intent of this conversation was to offer some encouragement to OP...

The topic is OP's depression over his father's takeover by an evil software spirit, and the wider takeover in the world. Also general teen angst. You may think that telling him that all these worries are insubstantial will help. I come from the school that says the best cure for depression is a solid dose of reality. Because reality, one can work with. Depression is fundamentally the state of feeling powerless.

Well, the stage of powerlessness where you are still trying to fight it. Sometimes it goes terminal, and people walk down the path of self-delusion. I guess that's a cure for depression the same way a fire is a cure for a frying pan.
 
People shouldn't use ChatGPT or Gemini or whatever AI if they are totally "blind" on the subject.
From my professional point of view, these AI helpers are just guides to make you remember how to do something or to give you a direction, but not the solution.
In some cases it may point to (generate) the solution, but you must be fully aware when it is just giving you the possibility of reaching nowhere, and believe me, it can take some hours of your work until you realize that.
I lost 2 hours with Gemini and then ChatGPT trying to figure out how to do something; it helped me with several tools I had never heard about, working on files and so on, but in the end I had to resolve it myself.
I now know how to use those tools, even though the exercise was totally useless. :)
I do agree for the most part... Well, we do seem to live in a world where admitting that you don't know something is somehow shameful, and a reason for ChatGPT to replace you. The fact that the other person is in the same boat as you (as far as internal knowledge goes) - that seems to not really matter. I think that's unfortunate.

As a helper tool, ChatGPT can be useful - on my last job it took me about an hour to refine a PowerShell command that became a go-to tool in my toolbox. And I know this about myself: without ChatGPT, it would have taken me at least a whole day, possibly even two, to get to that point. I can do my reading of documentation on Microsoft's website, I can play with PowerShell commands and get the results I'm after - but yeah, it would take me a while, mostly deciding on the next step in research. The same can be said about sed and awk on UNIX; the important part is being able to do research and understand what you're looking at.
 
Yeah, the danger of blind faith in AI is flabbergasting.

So much falls by the wayside:

Being decent to other human beings in your immediate vicinity.

Solid education that allows you to see when someone is spouting nonsense. Said nonsense can come from the AI or the user.

Awareness of surroundings. Yeah, you can get to the Eiffel Tower, but no amount of AI on your phone will save you from a pickpocket, who will probably steal your phone as well as your cash. An AI won't tell you that you can't afford the trip to Paris to begin with. An AI won't tell you that correlating tomatoes in Australia to how much an Amazon warehouse manager can save per worker is completely ridiculous. And it won't apologize to you for giving bad advice that ultimately lands you in deep shit.

Oh, and AI won't tell you you've gone off-topic in a conversation. The original intent of this conversation was to offer some encouragement to OP...

Meta might be onto something with Quest VR standalone: You can get the human element by hopping into VRChat :p

AI on-headset and clear passthrough make it a wild reality today!


Beat Saber also handles exercise for depression; the future is here :p
 
Some would say that making sweeping declarations about datapoints that you have performed no analysis on is what is completely ridiculous. What autogenerating prgram engineers know is that there is almost no way to know what data will and what data won't be useful. So they spend a lot of time trying to perfect data refinement, how to determine scopes. It used to be that they considered that volume of data 100% of the time outweighed content of data. With that paradigm they created the autogenerated music that destroyed the authored music industry. That was 10-20 years ago. They have refined their models since then.
How about too many logical connections to make? And how solid are those connections? Is there something that can be easily changed that will make the correlation untrue?

Like correlating consumption of ice cream to drownings, a classic example used by actually competent statisticians? You can drown in a bathtub, but it will be blamed on alcohol rather than ice cream. You can eat too much ice cream, to the point you hate it, and still not drown, like me (I do know how to swim). You kind of have to show that if you remove the ice cream from the equation, that will affect the number of people drowning in all kinds of situations, even ridiculous ones like Gitmo waterboarding. Yes, ice cream alone. If ice cream gets replaced by something else, that doesn't matter; that's the same as removing ice cream from the equation, and it should therefore result in fewer people drowning, even after Gitmo waterboarding. Valid statistics absolutely do require correlations to be that solid. Otherwise, you're just spouting nonsense, and it's not worth taking you seriously.
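
For anyone following along at home: the textbook resolution is a confounder. Hot weather drives both ice cream sales and swimming. A toy Python simulation (every coefficient below is invented) makes the point.

import numpy as np

rng = np.random.default_rng(1)
n = 365

# The confounder: daily temperature. All coefficients are made up.
temp = rng.normal(15, 10, size=n)
icecream = 100 + 8.0 * temp + rng.normal(0, 20, size=n)   # sales rise with heat
drownings = 2 + 0.3 * temp + rng.normal(0, 2, size=n)     # more swimming in heat

def partial_corr(x, y, z):
    # Correlation of x and y after regressing the confounder z out of both.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(icecream, drownings)[0, 1])   # strongly positive: the raw correlation
print(partial_corr(icecream, drownings, temp))  # near 0: take out the heat, the link vanishes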
 
Meta might be onto something with Quest VR standalone: You can get the human element by hopping into VRChat :p

AI on-headset and clear passthrough make it a wild reality today!


Beat Saber also handles exercise for depression; the future is here :p
Until you try to lift some weights with that helmet on and break a $5000 helmet with about $100 worth of barbells. Or walk into a wall. Hell, some people got pulled over trying to use Tesla's Autopilot feature while wearing a VR helmet.
 
Until you try to lift some weights with that helmet on and break a $5000 helmet with about $100 worth of barbells. Or walk into a wall. Hell, some people got pulled over trying to use Tesla's Autopilot feature while wearing a VR helmet.
Meh, I can walk on a treadmill with a headset on and play a game; if I were confident enough to lift weights with one on, it seems easy enough :p

View: https://www.youtube.com/watch?v=-xLePuQL3RQ&t=47s


Had no problem with walls even with Quest 2's odder perspective, but the front of the headset can take hits too; the Q2's is a separate faceplate, and convex, so the cameras don't get hit.
 
Meh, I can walk on a treadmill with a headset on and play a game; if I were confident enough to lift weights with one on, it seems easy enough :p

View: https://www.youtube.com/watch?v=-xLePuQL3RQ&t=47s


Had no problem with walls even with Quest 2's odder perspective, but the front of the headset can take hits too; the Q2's is a separate faceplate, and convex, so the cameras don't get hit.
I wouldn't blow a few grand on a treadmill that I may end up using only a few times, if that. I've got better things to spend my money on. :rolleyes:
 
How about too many logical connections to make? And how solid are those connections? Is there something that can be easily changed that will make the correlation untrue?

Like correlating consumption of ice cream to drownings, a classic example used by actually competent statisticians? You can drown in a bathtub, but it will be blamed on alcohol rather than ice cream. You can eat too much ice cream, to the point you hate it, and still not drown, like me (I do know how to swim). You kind of have to show that if you remove the ice cream from the equation, that will affect the number of people drowning in all kinds of situations, even ridiculous ones like Gitmo waterboarding. Yes, ice cream alone. If ice cream gets replaced by something else, that doesn't matter; that's the same as removing ice cream from the equation, and it should therefore result in fewer people drowning, even after Gitmo waterboarding. Valid statistics absolutely do require correlations to be that solid. Otherwise, you're just spouting nonsense, and it's not worth taking you seriously.

That logical chain you are trying to find, and which indeed would take a prohibitive amount of computation, is not what statistics, in the modern, quantum sense of the word, is about. Here's how it works.

You take state: CONSUMED x ICECREAM DURING x ABSOLUTE TIMEPERIOD AND x TIMEPERIOD RELATIVE TO y TIMEPERIOD

Then you take state: DROWNED

Then you superimpose the states.

You gather all possible data where both states exist, where only one exists, and where neither exists.

You derive a statistic for the likelihood of one state being accompanied by the other.

You run that number against cases that you didn't use to derive your statistic.

You refine your statistic.

As you can see, the computational power required for this operation is exactly the same as the computational power it would take to determine probable future correlation between state FOOT KICKS BALL and state BALL GOES FLYING. You are not asking questions of why; you are not interested.

Then you get something that autogenerated programming engineers like to call "emergence."

God's world is just bigger than your imagination.

Needless to say, if the correlation is weak, you discard it; if it is strong, you keep it; and so on.
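
In case the loop sounds mysterious, it fits in a few lines of Python. Toy data, made-up probabilities, an 80/20 split between derivation and validation:

import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Two binary "states" per record, with an association baked in for the demo.
icecream = rng.random(n) < 0.3
drowned = rng.random(n) < (0.05 + 0.05 * icecream)

split = int(n * 0.8)  # derive on 80% of the cases...

def lift(state_a, state_b):
    # How much likelier state_b is when state_a is present vs. absent.
    return state_b[state_a].mean() / state_b[~state_a].mean()

derived = lift(icecream[:split], drowned[:split])

# ...then run that number against cases you didn't use to derive it.
holdout = lift(icecream[split:], drowned[split:])
print(f"derived lift {derived:.2f}, holdout lift {holdout:.2f}")

# Weak on the holdout: discard. Strong: keep, refine, repeat.
print("keep" if holdout > 1.5 else "discard")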

Go ahead man, laugh. Einstein laughed too, while this method was used to turn his theories into a city-destroying uranium bomb.

Lol, and as I scroll up, I see it is relied on by you as well, to derive PowerShell scripts.

The question that people who use these autogenerated programs for work should ask themselves is not whether it is some kind of mark on their honor, which is the approach most people seem to take. The question they should be asking themselves is: if I can type a prompt into the autogenerated software's interface, why can't some exec who has never opened a terminal window in his life (or an Asian pseudo-slave on a labour farm, for that matter)?

Today it may seem like it can at most inform a trained opinion. That was the case in the late '80s with chess programs. Now chess programs will beat the strongest players 1000 times out of 1000, even playing without rooks and giving the players unlimited time.

Another question you should be asking is: is this software possibly collecting statistics on the usefulness of its responses in order to further refine itself, as well as on my human responses in order to further refine its psychological operation?
 
I used that for the first time a few mins ago :p https://chatgpt.com/

The one I linked to (https://chatgpt.com/g/g-G7TYuJJCE-gpt-plus) says it is
"GPT Plus" By Marius Lekys. It's NOT the same as "ChatGPT Plus".
It's for "Detailed explanations on varied topics, making technical info easy to grasp."

There's a button inviting you to ask how it's different.
I did that, and it says it gauges your level of expertise from your question,
and adjusts its answers to suit.

So I include some info in my questions.
For example, I asked this last night:

"I'm familiar with H.264's Keyframe and GOP, but Matroska's Cluster and Block seem to be in a different plane; their description at www.matroska.org/technical/diagram.html is not clear to me. A block seems to be only a single frame; perhaps it combines audio and video togeher? A cluster is bigger, but how big? Please clarify."

The answer began with:
"A Matroska `Cluster` and `Block` are not at the same conceptual level as H.264’s GOP and keyframes.
They serve as **container-level** storage units, not codec-level structures."
then provided details that made sense.

At perplexity.ai I had made a "space" a month ago, with this custom instruction:
"Correctness is essential. Be logical and precise (less verbose than usual)."
For free, up to 3 times a day you get Pro access, and can choose "deep research".
Just now I gave the same question as above, with "deep research" checked.
The answer was longer than from "GPT Plus", and it ended with a list of 40+ sources.
Basically it presented the same facts as "GPT Plus". I haven't double-checked them yet.

I was an AI skeptic, but since using these two I think it's useful,
as long as you are careful to make your questions precise.
I guess this is what they mean by "know how to use AI".

I still don't like that they call it "intelligence", and make it pretend to be human.
I want to use it as a machine, and get good results by understanding enough about how it works.
 
That logical chain you are trying to find, and which indeed would take a prohibitive amount of computation, is not what statistics, in the modern, quantum sense of the word, is about. Here's how it works.

You take state: CONSUMED x ICECREAM DURING x ABSOLUTE TIMEPERIOD AND x TIMEPERIOD RELATIVE TO y TIMEPERIOD

Then you take state: DROWNED

Then you superimpose the states.

You gather all possible data where both states exist, where only one exists, and where neither exists.

You derive a statistic for the likelihood of one state being accompanied by the other.

You run that number against cases that you didn't use to derive your statistic.

You refine your statistic.

As you can see, the computational power required for this operation is exactly the same as the computational power it would take to determine probable future correlation between state FOOT KICKS BALL and state BALL GOES FLYING. You are not asking questions of why; you are not interested.

Then you get something that autogenerated programming engineers like to call "emergence."

God's world is just bigger than your imagination.

Needless to say, if the correlation is weak, you discard it; if it is strong, you keep it; and so on.

Go ahead man, laugh. Einstein laughed too, while this method was used to turn his theories into a city-destroying uranium bomb.

Lol, and as I scroll up, I see it is relied on by you as well, to derive PowerShell scripts.
If you were taking an actual statistics class, you'd get an 'F' from the instructor and be told to re-take the class, or to take a few prerequisite classes first, because that post demonstrates an utter and complete lack of understanding of what statistics even is. Of course, you can try lying on your job application like your boss did, and drive the place of employment into the ground for yourself, your boss, and everybody else who needs to make decisions based on valid statistics because it's too dangerous to take matters into their own hands.

Otherwise, I have a piece of real estate I can sell you for cheap, and then tell you the local laws require you to insure your house. But because of where it is, your attempt to insure the house will be denied, based on some pretty valid statistics that actually will tell you with 100% certainty that the house will burn down within a month if you build on that spot. And it will be an act of Mother Nature, so no, you can't get it covered.

Reality is bigger than anyone's imagination. Reality is stranger than fiction, and bites way harder. And yes, it takes solid knowledge of actual reality (and not delusions and imagination) to be able to bite back. Yes, that includes solid statistics, like it or not.
 
I guess that would be why that professor is teaching courses to 18-year-olds for 100k a year, rather than designing autogenerated software for 10M.
 