> I am sure the thought is comforting. Even though it really just pushes the ball into "the next generation's lifetime." Not to mention the problem with it being comforting: does it mean that if they were going to exist in your lifetime, then you would fret?

Exactly. It is so far down the line that I don't want to waste my time worrying about something that isn't going to affect me.
> I'm at least old enough to know that history will do what it does. Best one can do is try to understand it as it happens.

History will do what it does. And that is repeat itself. I have lived through enough gimmicks now to know not to worry.
We are lifetimes away from AGI.
> Clearly, you have lived a blessed life.

Why? You let yourself be affected by gimmicks in the past? Perhaps you want to take the opportunity to reflect on that.
> Make up your mind. Is it (or I guess anything) not worth worrying about, or is it worth worrying about but only if it happens in your lifetime?

As mentioned, in this case, they are both the same. This "AI" risk you are alluding to will not happen in our lifetime. So don't worry about it.
AGI is "Artificial General Intelligence". Nothing to do with anthropomorphism. If the "AI" needs to be tailored to a specific problem domain, then it is basically a glorified algorithm and humans are the sole driving factor. In hundreds (if not thousands) of years, once AI can adapt without needing spoonfeeding by humans to achieve a single prescribed function, then things may get more interesting.In my opinion, an eternity, because artificially generated intelligence is a misnomer (meant to awe people I suppose, like certain religious concepts of the past).
AGI is "Artificial General Intelligence".
> It is simply that I, like most of humanity, have experienced the horrors of history repeating itself. The experience has taught me the opposite of not to worry.

Being in tech for a while, you get used to the rollercoaster of vaporware.
> Well, they are not. It's different to say "autogenerated software is not worth worrying about" and "autogenerated software is only worth worrying about beyond X amount of computing capacity."

Those are both different questions from your previous post, which is the one I answered.
We would like clarification, if possible.
> Ok, you got me. I don't really keep up with the marketing.

The term AGI came about in 1997. We still aren't much closer to it now either. This recent LLM stuff is a fun monetizable distraction but might even slow down progress towards AGI.
> Well, intelligence is a human trait.

Disagree. A rabbit has more intelligence than what we are calling "AI" today.
> Disagree. A rabbit has more intelligence than what we are calling "AI" today.

Come on, this is a silly parsing of words. If you said "my coffee machine's warm embrace," that would be an anthropomorphization, even though a bear could embrace you also.
> For this question. The "AI" algorithms today are not a risk, regardless of how much computing capacity you throw at them. It's like saying a quicksort algorithm will become sentient or take over the world if you run it on a really fast computer. This is not the case.

This is another silly parsing. Fine, not "computing" capacity, call it "generative" capacity.
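The quicksort analogy above is easy to make concrete. A minimal sketch, with hardware speed stood in for by an artificial per-call delay (an assumption for illustration only): the algorithm is a pure function of its input, so a faster machine changes when the answer arrives, never what the answer is.

    import time

    def quicksort(xs, delay=0.0):
        # Deterministic algorithm: the result depends only on the input.
        time.sleep(delay)  # stand-in for running on slower hardware
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot], delay)
                + [pivot]
                + quicksort([x for x in rest if x >= pivot], delay))

    data = [5, 3, 8, 1, 9, 2, 7]
    fast = quicksort(data)              # "really fast computer"
    slow = quicksort(data, delay=0.01)  # same algorithm, much slower machine
    assert fast == slow == sorted(data) # identical output either way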
> What do mass lay-offs of software engineers and of general management tell you? That in many cases, they have dispensed with the human operator altogether, and just turn the machine on in the mornings.

It means huge corporations with thousands of engineers and managers find they don't need them anymore at the moment due to the economics of their business and the world. It does not mean AI has replaced them all. In fact, I'm more sure of that than your next statement:

> In others, maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.
Companies don't replace software engineers with LLMs.
They just keep a much lower headcount and expect each remaining engineer to do multiple times as much with the help of machine learning.
That open headcount goes right over to hiring machine learning specialists and data scientists. So if the software engineering departments do discover that they are suffering - too bad, the headcount is gone.
> That open headcount goes right over to hiring machine learning specialists and data scientists.

Who, per capita, cost about 1.5x as much as software engineers (data scientists), and 3x (ML experts). Source: been there, done that, got the T-shirt.
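Rough arithmetic on those multipliers (base salary hypothetical, for illustration only): at 100k per software engineer, ten freed engineer slots are a 1,000k budget, which funds about 1000/150 ≈ 6-7 data scientists or 1000/300 ≈ 3 ML experts for the same spend.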
> Well, I am downloading Google Antigravity right now. An entire IDE dedicated to vibe coding. Cover me, I'm going in.

Please post an experience report! If the tunnel you have crawled into is nice and well outfitted, I might go in there too.