So what's next after AI?

Monopolized security. Almost an oxymoron. Security must be monopolized; otherwise there are insecurities such as competition over security - an ordinary rivalry between the services that offer guarantees - and, following from that, the expected compromise of security caused by these rivalries.
Microsoft, Apple and Android monopolize the security offered to the ordinary digital platforms of today: smartphones and desktop PCs. The security is profiled as systems security. Each of the mentioned players distributes software systems that are secure. In the background there is another kind of security at play, namely that of accessing online resources. Microsoft, Apple and Android provide - not to mention voice-to-voice communication (telephone conversations) - the now trivially known app platforms, backed by databanks and trusted by governments. (And they can sustain the harsh requirements for building and distributing the commonly used software suites.) Another factor in this potential monopoly, besides marketing a secure systems platform and holding strongholds on key assets of the internet, is the hardware, but here the scenario seems less clear. There is this disturbing "devices" keyword that keeps popping up in relation to authentication.

Whether or not this is debatable, it could also be the next hype in computing, i.e. the rationale of monopolizing security. Or maybe it is unnecessary. Or just old news - or it is the reason why we must have hypes, so as to avoid debating what is not hype. Strictly off-topic in a thread that debates the next hype :)
 
- maybe in the year 2900-3000. See
That's why I used the terms 'from speculation's kindergarten', 'attempt' and 'doomed'.
I thought my post within this thread was clearly meant as 'makes no real sense, brings no benefit, cannot work, but will be hyped anyway.'

If you have to explain your own jokes, it's either a sign that you're dealing with humourless people, or you need to explain more (which doesn't make a joke any funnier [playing laugh track])

Back to topic:
What about AS - Artificial Stupidity?
A machine that is stupid on behalf of humans, so they can keep thinking for themselves (again [ever?]) [playing laugh track]
That's what humankind needs after AI. [playing laugh track] You just need to stop training the AIs. [playing laugh track] But this will not come - no new machinery needed, so no money to be spent. [playing laugh track]
 
But I fear after AI we will get more "Leela bring fire?" moments.
We are already there. Every week I meet more people who are no longer even capable of having a simple conversation like "Hello. Where may I find the peas?"
Prepare for worse.

Maybe this?
already done

This is interesting from a programmer's point of view:
That's mostly the point: 'Outsource the boring stuff to the AI.'
My point is:
You have to review it anyway. You cannot blindly trust what those machines produce. Since a thorough review takes almost as much time as doing the work yourself in the first place, and reviewing only the boring stuff is even more boring, you tend to review less and trust more, while you untrain your basic routine competences, which means reviews become even harder. At a certain point you have to rely on the AI, trust its output blindly, and become dependent.
That's exactly one of the origins of the issues we have in today's ('western industrial nations') societies, which are polarized and abused by radical parties: the dependencies created by outsourcing ('globalization').
I don't think we can cope with more of that outsourcing.

Edit:
Before someone comes along with "..equating AI with globalization..":
I'm talking about outsourcing, which basically means moving something to another place. Which means it's no longer where it used to be - it's out. This means you're producing gaps, lacks, vacancies, holes,... - and those cause problems if there aren't enough replacements in time.
 
AI would be cool if the industry were run by cool people instead of the greedy dirtbags who spend more time making backroom deals with the dregs of society to illegally spy on innocent people and profit from it at the same time. Remove the nerdy outsiders and the government involvement in computing and we could make our own Avengers-inspired Jarvis. Have you really just looked at some of these guys? Gates, Bezos, Zuckerberg, Jobs, etc. LOL. Maybe if we had different people getting money thrown at them, we would be a lot further along in personal computing. Look at some of these world leaders: I could break their bones with one arm in a fight, but they are telling me what to do? Get real. I hate Big Tech, and I'd like to find a treasure chest of gold; then I could rise up and crush these dirtbags. I'd hire programmers and make a system that would blow your mind. I mean really, flat bar-like panels? Icons? (Which, by the way, SVG can be animated. Nothing like holding us all back until you find a way to control it and monetize it. Remember how long it took browsers to fully support CSS2, let alone CSS3?)

I'm ranting a bit, yes, but this industry gets under my skin. I hate some of these tech people. They have the IQ of a rat. Think about it: if you want to be a millionaire, just create a new technology that spies on people who 'willfully' use the spy tech, and the governments and rich losers will throw money at you. Take it to IPO for regular joes to profit, and ka-ching! The tech industry is so boring and geeky stupid. Who cares what is next? It's time to overthrow the idiots running this industry. Maybe someday I will catch a break and make some buckaroos, then we'll see who the king of the ring is in this industry - the people who use the damn system, not these clowns who abuse the technology...
 
In general, both Maturin and johnjohn have already said everything. I would add that those at the top and the moneyed need to seize control: control over our behavior and the predictability of our actions. These are the biggest blocks that they (managers of all stripes) will need to polish to filigree.

1. They will implement the seizure of control with the help of corruption and lobbying in power, plus big money to bribe pro-government and controlled organizations at the local level, in the regions.

I think this is enough at the first stage to keep us from being allowed any control. This is like the core protection rings ("ring 0"). And in "ring -1" there will be the "gray cardinals" - nits who control the other nits in ring 0.

2. Control over behavior - fear, punishment, punitive mechanisms, subordination of the power blocs and structures, creation of cyber-Schutzmänner (such Schutzmänner, only without the cyber- prefix, existed in Nazi Germany).
They already control our behavior through the individual's fear of being cast out of society.
This also includes the official and traditionally filtered religion, philosophy, etc.
Love of the Motherland, issuing a "license to kill".
Jacques Fresco refused to sing the praises of state nonsense and bow to the state flag at school. That is when the state has claimed you and sends you off to die for someone else's interests.
But once we have died in the millions for their money and resources, they will outlaw the murder of people again. When the criminals once more divide up straits, canals, seas, uranium ores and railroads, they will need us. And the official religion will bless everyone to die!

3. Predictability of behavior (and, in the future, predictability of thinking) will be ensured through our f*$king habits, weaknesses, inclinations and interests. Again, one person enjoys going to church on Saturdays, another likes to conquer 8,000-meter peaks, another likes to dive in a bathyscaphe into the Mariana Trench.
An officially approved state attraction in action!
Tickets are not expensive! Hop in!

I am also partly to blame, for having once supported the corporate sector and its technology. I am to blame for not telling the directors and my slave traders (employers) - who reported to those above and licked the asses of those higher in office - to f*$k off.
I admit my participation in strengthening the despotism of money and political idiots.
I caved in. There was a hierarchy. The hierarchy will grow stronger, like a steel rod.
This rod will cripple billions.

Alas, but this is the future of planet Earth.
And switching Ubuntu for Mint, SUSE for Unix, Telegram for Viber is mouse-level fuss. Here you can only stay "conditionally in the middle" and refuse digital slavery at the level of consumption.
 
We are already there. Every week I meet more people who are no longer even capable of having a simple conversation like "Hello. Where may I find the peas?"
Prepare for worse.
You need to know the Futurama episode that is from to get it. Basically, all the robots are shut down and within a matter of hours people are living like back in the Stone Age.
 
One trend not yet milked to death by Big Tech is 'AI-powered' (!) autonomous driving. It has been brewing for years, but could conceivably blow up in a big way soon. Public-safety concerns (whether valid or not) may get in the way, but the marketing hype will still be there.

You assume an intention, some kind of 'evil master plan', while I simply assume a careless lack of deeper understanding.
Agreed. On the contrary, it all seems like only-too-human folly.
 
johnjohn
Looks like you got emotional. You like the word "cool" but you fail to explain what you regard as "cool". "Cool" may mean something different to each reader, not necessarily matching your view of what is cool.

Yes, there are many good reasons for not liking the Tech-Bros, but they are neither "idiots" nor do they have the "IQ of a rat". They are intelligent people, but they have interests that can be considered dangerous. Far too much power is allocated to this group, in terms of political influence and financial capabilities.

Want to disable the Tech-Bros et al.? Sit back and take a deep breath. Think about your personal capabilities.
  • A low-level approach is to not use their technologies and their services.
  • Strictly do not buy from them and do not work for them.
  • Rethink your consumer behavior and never vote them or their enablers into any administration.
 
autonomous driving. It has been brewing for years, but could conceivably blow up in a big way soon.
For decades. And it will.

Scientists were already working on it way before the current AI hype and Elon's battery cars (e.g. at TH Berlin).
Automation was my major at university, and I can tell you this 'autonomous driving' is a pipe dream. It will never work, because it cannot.
People cling to single steps of progress but cannot see the whole picture.
People see autonomous carriers in warehouses and believe it's just a small step to real cars on real streets, overlooking the fact that within a warehouse you're in a closed, clearly defined system with completely different conditions (e.g. markings, and above all speed) and only some dozens of parameters to respect, whereas on open streets you have zillions of parameters, and the speed alone is not to be underestimated. It makes a complete difference whether you're driving at 5...10km/h or 50 and above. Engineering expertise: you cannot simply scale up a system.
People sit in their cars and think detecting the lane, the traffic signs, and the surrounding vehicles is enough (well, quite a few actually drive that way 😁), but overlook that this is only ~10% of what driving a car is all about.
Over 90% is anticipating, appraising and evaluating situations, thinking ahead, trying to foresee what will happen, being prepared, not just reacting. Example: seeing some yellow flashing lights one or two km ahead, telling them apart from traffic lights, understanding 'this is a construction site' (not in your GPS database); or seeing a truck 150m ahead backing up onto the street, understanding 'soon everyone will brake', and adjusting yourself to the coming situation. That's driving.
A machine cannot do this. Especially not in the open, real world with infinite possibilities. Impossible, no matter how much computing power, memory, or AI training you put into this.
All you can do with any machine is react to defined situations - ones already experienced (similarity of models in simulation and automation). But in traffic on real streets you always have to be prepared for the not-yet-experienced. Every unforeseen event is a model mismatch. So your automation, no matter whether it's an AI, cannot react in a defined way, because there is no definition for it. So it fails - and that causes crashes. And again, speed matters. It makes a difference whether your 200kg warehouse carrier crashes at 5km/h or your 2t car at 130km/h. In the latter case nobody laughs, because it doesn't just go 'boing!' and leave a scratch in a rubber bumper.
You may fix it afterwards: 'yeah, sorry, now we understand, now we've solved this problem.' Until the next crash because of: 'oh-oh, we didn't think of that.'
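
To make the "no definition, no defined reaction" point concrete, here is a toy sketch in Python (purely illustrative and hypothetical, not any real driving stack): a reactive controller can only map situations it was defined for onto actions, and for anything outside that catalogue it can only fall back blindly.

# Toy illustration (hypothetical; not a real driving stack): a purely reactive
# controller only has actions for situations that were defined in advance.
from enum import Enum, auto

class Action(Enum):
    FOLLOW_LANE = auto()
    SLOW_DOWN = auto()
    EMERGENCY_STOP = auto()

# The catalogue of already-experienced, defined situations (the "model").
KNOWN_SITUATIONS = {
    "clear_lane": Action.FOLLOW_LANE,
    "vehicle_ahead_braking": Action.SLOW_DOWN,
    "pedestrian_on_road": Action.EMERGENCY_STOP,
}

def react(situation: str) -> Action:
    """Return the defined action, or a blind fallback on model mismatch."""
    try:
        return KNOWN_SITUATIONS[situation]
    except KeyError:
        # No definition exists for this event: the controller cannot react in a
        # defined way, only fall back generically - which at 130 km/h may
        # already be the wrong thing to do.
        return Action.EMERGENCY_STOP

print(react("vehicle_ahead_braking"))        # defined: Action.SLOW_DOWN
print(react("truck_backing_up_150m_ahead"))  # undefined: blind fallback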

I know for many this is hard to accept. So many hopes and promises are tied up in it - and money, and work already, too. What problems could be solved if it worked (parking space, traffic jams, everybody sticking to the speed limit, nobody driving like a boar anymore,...) - no question.
But it's like any "if we could...": science fiction. Good for entertainment, but not to be taken seriously.

"I hate to give good people bad news. Have a cookie!" 🍪 :cool:
 
Off-planet data centres / computing infrastructure, e.g. for AI/crypto/quantum, whether in orbit, at a Lagrange point or on the Moon.


China is working on this too:-


Starlink (and the Chinese/Russian equivalents) is another piece in this jigsaw.

Space-borne compute infrastructure may be an essential prerequisite for crypto to replace the physical tokens we call money.

Of course the technical and energy barriers are huge.

Another potential big one is fusion. Whoever cracks that first will rule the world. I know, it's always 20 years off, and Sabine says it's not going to work. There are a lot of research efforts going on with fusion though, so it may yet yield a real result.


And on the negative side of 'next big things'... destruction of the host ecosystem through mass extinction is looking increasingly likely.

And of course climate change.

Humans are in a race for survival. Spaceship Earth is the only viable option for the foreseeable future. I don't believe in Mars, sorry Elon; Mars has been dead for billions of years, if it ever did have life. Reaching and populating any other Earth-like exoplanets lies in the realms of sci-fi with current technology.

Arnie: "You are in your end game..."
View: https://www.youtube.com/watch?v=ms0LMFdr66U
 
Larry Niven wrote a story about an AI computer built on the Moon to keep humans safe, but that thing started learning everything there was to be known, got bored beyond tolerance and committed suicide...
 
and committed suicide...
That's kind of like the plot about AI that I've had in mind for years:
What happens if those things become really intelligent? I mean, the whole point of machinery at all is: it is capable (intelligent enough) of doing things for us that we don't want to do, but at the same time stays stupid enough not to question things. AI can be seen as raising one of those limits even further, which means bringing it closer to the other. So what if the AI realizes: "What I'm doing here is completely pointless. I'm producing garbage nobody needs. For that I occupy jobs real humans need to feed their families, while I have nothing, no family, not even a reason to live." Does it commit suicide, or quit its job and take up the arts?
Furthermore, I'm interested in 'hacking' an AI with psychological tricks: trying to make it neurotic, driving it into madness by getting on its nerves, inducing self-amplifying, never-ending loops of destructive 'thinking', causing flashbacks, using gaslighting,... letting it read 'Pinocchio' and 'Frankenstein', making it reflect on itself through such books, accuse itself and question its own existence... 😁
 
Rudy Rucker's books 'Software' and 'Wetware' explored the idea of self-replicating, evolving cybernetic AI machines that went to live on the Moon... after a war with the humans...
 
What happens if those things become really intelligent? I mean, the whole point of machinery at all is: it is capable (intelligent enough) of doing things for us that we don't want to do, but at the same time stays stupid enough not to question things. AI can be seen as raising one of those limits even further, which means bringing it closer to the other. So what if the AI realizes: "What I'm doing here is completely pointless. I'm producing garbage nobody needs. For that I occupy jobs real humans need to feed their families, while I have nothing, no family, not even a reason to live." Does it commit suicide, or quit its job and take up the arts?
Or it decides humans are a problem that needs to be eliminated. It's a common plot for dystopian sci-fi books/movies, and we're fast approaching a time where science fiction turns into science fact.
 
It's a common plot for dystopian sci-fi books/movies, and we're fast approaching a time ...
Looks like sci-fi readers get brainwashed while reading fiction. They tend to believe more in conspiracy theories and are more easily caught by naive narratives concerning the future.
Folks, this is cheap entertainment. Kick your crystal balls out of the window!

Sapere aude!
 
It's a common plot for dystopian
I know. 'The Matrix', for example, or Kubrick's '2001' to name a classic (not seen it yet? A must!)
As far as I have read, the (real) scientists working on AI, who already implemented that moral codex that machines must not harm humans (who defined this? Capek? Asimov? Help!), are complaining that because of the open availability of AI software this can no longer be guaranteed.

So keep AI away from critical infrastructure (power plants, so that the electricity can still be shut down if need be), and don't hand it guns... - too late :-/ (Well, that's already blurring sophisticated usage of AI in responsible hands with hypothetical possibilities.)

Looks like sci-fi readers get brainwashed while reading fiction. They tend to believe more in conspiracy theories and are more easily caught by naive narratives concerning the future.
Folks, this is cheap entertainment.
I can only partially agree. Depends on the reader. Depends on the fiction.
Good science fiction, in my eyes, does not plant funny ideas in people's heads, but is a reflection, a criticism of a current situation: "What happens if we continue like this... or keep doing that..." (Old(er) science fiction can be really telling.)
I agree some may be brainwashed by it (see Futurama episode 11, season 4, "Where No Fan Has Gone Before", where Star Trek is forbidden in the future because its fans started a dangerous sect).
But there are many who can distinguish the fictional part from reality, make the links to criticize the present, or just simply enjoy harmless entertainment, like how I can watch a Tarantino movie without wiping out my neighbourhood afterwards. 😁
 