AI: We can't anticipate all of the possible consequences of this technology

Released in a beta version in June by the artificial-intelligence research lab OpenAI, a tool called GPT-3 generates long-form articles as effortlessly as it composes tweets, and its output is often difficult to distinguish from the work of human beings.


A careful reading of the FAQ reveals that there are quite a few possibilities for misusing the technology.

When The Atlantic prompted GPT-3 to opine on these issues, it captured the problem succinctly:

"For the moment, at least, it seems unlikely that generative media will be effective in the same way as traditional media at promoting political messages. However, that’s not to say that it couldn’t be. What it will do is muddle the waters, making it much harder to tell what’s real and what’s not."

It looks like AI won't make things easy in the future. At the very least, be prepared for the possibility of being deceived or manipulated in far more polished ways. AI-generated content will continue to become more sophisticated, and the public will have to contend with the reality that it will be increasingly difficult to distinguish generated content from content created by humans.

So if you are going to use the API, make up your mind before doing so: your usage helps train the AI and helps OpenAI on its way to becoming profitable. That might come back to hurt you somewhere down the road.
 
Pah! :)

The Guardian published a set of text generated by GPT-3, and it wasn't that clever at all: very short, pithy sentences, but many of them not really joined together.


Interesting to see, and "clever" in its way, and I'm sure they'll get there one day. So yes, there is long-term concern about where it might go (and, as you say, places we might not even think of). But it's not quite there yet!
 
Thanks for pointing to the Guardian on this subject.

The text there starts with "I am not a human. I am a robot. A thinking robot." That's your hint. If that disclosure were missing, people might not react with "Pah!" so easily. While GPT-3 is in beta now, expect the technology to improve.

Now, almost every technology has dual-use potential. An assessment of GPT-3's potential for abuse of generative language models can be read there.

 
Far too much content is generated by humans, too, that exists strictly to generate income and has no value otherwise. It exists only to get you to visit so that the web site can get you to click on an ad or buy their product. The content is the sugar-coated cereal when you should be looking for more substantial intake.

Television's so-called "news" works the same way. They do not present "all the news that's fit to print". They present content to get viewers so they can show advertisers why they should spend their advertising dollars on that station.

As a side note: one of my local network-affiliate TV stations proudly announces having hour-long "news" broadcasts at 5, 6, 9, 10, and 11. That is nothing to be proud of.
 
Far too much content is generated by humans, too, that exists strictly to generate income and has no value otherwise. It exists only to get you to visit so that the web site can get you to click on an ad or buy their product.
While click-baiting is a known problem and I regard the advertising industry as a plague, what is your point? Do you welcome AI for this purpose?
 
The point is that machines will be far more effective than humans ever were. At the end of the day, each human only cooks with water, as the saying goes, and has to sleep. Get AI on the job and you will be able to produce WAY more garbage than you did before, and that garbage will also be of higher quality.

It's not like there is any way to stop this, though, so getting upset about it is pointless in my opinion. The only way to avoid the negative sides of such technologies would be to never invent them in the first place. After that it's game over, and there will ALWAYS be that one guy who does it even if thousands would skip it for moral reasons. Nuclear fusion, face recognition, thought reading, parts of gene technology, ... It's always the same pattern, over and over again.
 
A technology is never positive or negative.

About IA, I quote my IA professor:
In IA, the important word is not Intelligence.

For content creation, a lot of human-created stuff is stupid, skewed, or malicious. I am not sure that IA content has to be better in order to have the right to exist. We, as humans, need to learn how to build personal conviction in a world where the truth is just noise in the ambient din.
IA can help us highlight and cross-reference data and scientific facts...
 
IA can help us highlight and cross-reference data and scientific facts...
In your sentence, the important word is "can". It reads neither "must" nor "should". Here we are talking about potential misuse. And I think we should talk about this at an early stage, while legal regulation and ethical guidelines are still lagging behind.
 
We, as humans, need to learn how to build personal conviction in a world where the truth is just noise in the ambient din.

While that's certainly true, it won't work in the real world. You or I will likely try to come up with concepts to deal with this (and I'm actually pretty sure we have long done so at this point), but a very sizable portion of people simply won't. That portion probably makes up the vast majority, and that is where problems arise. People are already overwhelmed with the information they have to filter at the current level, and you can't reasonably expect much more from them. Besides, given that machines tend to get VERY effective at things, there will likely come a point where even the most cautious and critical individual runs into problems properly weighing pieces of information. What we have now could maybe be called noise, but once automation kicks in it's not going to stay at that level, and dismissing the resulting problems in a single sentence is simplistic at best, in my opinion.

IA can help us highlight and cross-reference data and scientific facts...

Nuclear fusion can supply a lot of power ;)
 
If it's intelligent, then it's intelligent. "Artificial" strikes me as fake, and using the word AI (== Algorithm) is a branding strategy to sell junk on TV and elsewhere.
 
Wasn't there a story by Larry Niven about AI? The one where the plans came from an interstellar prankster? It's in The Draco Tavern. Since he was spot-on with so much, why not there? ;)
 
what is your point? Do you welcome AI for this purpose?
My point was that AI-generated content is content produced for the sake of producing it, content for the sake of content, and nothing more. If the content were needed, a human would produce it. At this point, it seems AI is producing content for the sake of producing it. If a human is not producing the content, is it needed?

I see this situation on webmaster forums all the time. Someone is concerned about keywords generated for their web pages and how to manipulate everything to rank higher in search engines, but there is little to no talk about quality content worth reading in the first place, which is the real way to earn high search-engine rankings.

I want AI to help me find solutions, not provide solutions to me for problems I didn't know I had. Or generate things for me to read that I wasn't looking to read in the first place.
 
Now Microsoft has an exclusive licence to use GPT-3, so I'm not sure what that means in "real world" terms.

 
I want AI to help me find solutions, not provide solutions to me for problems I didn't know I had. Or generate things for me to read that I wasn't looking to read in the first place.

As individuals, we prefer functionality over profit as a goal. But generating things we don't need is the main goal of AI, at least on social media and on platforms paid for by ads.

Skynet is already in charge of everything, not because it is clever, but because people deliberately chose not to think.
 
On July 22, 2019, Microsoft announced that it would invest $1 billion in OpenAI, the San Francisco-based AI research firm co-founded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn co-founder Reid Hoffman and former Y Combinator president Sam Altman.

At that time Microsoft said: "To accomplish our mission of ensuring that AGI (whether built by us or not) benefits all of humanity, we'll need to ensure that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is widely shared. If we achieve this mission, we will have actualized Microsoft and OpenAI's shared value of empowering everyone."

With what we know about GPT today, we can question this self-praise. It was a strategic investment in a neural language model that is not only capable of sophisticated natural-language generation and task completion but can also serve as a linguistic weapon. Read the paper on RISKS OF GPT-3 AND ADVANCED NEURAL LANGUAGE MODELS linked in an earlier post.
And we know that AGI cannot be deployed safely and securely, as regulations do not exist and talks on ethics merely serve as an alibi against being attacked on a broader scale.

While a forum member commented yesterday with "Pah!" on GPT-3, it was financed 14 months ago with $1,000,000,000.

On September 22, 2020, OpenAI agreed to license GPT-3 to Microsoft for its own products and services. That means some more dollars were spent.


As Microsoft intends not only to use GPT for its own benefit but also to sell GPT products, this will open Pandora's box of applied, advanced, and weaponized computational-linguistic proliferation.

My alarm always shrills when global tax-avoiding monopolists claim that what they do "benefits all of humanity".
 
The point is that machines will be far more effective than humans ever were. [...]
That's not so clear. Historically, there have been two major schools in AI: the (American) neural networks and the (European) rule-based approach. Both have been refined by fuzzy logic. We've witnessed the rise of neural networks in the past decades; the rule-based approach is now a niche. That doesn't mean the competition is finally decided. Let's look at the major differences: neural networks are very good at reasoning (deep learning), but since they are black boxes, they are very weak at explaining. So, as for the current state of the art, AI can imitate human intelligence in taking decisions in more or less closed domains (e.g. to control machines and robots), or even outperform humans. They can answer questions, but they are very bad at explaining why their answers are reasonable.
As of now, explaining is a human domain. Rule-based AI systems are coming here, but their performance is often -- well, funny, to put it politely. Neural networks are more or less completely failing here.
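The black-box vs. explainable contrast above can be made concrete with a toy sketch (my own illustration, not any system discussed in this thread; the facts and rules are made up). A minimal forward-chaining rule engine can hand back the exact chain of rules it fired, which is precisely the kind of explanation a neural network cannot give:

```python
def infer(facts, rules):
    """Apply rules until no new facts emerge; keep an explanation trace."""
    facts = set(facts)
    trace = []  # which rules fired, in order -- the "explanation"
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(premises)} -> {conclusion}")
                changed = True
    return facts, trace

# Toy knowledge base (invented for illustration).
rules = [
    (("has_fur", "gives_milk"), "mammal"),
    (("mammal", "eats_meat"), "carnivore"),
]
facts, why = infer({"has_fur", "gives_milk", "eats_meat"}, rules)
print(facts)  # derived facts include 'mammal' and 'carnivore'
print(why)    # the chain of fired rules, i.e. the explanation
```

The trade-off the post describes is visible even here: the conclusions come with a human-readable justification, but the system can only ever conclude what its hand-written rules cover, while a neural network generalizes far beyond its rules at the price of being unable to say why.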
P.S.: Hakaba With all respect for the beautiful French language, please write AI instead of IA in English ;)
P.P.S.: getopt you're mixing up the English billion and the German Milliarde
EDIT: Humans often fail to explain, too... 🤬
 
While a forum member commented yesterday with "Pah!" on GPT-3, it was financed 14 months ago with $1,000,000,000.
You called? :-/

My "PAH!" is all about this "AI" "revolution" that is (still) nothing of the sort.

Lots of money spent, lots of clever stuff, and yes, progress being made (but it's not really any form of "intelligence" is it?) as well as concerns about where it's going. But we still seem to be a long way from the promises of yesteryear ...

Back in the 1980s when I started looking at computers and pondering a career in them I was warned there would be no point because they wouldn't need human programmers "soon". I'm still waiting.

Anyway, my "Pah!" was not about your ultimate concerns (as I understood them) but just at "AI" in general and the current hype-storm.
 
Anyway, my "Pah!" was not about your ultimate concerns (as I understood them) but just at "AI" in general and the current hype-storm.
Thanks for the clarification. Yes, "AI" is merely a marketing term, probably not understood by most of the people who use it. But the technology behind what is called "AI" and the like is advancing fast.
Regulation and ethics may fail to catch up, leaving most humans as targets for nudging at best, but also for manipulation and for financial and data exploitation.
 
My alarm always shrills when global tax-avoiding monopolists claim that what they do "benefits all of humanity".
On the one hand, I believe in a libertarian limited system, i.e. property rights and private business; on the other hand, we have these multinational corporations (MNCs) and conglomerate entities with enormous amounts of capital trying to destroy civilisation (social media, data and privacy, search and censorship, big data, AI, etc.). These MNC entities, such as Google (Alphabet Inc.), which I consider Public Enemy Number One, gain unlimited power by investing in multiple countries and thus benefit from different loopholes, legal systems, and business regimes simultaneously. Having problem X in country A? No problem, move your assets to country B! Screwing country C while spending in country D!
I think the whole concept of multinational corporations and conglomerate business should be illegal.
 