ChatGPT: criminal use cases

Status
Not open for further replies.
Fair POV. I wrote that last message from my own perspective, as someone with parents who could definitely fall for it.
I agree, there is certainly an issue, and we should think about how to cope with it. But I don't think we should push governments to do something; rather, we should focus on helping each other along.

Anecdote: when I travelled through Scandinavia, I once came across a woman with a bicycle. She had a problem: the chain had come loose, and she was in distress and didn't really know what to do.
I went to her, asked "can I help?", grabbed the chain and put it back where it belonged, and everything was fine (except for my dirty fingers), because that's the normal thing for me to do. I'm a technician, and if there is a technical problem, I fix it.
Then she said: "You're not from Scandinavia?!" No, I'm not. "Why do you ask?" "Because people here don't help each other."

So that is where perfect governmental care and social welfare lead: people no longer caring for each other, because the government is supposed to do that anyway.

And this can be viewed the same way technology assessment is done. It is true that technological achievements have an impact on society. It is equally true that social achievements have an impact on society.
 
My personal assessment of conversational (chat) AI, and also of image generative AI: It's mostly useless, and in particular very useless for the currently hotly contested application (web search). It is a technological novelty like the Rubik's cube, fun to play with, you can amaze your friends, and scare your enemies.
Probably, for now. But it will not stop here.

I didn't really understand why people would want to have computers, except the scientific and business people who put them to practical use, and the hackers who enjoy understanding how they work.
For anybody else, let's say for "ordinary people", they remain difficult to handle.

So this technology, having the system understand and answer natural language as if it were another person, fills an obvious gap.

Where will that lead? I don't know, but one likely prognosis is the Butlerian Jihad.
 
It's mostly useless,
Doesn't matter.

As long as it lets you fire people,
e.g. push the already customer-hostile no-support attitude even further towards keeping customers at a distance with BS,
this will come,
and people will be fired.

Microsoft advertises that its new programming IDE helps the programmer by using AI to produce source code.
Daily there are success stories about things AI has produced, such as code.
AI is sold on the promise that any computer now only needs to be trained by underqualified staff instead of being properly programmed.
The result:
Qualified programmers are fired, and people in so-called third-world countries are exploited even more.

If someone objects that this will not really work,
he may be right,
but he may also need a lesson in how our world actually works.

Decisions are not made by technicians, programmers, developers, engineers, or scientists,
but by businessmen, and businessmen only.
(Or sometimes by politicians, who mostly decide in favor of the most money, so businessmen again.)

Businessmen don't care whether something works.
They don't even care to understand it.
They simply ask:
Will it raise my profit?
(by selling it, or by reducing costs, e.g. by firing people)

If someone promises "yes",
then it's sold.

When it doesn't work and doesn't keep the promises,
they don't revisit their decision.
No. First they nail the seller down to keep his promises, no matter what.
And if that fails, they look for a specialist who simply knows which screw to turn to make it work.

Telling them, as a specialist, to stop that nonsense
because it had no chance of working in the first place,
explaining that systemic errors cannot be solved by any quick-and-dirty fix,
that this is exactly why there are engineers, who are not just trained on the job in a couple of hours,
whose purpose is not to have quick-and-dirty solutions ready on demand but to prevent systemic errors from the start:
all of that is wasted energy.
It simply will not be heard.

The damage will be done.

They believe.
They want to believe.
And because they believed in the first place
that they could make a lot of money without doing actual work,
they have to believe there will be a quick solution that solves all problems and brings the promised profit at no cost.
You cannot reason with belief.

I've seen it myself dozens of times:
they simply run from pillar to post
until it works.
And if it doesn't,
until the project or company runs out of money,
or some supervisor from the holding above (if there is one) stops the madness.

The longer this takes, and the more money has already been wasted,
the lower the chances of making it work.
Because acceptance of solid, well-engineered solutions drops to zero the very moment
the project's scheduled deadline is overrun.


The only thing you can do when you see this coming (e.g. in your own company):
get yourself a life jacket, look for the emergency exits, and get ready to eject.

Because the captain and the officers are already in the lifeboats at a safe distance when the wreck goes down,
still encouraging the crew to keep course and speed.
 

I agree with some of what you've said. Many businesses act in stupid ways. I've been in businesses like that: offered insights and watched as the captain headed right towards the iceberg proclaiming "We're unsinkable!". Those businesses will fail, but there's a minimal threshold of "it works well enough" that is tolerable in business and is profitable. It's very much like evolution. Evolution doesn't need to reach perfection; it just needs to be good enough to survive. So does AI. As long as AI can create messy things faster than humans that either 1. still work or 2. need minimal effort to make work, AI will succeed.

If developers are worried about AI taking their jobs, I would suggest they learn to utilize AI as a tool to complement their work. It won't replace everything a human can do, but it will lead to new avenues. When the camera was first becoming popular, a painter claimed it would kill the art because, and I'm paraphrasing, why would anyone want a painting of something when they can look at the real thing? Photography didn't kill painting. It opened new doors.

AI will become a tool like any other, used for both good and bad. AI has already been used to generate new drugs, speed up algorithmic processes, etc. The singularity, AGI, or whatever you want to call it, is a good thing. It won't be "the last invention", as I've heard it called, but it will be a groundbreaking one. I believe there will be a time before AI and a time after AI.
 
I didn't really understand why people would want to have computers
In short:
Because they were sold to them (Clifford Stoll, High-Tech Heretic: Reflections of a Computer Contrarian).
But above all, computers are colorful, flashy, blinky, noisy, and you can watch porn on them :cool:
 
I agree with some of what you've said.
Of course I put it a bit drastically, and not every businessman is a moron (otherwise our society would have been dead decades ago).
But it's hard to watch the same mistakes being made over and over again,
and so much energy, money, time, and skill, and so many people, wasted on this madness.

One need not argue about wages being too high when it's obvious where the money is really burned.

Whether AI becomes a tool like the others, we will see.
That depends on who handles it, how, and for what.

Handled the way the scientists who developed it meant it to be, sure.
They have been developing it since the 1950s.
Now computers are powerful and affordable enough to realize networks that can do something everybody can see and grasp.
Now it's at a stage where it can be sold.
So it has landed in the hands of businessmen, who as always want to stop investing and start extracting as much profit as quickly as possible.

What usable parts will be left,
what will just be another bursting bubble,
and who will profit, and who will suffer and pay for it,
cannot be said at the moment.

It's not real AI I'm actually worried about.
It's how people deal with it that worries me.

The problem is that we are, once again, not ready for such a big step.
And that always breaks far more crockery than necessary.

In the 1970s to 1990s we had a big jump in progress through automation: NC and CNC,
and besides that, much production moved to China and Eastern Europe.
As a result, the countries and societies of Western Europe and North America had to deal with lots of people no longer needed in production, and got high rates of unemployment, which still linger today.
(Sorry for becoming a bit political, but AI will become a very political and social issue very soon,
even while it could still be kept to the technical field, as the original idea of this thread may have been.)

As I already said in another post:
the worth of our money is represented by the amount in circulation.
The lower the wages, and the fewer people in work,
the less money is in circulation, and so the less it's worth.

In the 80s and 90s society was simply told that unemployment was anybody's own fault, and wouldn't have happened if the right skills had been learned in time.
That will not be sufficient anymore.
AI will be a far bigger jump in automation. That's what it's for.
And it will not play out over 30 years, but in a much shorter time.
And it will affect the whole world at once.
And it will also affect people with higher education.

And our society has no idea whatsoever how to deal with this,
because such ideas are not discussed.
We are only to look happily forward to this "big, fantastic step which will be a profit for all of us, only".

Everybody older than 30 has already heard this before,
and because of that experience is simply a bit sceptical about it.
 
Of course I put it a bit drastically, and not every businessman is a moron (otherwise our society would have been dead decades ago).
But it's hard to watch the same mistakes being made over and over again,
and so much energy, money, time, and skill, and so many people, wasted on this madness.

One need not argue about wages being too high when it's obvious where the money is really burned.
I agree. Many businesses definitely are wasteful. I've seen it first hand, but AI can help fix this. Many people, businessmen included, will trust whatever an AI tells them over a person, for the simple reason that most people see computers as infallible. That being said, as long as the AI spits out the right thing, it can be a great tool in this sense.

Don't get me started on wages lol. Minimum wage laws are the dumbest thing and many people are paid way too much for the work they do.
 
..., as long as the AI spits out the right thing then it can be a great tool in this sense.
As you might know already, it can always be disputed whether a thing is "the right thing". Even being "right" may depend on the color of the pills you have taken (i.e. red pills vs. blue pills, among others). Watch out for easily consumable narratives within distributed texts, like "for a better world ...".

A great tool for the (normally ugly) underperformers on visually driven social platforms seemed to be the Midjourney AI:


The Californian startup turned profitable fast, and its service is no longer available for free as of this month, after having banned the rendering of faces of the ever-smiling Xi Jinping.
 
As you might know already, it can always be disputed whether a thing is "the right thing". Even being "right" may depend on the color of the pills you have taken (i.e. red pills vs. blue pills, among others). Watch out for easily consumable narratives within distributed texts, like "for a better world ...".

A great tool for the (normally ugly) underperformers on visually driven social platforms seemed to be the Midjourney AI:


The Californian startup turned profitable fast, and its service is no longer available for free as of this month, after having banned the rendering of faces of the ever-smiling Xi Jinping.
You're correct that different people see things differently, but does everything have to be done "for a better world"? We've gotten to this point over time on the backs of some very selfish men who made great strides purely for self-gain, a side effect of which was a better world.

Once that service went pay-to-play, it opened the "for free" market up for others to jump in and offer the service. Although, keep in mind, nothing is ever truly free.
 
Many businesses definitely are wasteful
That's because if everything needed and useful is already being produced, but you still must have a growing economy, the only option left is wasteful business: wasting resources, companies, people, and ideas on useless products, or better yet on products that are completely useless.
That's what the whole stock market is about: dealing and making money without any real business whatsoever 😁
but AI can help fix this.
I don't think so.
AI cannot produce something really new. AI is not truly creative, so it cannot solve real problems.

All AI does is scan what already exists for patterns, guided by given patterns, to produce new patterns made of existing ones.
If there is something wrong or stupid in that material, it becomes part of the new pattern.
And it will be hard to filter out the rubbish and to distinguish the genuine from the crap, simply because of the large amount of stuff it produces in a short time.

Or to put it simply:
AI will be a good tool for walking known paths.
Finding new paths is beyond it.
For that you need natural intelligence.

The problem is that many people will neither understand this nor be capable of seeing the difference.

Many people, businessmen included, will go trust whatever an AI tells them over a person. For the simple fact that most people see computers as infallible.

That's what I am worried about.

I once tutored a student in math.
When I asked her "what's 3 times 4?",
she used her calculator and answered: "3.1415...".
Me: "That's wrong. You hit the wrong button, and I know which one."
She: "Doesn't matter. The computer says so." And she wrote down the value.

That is what happens when what was meant to be assistance becomes a substitute.

When the first car navigation systems (handheld PCs) came out in the late 90s,
I knew several people who got lost on a route they had been driving for twenty years,
just because they believed what the computer told them.

People simply do not grasp that someone can write plain rubbish on a website.
Their belief that anything written must be true runs as deep as "a computer makes no mistakes",
which only ever referred to calculations, used in an assisting way.
 
I don't think so.
AI cannot produce something really new. AI is not truly creative, so it cannot solve real problems.

All AI does is scan what already exists for patterns, guided by given patterns, to produce new patterns made of existing ones.
If there is something wrong or stupid in that material, it becomes part of the new pattern.
And it will be hard to filter out the rubbish and to distinguish the genuine from the crap, simply because of the large amount of stuff it produces in a short time.
AI has already created new things, such as:
~40,000 new chemical weapons.
New medicines.
New designer drugs.


That's because if everything needed and useful is already being produced, but you still must have a growing economy, the only option left is wasteful business: wasting resources, companies, people, and ideas on useless products, or better yet on products that are completely useless.
That's what the whole stock market is about: dealing and making money without any real business whatsoever 😁
Sometimes creating something from scratch is pure wastefulness. Other times it sparks innovation.

That's what I am worried about.
Which is why developers, or anyone else worried about it, should learn to integrate it into their workflow or use it as a framework.

Here's an example of how I've used AI to help. I was working on a project and needed a very large Python dictionary created. I could have wasted the time writing it up myself; instead I told the AI to write it for me. In seconds I had a huge dictionary completed, instead of the 20-30 minutes it would have taken me. I've also used it to explain things instead of having to "read the docs". It's also much friendlier and more to the point in its responses than even some administrators on these forums. Not naming any names.
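For context, the actual project and dictionary aren't shown in this thread, so the table below is a made-up stand-in. It sketches the kind of large, mechanical lookup dictionary one might delegate, whether to an AI or to a short script:

```python
# Hypothetical stand-in for the "very large dictionary" described above:
# a lookup table mapping every byte value (0-255) to fixed-width hex and
# binary string representations. A dict comprehension generates all 256
# entries in one expression instead of typing them out by hand.
byte_table = {
    n: {"hex": f"0x{n:02X}", "bin": f"{n:08b}"}
    for n in range(256)
}

print(len(byte_table))         # 256
print(byte_table[255]["hex"])  # 0xFF
print(byte_table[10]["bin"])   # 00001010
```

Whichever way such a table gets produced, the point of the anecdote stands: mechanical boilerplate is exactly the kind of work worth delegating.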
 
driker
One of your cites comes from rt.com, which is a well-known site for spreading Russian propaganda, disinformation, and conspiracy theories. Please keep forums.freebsd.org clean of such sources.
 
AI has already created new things. Such as:
~40,000 new chemical weapons.
Fantastic.
That's just what we need: chemical weapons and more drugs.
I have to admit, it's one way to solve all our problems.
But not a way I intended to like. :-/

*sigh*
Let me put it more simply:
If you have a bunch of Lego(c) bricks, build some things from them,
let the AI watch, and train it on that,
then you may later tell it "build me ..." and with high probability it will build something you expected or wanted,
but never for certain.

Of course this will be a great help in not having to do all the building yourself anymore.
But it will still be Lego(c) bricks only.
All you get are new variations of things already seen that can be built with Lego(c),
but nothing really new.

And besides that, my question was:
What are we going to do when all the Lego(c) is built by machine?
Keep in mind that to any answer you may also reply: "That, too, can be done by AI."
So, what's left?
How shall our society deal with it if not only Lego(c) builders are no longer needed,
but no builders, programmers, designers, ... at all?

I know the answer, if this happens unprepared:
weapons & drugs

Do we want this?
Not me.
 
What those in power are actually afraid of is that such an AI might tell you something that is not in line with the current newspeak, the current stance of propaganda.
And that is indeed a problem. While media that do not adhere to the ministry of truth can simply be discredited or banned, a publicly accessible AI cannot.
Uhh... ChatGPT is already banned in Italy and China...

And let's not forget: to even have this very discussion, we're all using something that was invented by the US military and paid for by Al Gore.
 
That's what we need, chemical weapons, and more drugs.
I have to admit: It's a way to solve all our problems.
But it was not the way I intended to like. :-/
By identifying the chemical weapons, we can work on antidotes to them. That way, if someone else develops the weapons, we can be prepared. Also, some chemo drugs are actually chemical weapons. Just because something has the word "weapon" attached to it doesn't mean it has no other uses.
Of course this will be a great help in not having to do all the building yourself anymore.
But it will still be Lego(c) bricks only.
All you get are new variations of things already seen that can be built with Lego(c),
but nothing really new.

And besides that, my question was:
What are we going to do when all the Lego(c) is built by machine?
Keep in mind that to any answer you may also reply: "That, too, can be done by AI."
So, what's left?
How shall our society deal with it if not only Lego(c) builders are no longer needed,
but no builders, programmers, designers, ... at all?
Humans are ingenious. We, although not necessarily every human, are good at finding new ways to use existing technology. We will continue to improve and combine things to make new things.
I know the answer, if this happens unprepared:
weapons & drugs

Do we want this?
Not me.
Drugs can have good uses. They're finding that substances previously used only recreationally do have actual medical benefits.
And let's not forget the people who don't know that this is not like the movies, and that we don't have robots like those, with a conscience and a better-than-human brain... like this poor man:
Widow Says Man Died by Suicide After Talking to AI Chatbot
All this is madness, especially people wanting to use this technology immediately, and for purposes it is not ready for yet.
What's the rush?
Not to sound insensitive, because I'm not, but without seeing the chat logs it's hard to judge what exactly the chatbot said to make him act. Also, many countries are coming to terms with the "right to die". So I guess that depends where you are.
 
Uhh... ChatGPT is already banned in Italy and China...

And let's not forget, we're all using something that was invented by US military and paid for by Al Gore - to even have this very discussion.
Countries shouldn't be banning things like this.
 
By identifying the chemical weapons we can work on antidotes to them. That way if someone else develops the weapons we can be prepared. Also, some chemo drugs are actually chemical weapons. Just because it has the word weapon attached to it doesn't mean that it doesn't have other uses.
Use case since ancient times: venomous snakes like king cobras...

Not to sound insensitive, because I'm not, but without seeing the chat logs it's hard to judge what exactly the chatbot said to make him act. Also, many countries are coming to terms with the "right to die". So I guess that depends where you are.
Yeah, I don't think chatbots are exactly capable of bullying and gaslighting (fortunately). I also don't think they're capable of actually 'wanting' things. Yes, you can (with some effort) prompt ChatGPT into pretending to be a human confessing love to you, but you're still the one with the initiative. If you want something badly enough, you go ahead and do it. That's something ChatGPT can't do, and that's why it doesn't pass the Turing test.

Countries shouldn't be banning things like this.
Leaders have a population of morons that need to be kept alive. It's all about statistics and reasons for keeping your numbers up. Actual, personal connections work differently than you might think - and yes, those connections do result in nepotism.
 
Yeah, I don't think chatbots are exactly capable of bullying and gaslighting (fortunately). I also don't think they're capable of actually 'wanting' things. Yes, you can (with some effort) prompt ChatGPT into pretending to be a human confessing love to you, but you're still the one with the initiative. If you want something badly enough, you go ahead and do it. That's something ChatGPT can't do, and that's why it doesn't pass the Turing test.
Exactly. It has to be initiated. He definitely could have stepped away; instead he chose to engage. It's the same way I feel about pretty much everything these days: people feel the need to continuously engage with things they know are bad for them or that they do not like. When I was younger, if you did not like something, you didn't engage with it. You went off and did your own thing.

Leaders have a population of morons that need to be kept alive. It's all about statistics and reasons for keeping your numbers up. Actual, personal connections work differently than you might think - and yes, those connections do result in nepotism.
It's not the government's job to protect people from themselves. People have a responsibility to ensure their own survival when it comes to the basics. The government's job is to protect the individual from others; food, shelter, and the like are not the government's job. If someone is capable yet too moronic or lazy to survive, that isn't society's fault, and, if anything, it's a disservice to the person and to society in general to make society bear that weight. Have you read "The Law" by Frédéric Bastiat?

Use case since ancient times: Poisonous snakes like King cobras...
Not sure if you're agreeing with me here lol
 
It's not the government's job to protect people from themselves.
Why not?

As for counter-examples, I'd like to offer up state-funded health care, police (for public safety), public education (as opposed to home schooling or private schools), and food safety regulations. Heck, even building codes are something the government is supposed to enforce. And somebody needs to organize all that, and then convince the leaders to pony up the money to pay for it.
 
By identifying the chemical weapons we can work on antidotes to them.
Inventing chemical weapons just to have a reason to produce antidotes for them makes no sense whatsoever.

Drugs can have good uses.
Not "can have": they do have.
But creating 40k new ones is only interesting for overloading the system.

Countries shouldn't be banning things like this.
That's what countries are for: primarily, protecting their people.
That's why, for example, drugs and weapons are banned in most countries:
to protect the people.
In contrast to, e.g., the USA, we in Europe can move freely on any street
and not worry about being shot by gangsters or by paranoid policemen,
who must assume everyone is a heavily armed psychopath and open fire immediately with automatic weapons.

We've learned that first allowing things and then struggling to regulate or even ban them afterwards, once we saw the downsides,
has led to most of the problems we have today.
It's far better to do it the other way around:
first prove that the harm done is reasonable for the benefits,
then define regulations, then allow it.
In that respect I find Italy's and France's attitude very exemplary and mature.

Sorry,
but I'm starting to think you're an artificial intelligence yourself.
Are you, or can you prove you're not?
 