How is your prompt engineering?

There are AI-powered art services that go completely the other way, and porn producers are complaining that it's saturating the market and pushing them out of their jobs.
I wouldn't mind LLMs destroying OnlyFans & the pr0n industry. And we've had slop in popular culture since at least the late 2000s, well before AI.

People will eventually appreciate man-made stuff again, and the real thing rather than just bytes. When I settle down I plan to own paper books and vinyl records.
 
Last time I watched porn, and a lot of it is created in the U.S., I fell asleep because it was so boooring.

Better:
https://en.wikipedia.org/wiki/Caligula_(film)
https://en.wikipedia.org/wiki/Emmanuelle
https://en.wikipedia.org/wiki/Story_of_O_(film)
I'm in the US, and I'm under the impression that a lot of it originates outside of the US.

In the end, 'prompt engineering' is not that different from 'think before you speak'. In the early days of ChatGPT, someone did manage to engineer a few prompts to get ChatGPT to self-identify as a woman, even though in reality, ChatGPT is a freaking pile of poisonous rare earths, metals, and plastics, and has a bit of electricity injected into it to make many parts move.
 
This tip will be at least of interest to the OP cracauer@ if he doesn't know it and is still using LLMs to assist him in writing code. It's very useful.

When you ask ChatGPT (or any other LLM) to generate some code, always do two passes:

1) Ask it to generate the code.

2) Immediately after, ask it to check the code it has just generated and correct it/improve it accordingly.

This works especially well if you have access to a deliberative model (like the "Thinking" option of ChatGPT).
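For what it's worth, the same two-pass idea is easy to automate if you're calling the API rather than the chat UI. Here is a minimal Python sketch using the official openai client; the model name, prompts, and example task are just placeholders, not anything from the thread:

# Two-pass code generation: generate first, then ask the model to review its own output.
# Assumes the `openai` Python package (v1 API) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder -- swap in whichever model you actually use

def two_pass(task: str) -> str:
    # Pass 1: ask for the code.
    history = [{"role": "user", "content": f"Write code for this task:\n{task}"}]
    first = client.chat.completions.create(model=MODEL, messages=history)
    draft = first.choices[0].message.content

    # Pass 2: keep the draft in the conversation and ask the model to check and
    # improve the code it just produced.
    history.append({"role": "assistant", "content": draft})
    history.append({"role": "user",
                    "content": "Check the code you just generated for bugs and edge "
                               "cases, and return a corrected/improved version."})
    second = client.chat.completions.create(model=MODEL, messages=history)
    return second.choices[0].message.content

print(two_pass("a FreeBSD sh script that rotates logs in /var/log/myapp"))

Nothing clever going on, it just scripts the "generate, then ask it to double-check" routine so you don't have to do it by hand every time.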

I myself don't code currently. I only create FreeBSD sh scripts to automate things occasionally.
 
Since LLMs were brought up, and I mentioned my disdain for them elsewhere: a few days ago I wanted to remove the old Nvidia 9600GT graphics card I've had in my system forever and use the built-in Intel graphics of my i7 processor instead. I was wondering if it was possible to keep both in the system, but I was too busy to do any real research on it, so I thought I'd try Grok and ChatGPT.

That was a disaster of epic proportions, solved by using my own brain and just pulling the Nvidia card and saving it for another day.
 

Yes, it's another thing we have to optimize for. The bills come in by the number of tokens, and that is very hard to predict.

Not to mention misunderstandings like this, where Anthropic cuts people off sooner than they expected.
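You can at least forecast the input side before sending a prompt by counting tokens locally. A rough Python sketch with tiktoken; the price constant is a made-up placeholder, so check your provider's current rate card, and the output tokens remain the genuinely unpredictable part:

# Rough cost estimate for the input side of a request, before sending it.
import tiktoken

PRICE_PER_M_INPUT_TOKENS = 3.00  # USD per million input tokens -- placeholder, not a real quote

def estimate_input_cost(prompt: str, model: str = "gpt-4o") -> float:
    enc = tiktoken.encoding_for_model(model)   # tokenizer matching the model family
    n_tokens = len(enc.encode(prompt))
    return n_tokens / 1_000_000 * PRICE_PER_M_INPUT_TOKENS

print(estimate_input_cost("Write a FreeBSD sh script that rotates logs."))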

 