I Tried "Vibecoding"

The thing about ChatGPT, JohnK, is that it's a tool. Under your instructions, it would probably be capable of generating code that meets your standards. As someone said before in this thread, if the one at the helm is an experienced programmer, the result is much better.
 
I used duck.ai to generate a bunch of scripts that I would be unable to write myself, could not afford to pay someone to make, and that nobody would have coded for free. That was a huge relief for me. But even though I can't code, I could tell the scripts were excessively redundant; they worked eventually, but it was better to look the other way...
 
[No sarcasm]. I never had any doubt, JohnK, about you being a much better programmer than ChatGPT, as you have just demonstrated.

I appreciate that, but that's not really what I was getting at. ...short version: I don't think ChatGPT can come up with the logic behind the `aif()` above. And yes, to a small extent, I am truly wondering whether my logic, or its, is flawed (I had 'no retort' because that entire response was contradictory and mostly incorrect), because the pinch points it laid out are not the ones I would have chosen. And I certainly wouldn't have made code to write code (but that's a guess/feeling at this point, because I only really saw the config-file logic).

For the record: I am not a programmer (I work in an entirely different field).
 
I've changed my mind. I will not make any more tests. This topic bores me. Everyone has their mind already set and I'm not a proponent of any sh*t in particular. My experience has been explained; the scripts have been shared. Period.
 
Sorry to hear that.
I'm finding this topic fascinating, and my mind hasn't been 'made up' either. I guess I'll have to conduct my own tests then, although I, not being a current or former professional programmer, cannot get it to do anything logical. So, even if it's from a perspective of "self-validation", I want to see whether I can get it to code up something better than I can. ...I am not a programmer, I don't get paid a bunch of money, I have a "free" education (i.e. based on whatever free information I could find), and if this tool can be used to further my education, then I'm all for it (I just need to know how).
 

AI has advantages.
AI has disadvantages. And flaws.
Above all, AI increases production output by orders of magnitude, while quality is not raised along with it.

And it does nothing really creatively new.
All it does is find patterns in already-produced material and reorganize those patterns into other patterns, by rules that are themselves known patterns. For many routine things that's completely sufficient: assigning boring routine work to a machine is what it was originally meant for, like any other kind of automation.
But this is very boring for humans interested in anything actually new, particularly creative humans interested in producing creatively new work.

Also, AI's product quality needs to be controlled very carefully. The two largest flaws of AI are inherent to its very system:
1. While it avoids already-known errors, which makes it look so good, it produces completely new ones instead, sometimes really tricky ones that are very hard to find. As an experienced programmer you know it's harder to find errors in something you believe must be flawless than in something you already know is buggy. And while a program's source code can at least be checked against clear rules, many other things AI produces are neither created nor tested with expertise, yet are released into the wild anyway without any quality approval at all.

2. The speed of production has outpaced quality control faster than anybody can keep up with. Gigantic heaps of useless garbage can be produced in no time, and somebody still has to pick out the useful parts.

So, to use AI right, debugging and quality control, which still have to be done by conventional methods and therefore cost almost the same time and effort as without AI, either need to be scaled up first to keep pace with the speed of output, or new output has to wait until the previous batch is tested and quality-approved.
Otherwise the place piles up with trash.
Anybody who has ever lived in shared accommodation knows: everybody likes to be the kitchen's chef, whirling flaming frying pans and impressing others with their cooking. But afterwards nobody is there to clean up the kitchen.
The producers are in the majority, and now they have a tool in their paws to produce even larger amounts of stuff in even shorter time, while the people who clean up the mess afterwards have neither grown in number nor got better tools.

Plus, the cost of quality control is not a negligible, tiny fraction of the benefits AI may bring. Not seldom, correcting some piece of AI output costs more than doing it all yourself without AI in the first place. So it always needs to be weighed up, and that weighing is itself another task that realistically has to be added to the bill and deducted from AI's benefits. Which makes AI, to anybody who has worked with it and tested it for a while, not nearly as dazzlingly shiny as it seems at first glance, or as it is hyped in order to sell a new technology with billions invested.

Not a quarter goes by without the topic of AI being discussed here. Most people here don't "just refuse the topic out of prejudice"; they (also) already have actual experience with it. Just because they don't share the same enthusiasm doesn't mean they are on principle completely against this technology. Mostly it's just that they already have enough experience to see that, like any other new thing, it has not only benefits but also downsides - they see it more realistically. They already know how they use it and for what, or not. They don't need to be told. And above all, most here are simply tired of the topic of AI - not of AI per se, but of discussing it again, and again, and again.

However, when I see "made by AI" written over it, or can even just smell that a text was written by (or with the help of) AI, I immediately stop reading. I lose interest instantly.
Why should I read a text by somebody obviously too lazy to write it themselves?
Yes, writing a text takes ten times as long as reading it - at least if you try to make it readable and interesting for the reader. AI does not change that. It produces texts far faster than anybody can read them, but you still need the same time afterwards to make them readable for your readers anyway. I won't waste my lifetime reading garbage presented by somebody who does not care about the readers.
In particular, I won't waste my time doing quality control on other people's AI output.
Anybody may use AI, or not. But if you do, do it for yourself only. It's very personal.
You don't present its output to others - especially not as if it were something they've never seen, or were incapable of producing themselves. That's another downside of AI: the creative value of its products is, as I said above, none. Worthless. It's the tool that delivers the product. Everybody has access to this tool and its sources, so everybody can do it. And what everybody can do is nothing special, not worth appreciating, not even worth talking about.
Presenting AI output is like another kind of "Here, let me google that for you!"
To me it's like a little child crying from the toilet:
"Mommy, Mommy! Look, what I've made!!"
"That's nice, pumpkin. Please, just flush now."
 
My personal suspicion is that AI will remain a major tool for debugging, vulnerability identification and things of that nature mostly as is, but hopefully in a less resource intensive manner.

On the code generation side, I could see a class of even higher-level programming languages explicitly intended to constrain the AI to roughly what you want, generate the code, and then go through further constrain-the-code/tweak cycles. The current methods just aren't viable long term, regardless of how good the code is. It takes an ungodly amount of money and resources to train one of these AIs, and they still have essentially no understanding of what's going on; they're just reacting to the feedback they've been given, trained on code that already exists.

Even if that weren't the case, you'd still wind up short of anybody with the coding experience to identify broken programs, or programs with one poison pill or another. Not to mention all the recent issues with prompt injection. Personally, I'm more of a hobbyist, but I don't see much point in going beyond somewhat more advanced autocomplete features. I'm fine with the IDE doing things like setting up getters and setters in languages that use them. Similarly, there's little harm in it filling in boilerplate like unimplemented method stubs, or renaming all the variables when I've realized that a name I've chosen is stupid and want a better one.
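For what it's worth, the kind of boilerplate meant here is the sort an IDE can stamp out mechanically. A minimal Java sketch (class and field names are just illustrative, not from any post above):

```java
// A plain data holder. The accessor bodies below are exactly the kind of
// mechanical boilerplate an IDE generates from the two field declarations.
public class Point {
    private int x;
    private int y;

    public int getX() { return x; }
    public void setX(int x) { this.x = x; }

    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}
```

Nothing here requires judgment; it follows directly from the field names and types, which is why it's a safe thing to delegate to a tool.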
 
Just a random selection of headlines from today. Seems to be a hot topic for some reason. I suppose a $40 trillion stock market bubble will do that to you.

"More than half of CEOs report seeing neither increased revenue nor decreased costs from AI, despite massive investments in the technology, according to a PwC survey of 4,454 business leaders."

"It's only a matter of time before AI-generated vulnerabilities become widespread. Few organizations will ever admit that a weakness came from their use of AI, but we suspect it's already happening. This won't be the last you hear of it — of that much we're sure."

"Making money isn't everything ... at least not when it comes to AI. Research from professional services firm Deloitte shows that, for most companies, adopting AI tools hasn't helped the bottom line at all."

“We’re close to zero job growth. That’s not a healthy labor market,” Federal Reserve governor Christopher Waller said at the Yale summit. “When I go around and talk to CEOs around the country, everybody’s telling me, ‘Look, we’re not hiring because we’re waiting to try to figure out what happens with AI. What jobs can we replace? What jobs do we don’t?’”

This is a kind of inverse operation... looks like C-suite jobs are good candidates for AI replacement, who knew?

[attached image]


Ah yes... and last but not least, the famous "K-shaped economy curve"

The red line is essentially big tech stock prices. The blue line is jobs, for want of a better word. The vertical black line is when chatgpt was launched. Note that the red and blue lines were tracking each other, more-or-less in sync, historically for the last 20 years; in other words, jobs broadly tracked investment; until they diverged at the vertical black line, forming the 'K'. The article's authors might have finessed their description a bit so it doesn't read quite as bluntly as I have put it; but you have to read between the lines.

[attached chart: the K-shaped economy curve]

Of course if they do actually get AI to work, that divergence is going to accelerate and the K we can see here is going to turn into a very large K indeed; which is what they are all betting on, and hence is why investors are still buying stocks in AI tech companies even at the current astronomical p/e ratios. Everyone is buying, because of FOMO; if AI works out, being out of the market will be worse than being in. And all those smart guys are saying it will be ready real soon now; and they've got PhD's, and everything.

I guess it's a case of "watch this space". Better keep fingers crossed it doesn't go "pop", or I'll have to post that Hyuna video again.
 
Without finding a definition, I think it's the reinvention of existing things. Have a program develop the obvious for you and make it look special. Tech ambition is a joke if you plagiarize everything without adding something new.
 