I Tried "Vibecoding"

The thing about ChatGPT, JohnK, is that it's a tool. Under your instructions, it would probably be capable of generating code that meets your standards. As someone said before in this thread, if the one at the helm is an experienced programmer, the result is much better.
 
I used duck.ai to generate a bunch of scripts that I would have been unable to write myself, unable to afford to pay someone to write, and that nobody would have coded for free. That was a huge relief for me. But even though I can't code, I could tell the scripts were excessively redundant; they eventually worked, but it was better not to look too closely...
 
[No sarcasm]. I never had any doubt, JohnK, about you being a much better programmer than ChatGPT, as you have just demonstrated.

I appreciate that, but that's not really what I was getting at. ...short version: I don't think ChatGPT can come up with the logic behind the `aif()` above. And yes--to a small degree--I am truly wondering if my logic--or its--is flawed (I had 'no retort' because that entire response was contradictory and mostly incorrect), because the pinch points it laid out are not the ones I would have identified. And I certainly wouldn't have made code to write code (but that's a guess/feeling at this point, because I only really saw the config file logic).

For the record: I am not a programmer (I work in an entirely different field).
 
I've changed my mind. I will not make any more tests. This topic bores me. Everyone has their mind already set and I'm not a proponent of any sh*t in particular. My experience has been explained; the scripts have been shared. Period.
 
Sorry to hear that.
I'm finding this topic fascinating, and my mind hasn't been 'made up' either. I guess I'll have to conduct my own tests then, but I--being neither a current nor a former professional programmer--cannot get it to do anything logical. So, even if it's from a perspective of "self-validation", I want to see if I can get it to code up something better than I can. ...I am not a programmer, I don't get paid a bunch of money, and I have a "free" education (-i.e. based on whatever free information I could find), so if this tool can be used to further my education, then I'm all for it (I just need to know how).
 
I've changed my mind. I will not make any more tests. This topic bores me. Everyone has their mind already set and I'm not a proponent of any sh*t in particular. My experience has been explained; the scripts have been shared. Period.

AI has advantages.
AI has disadvantages. And flaws.
Above all, AI increases production output by orders of magnitude, while quality is not raised along with it.

And it does nothing genuinely creative or new.
All it does is find patterns in already-produced material and reorganize those patterns into other patterns, by rules that are themselves known patterns. For many routine things that's completely sufficient: assigning boring routine work to a machine is what it was originally meant for, like any other kind of automation.
But this is very boring for humans interested in anything actually new, particularly creative humans interested in producing creatively new things.

Also, the quality of AI's output needs to be controlled very carefully. The two largest flaws of AI are inherent to its very system:
1. While it avoids already-known errors, which makes it look so impressive, it produces completely new ones instead; sometimes really tricky ones that are very hard to find. As an experienced programmer knows, it's harder to find errors in something you believe must be flawless than in something you already know is buggy. A program's source code can at least be checked by well-defined rules; alas, many other things AI produces are neither produced nor tested with expertise, but are released into the wild anyway, without any quality approval at all.

2. The speed of production output has outrun quality control faster than anybody can keep up with in picking out the useful material, while gigantic heaps of useless garbage can be produced in no time.

So, to use AI properly, debugging and quality control--which still need to be done by conventional methods, and so cost almost the same time and effort as without AI--must either be scaled up first to keep pace with the speed of output, or new output has to wait until the previous batch has been tested and approved.
Otherwise the trash just piles up.
Anybody who has ever lived in shared accommodation knows: everybody likes to be the kitchen's chef, whirling flaming frying pans and impressing others with their cooking. But afterwards nobody is there to clean up the kitchen.
The producers are in the majority, and now they have a tool in their paws to produce even larger amounts of stuff in even shorter times, while the people who clean up the mess afterwards have neither grown in number nor gotten better tools.

Plus, the cost of quality control is not a negligible, tiny fraction of the benefits AI may bring. Not seldom, the cost of correcting some AI output is even larger than doing it all yourself without AI in the first place. So it always needs to be weighed up, which is yet another task that realistically has to be added to the bill and deducted from AI's benefits. That makes AI, to anybody who has worked with and tested it for a while, not nearly as dazzlingly shiny as it seems at first glance, or as it is hyped to be in order to sell a new technology with billions invested.

Hardly a quarter goes by without the topic of AI being discussed here. Most here don't "just refuse the topic out of prejudice"; they (also) already have actual experience with it. Just because they don't share the same enthusiasm doesn't mean they are against this technology on principle. Mostly it's just that they already have enough experience to see that, like any other new thing, it has not only benefits but also downsides--to see it more realistically. They already know how they use it, for what, or not at all. They don't need to be told. And above all, most here are simply tired of the topic of AI--not tired of AI per se, but of discussing it again, and again, and again.

However,
when I see "made by AI" written over something, or even smell that a text was written by (or with the help of) AI, I immediately stop reading. I lose interest instantly.
Why should I read a text by somebody obviously too lazy to write it themselves?
Yes, writing a text takes ten times as long as reading it--at least when you try to make it readable and interesting for the reader. AI does not change that. It produces texts far faster than anybody can read them, but you still need the same time afterwards to make them readable for your readers anyway. I don't waste my lifetime reading garbage presented by somebody who does not care about the readers.
In particular, I don't waste my time doing quality control on somebody else's AI output.
Anybody may, or may not, use AI. But if you do, you do it for yourself only. It's very personal.
You don't present its output to others--especially not as if it were something they've never seen or are incapable of producing themselves. That's another downside of AI: the creative value of its products is, as I already said above, zero. Worthless. It's the tool that delivers the product. Everybody can have access to this tool and its sources, so everybody can do it. And what everybody can do is nothing special, not worth appreciating, not even worth talking about.
Presenting AI output is like another kind of "Here, let me google that for you!"
To me it's like a little child crying from the toilet:
"Mommy, Mommy! Look, what I've made!!"
"That's nice, pumpkin. Please, just flush now."
 
My personal suspicion is that AI will remain a major tool for debugging, vulnerability identification and things of that nature mostly as is, but hopefully in a less resource intensive manner.

On the code generation side, I could see a class of even higher-level programming languages that are explicitly intended to constrain the AI to roughly what you want, generate code, and then iterate through further constrain/tweak cycles. The current methods just are not viable long term, regardless of how good the code is. It takes an ungodly amount of money and resources to train one of these AIs, and they still have essentially no understanding of what's going on; they're just reacting to the feedback they've been given, trained on code that already exists.

Even if that weren't all the case, you'd wind up with issues in terms of having anybody at all with the coding experience to identify broken programs, or programs with one poison pill or another. Not to mention all the recent issues with prompt injection. Personally, I'm more of a hobbyist, but I don't see much point in going beyond somewhat more advanced autocomplete features. I'm fine with the IDE doing things like setting up getters and setters when the language uses them. Similarly, there's little harm in having it put in boilerplate like unimplemented method stubs, or renaming all the variables when I've realized the name I chose is stupid and want a better one.
 
Just a random selection of headlines from today. Seems to be a hot topic for some reason. I suppose a $40 trillion stock market bubble will do that to you.

"More than half of CEOs report seeing neither increased revenue nor decreased costs from AI, despite massive investments in the technology, according to a PwC survey of 4,454 business leaders."

"It's only a matter of time before AI-generated vulnerabilities become widespread. Few organizations will ever admit that a weakness came from their use of AI, but we suspect it's already happening. This won't be the last you hear of it — of that much we're sure."

"Making money isn't everything ... at least not when it comes to AI. Research from professional services firm Deloitte shows that, for most companies, adopting AI tools hasn't helped the bottom line at all."

“We’re close to zero job growth. That’s not a healthy labor market,” Federal Reserve governor Christopher Waller said at the Yale summit. “When I go around and talk to CEOs around the country, everybody’s telling me, ‘Look, we’re not hiring because we’re waiting to try to figure out what happens with AI. What jobs can we replace? What jobs do we don’t?’”

This is a kind of inverse operation... looks like C-suite jobs are good candidates for AI replacement, who knew?

[attached chart]


Ah yes... and last but not least, the famous "K-shaped economy curve"

The red line is essentially big tech stock prices. The blue line is jobs, for want of a better word. The vertical black line is when chatgpt was launched. Note that the red and blue lines were tracking each other, more-or-less in sync, historically for the last 20 years; in other words, jobs broadly tracked investment; until they diverged at the vertical black line, forming the 'K'. The article's authors might have finessed their description a bit so it doesn't read quite as bluntly as I have put it; but you have to read between the lines.

[attached chart: the K-shaped economy curve]

Of course if they do actually get AI to work, that divergence is going to accelerate and the K we can see here is going to turn into a very large K indeed; which is what they are all betting on, and hence is why investors are still buying stocks in AI tech companies even at the current astronomical p/e ratios. Everyone is buying, because of FOMO; if AI works out, being out of the market will be worse than being in. And all those smart guys are saying it will be ready real soon now; and they've got PhD's, and everything.

I guess it's a case of "watch this space". Better keep fingers crossed it doesn't go "pop", or I'll have to post that Hyuna video again.
 
Without finding a formal definition, I think it's the reinvention of existing things: have a program develop the obvious for you and make it look special. Tech ambition becomes a joke if you plagiarize everything without adding something new.
 
JohnK said:
What do you mean? I may not know what 'vibe coding' is then.

I think "vibe coding" generally means using AI to create an app, without touching code. You interact with the AI and it writes, builds, tests, and does everything else (these days it can also deploy, debug, and a lot more).

IME the generated code is usually trash unless you are very specific, intentional and thorough with your prompts. That said, it can be functional trash that 100% works correctly. AI isn't (yet) very good at applying wisdom and generating clean, maintainable code. But then again, does it need to? You can tell it to achieve a task (e.g. "add X feature and make sure all tests pass") and it could churn away at it non-stop for days until it works, however horrible it may be inside.

I don't think chatcpt can come with the logic behind the `aif()` above.

Maybe I am misunderstanding what you mean, but this sort of stuff is trivial for AI:

Code:
prompt:

> can you implement this in sh (not bash):
> # aif --
> # Anaphoric if.
> # This function will check an expr is not NULL or "-" before returned,
> # otherwise an iffalse value is returned.
> # EX
> # var=$(aif $(some_expr) 7)

response:

> Yep — totally doable in pure POSIX sh (no bashisms).
[...]
> aif() {
>     expr=$1
>     iffalse=$2
>
>     if [ -n "$expr" ] && [ "$expr" != "-" ]; then
>         printf '%s\n' "$expr"
>     else
>         printf '%s\n' "$iffalse"
>     fi
> }
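
For anyone who wants to try it, here is the same function as a standalone, runnable snippet with a few example calls (same behavior as the quoted response; the example values are made up):

```shell
# aif -- "anaphoric if": print $1 unless it is empty or "-", else print $2.
aif() {
    expr=$1
    iffalse=$2

    if [ -n "$expr" ] && [ "$expr" != "-" ]; then
        printf '%s\n' "$expr"
    else
        printf '%s\n' "$iffalse"
    fi
}

aif "/tmp/some.path" 7   # prints /tmp/some.path
aif "" 7                 # prints 7 (empty value falls through)
aif "-" 7                # prints 7 ("-" is treated as unset)
```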



Professionally I find way more success using it as a companion still, asking for certain things and reviewing/tweaking the output, but for throwaway or proof-of-concept code, letting the AI run free (vibing) can save a bunch of time. However, usefulness of AI depends very much on what exactly you do--the type of apps/code you work on, language popularity, the tools available, environment within which you work and the code runs...you can get much more value out of AI in some scenarios than others.

Personally, I think the models need way more context than they currently have, and time to evolve to use it correctly. Most are around 300k tokens I think. Gemini is better in this area with (up to) 1M. Having more context should allow the models to get even better. Supermaven was an AI tool/plugin specifically for auto-completion that had 1M context and it was great (until it got bought and locked behind Cursor, IIRC). Once the AI can reason about a whole codebase, best practices, languages, APIs, libraries, etc., it should be able to make much better output, I think.
 
I think "vibe coding" generally means using AI to create an app, without touching code. [...] Maybe I am misunderstanding what you mean, but this sort of stuff is trivial for AI: [...]

Generally speaking:
aif() logic works like this (in pseudo-ish code):

1) set a var (with default, for example).
2) get a var (from user or another program):
3) check that get'ed value for validity.
4) if that get'ed value is not valid, use the original set'ed value.

So, in this case of a user-controlled config file (which we typically treat as unhygienic or potentially hostile):

key=value

if in step #2 our program obtains the value from the user-controlled config file, we as programmers can assume 'value' is correct (e.g., "a valid path"), but what if it's not? Of course, we can just let our program toss a wobbly, but that's not very useful. There are common patterns we can employ (like a simple conditional), and this construct is the only logic I think the AI can produce (because it may not care if a pattern is repeated many times and/or is inefficient to maintain).

Code:
set_value: default.path
get_value: config.path

if ( ! get_value ) {
    do_something: set_value
  else
    do_something: get_value
 }

-i.e. not this more concise (and, I think, more professional) approach:

Code:
set_value: default.path
do_something: (aif (get_value: config.path), set_value)
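
To make the comparison concrete, here's a rough sh sketch of both styles (the names `read_config`, `myapp.conf`, and `default_path` are made up for illustration, not from the shared scripts):

```shell
# Hypothetical helper: print the value for a key in a key=value config
# file, or print nothing if the key (or the file) is missing.
read_config() {
    sed -n "s/^$2=//p" "$1" 2>/dev/null
}

# aif as discussed above: print $1 unless empty or "-", else print $2.
aif() {
    if [ -n "$1" ] && [ "$1" != "-" ]; then
        printf '%s\n' "$1"
    else
        printf '%s\n' "$2"
    fi
}

default_path="/usr/local/etc/myapp"

# Style 1: the verbose if-construct, repeated for every key.
value=$(read_config myapp.conf path)
if [ -z "$value" ] || [ "$value" = "-" ]; then
    value=$default_path
fi

# Style 2: the same fallback with aif, one line per key.
value=$(aif "$(read_config myapp.conf path)" "$default_path")
```

The fallback logic lives in one place (aif), so changing what counts as "invalid" is a one-line edit instead of sixteen.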

However, my comment about not being a "professional programmer" should be restated here. I would love to hear a real professional's opinion on this, because I feel it can be a significant time/effort saver versus the "if construct" when that construct is duplicated 16 times throughout the program (change once for multiple locations, in the "self-documenting code" style). I think you may have addressed this point in your post, and I could be very much out of my depth, so please, if you have the time or an opinion on my ramblings...

And maybe AlfredoLlaquet already hinted at the "proper professional opinion" being that a human's code will always be better than an AI's, and I'm just beating a dead horse. Nevertheless, I'll shut up.

However, I did notice how you and the AI made my function POSIX by removing "local". I take this to mean 'local' is more "bash-ish". Noted, nice tip! Thanks.

At one time I did have access to some professional (or at least "astonishingly gifted") people and I'm really starting to appreciate, more and more, how I'd get directions like: "Do this in one pass. Keep track of two variables as you iterate" because I'm not sure AI is doing that (in my tests with it).
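
For the curious, "do this in one pass, keep track of two variables" sounds like the classic single-traversal pattern, e.g. finding both the min and the max in one loop. A generic sh sketch (my own illustration, not from those mentors):

```shell
# One pass over the arguments, tracking two variables (min and max),
# instead of scanning the list twice.
minmax() {
    min=$1
    max=$1
    for n in "$@"; do
        if [ "$n" -lt "$min" ]; then min=$n; fi
        if [ "$n" -gt "$max" ]; then max=$n; fi
    done
    printf '%s %s\n' "$min" "$max"
}

minmax 7 3 9 1 5   # prints: 1 9
```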
 
I've changed my mind. I will not make any more tests.

Agree with others... THIS IS A GOOD THREAD ! :cool:

I mean there are (entire companies) sacking zillions of employees by the truckload right now -- over what you are posting !

The interesting part about what you are posting is the evolution of AI. So (today) we get result X and (tomorrow) we get result Y. The ultimate goal is that "the User" gets a working script that they can support and are proud to show off.

The fix AI currently needs (in my opinion) is that the (same AI) should deliver the (same result) every time it is asked the (exact same question). And at the moment, I don't think AI is there yet.
 
I tried it with ChatGPT a few times. Not seriously but to see what it could do. I wasn't impressed, disappointed actually.

However management at $JOB is pushing AI to virtually all staff. Not just any AI but the tools that have been sanctioned by our parent company. So I am on AI training, and there's a lot of it. Frankly, AI could certainly replace the poor performers with ease and I expect it to at some point.
 
I keep thinking that Harlan D. Mills and Edsger Dijkstra, both of whom fought for more rigor in programming, must be turning in their graves.
 
Hopefully LLMs will be of significant help in producing a manpage
I possibly deserve that, but at the same time I guess, at the end of the day, I shouldn't be ashamed of "trying", right? And FWIW, I know the quality of my code/tools/widgets/things isn't "top notch", but the efforts come from me just trying to help as best I can.
 
I possibly deserve that, but at the same time I guess, at the end of the day, I shouldn't be ashamed of "trying", right? And FWIW, I know the quality of my code/tools/widgets/things isn't "top notch", but the efforts come from me just trying to help as best I can.

There must be some misunderstanding here.

I'm just hoping that I get a shortcut to a manpage.
 
I mean there are (entire companies) sacking zillions of employees by the truckload right now -- over what you are posting !
I don't think this is really happening. AI is very bad at customer service, for instance, because LLMs are novelists, not technicians; they still invent a lot of what they write, even today.

I think companies are using AI as a scapegoat. In the real world, it's still not being used as much as some boast by a long shot.
 
There must be some misunderstanding here.

I'm just hoping that I get a shortcut to a manpage.
Oh?! I thought your post was saying my tool(s) should be lumped in with (or below that of) AI code (from a discussion in another thread about an AI manpage). I'm sorry. You are right. Misunderstanding.

 
Oh?! I thought your post was saying my tool(s) should be lumped in with (or below that of) AI code (from a discussion in another thread about an AI manpage). I'm sorry. You are right. Misunderstanding.

That's a very good tool because markdown is very easy and having it automatically converted to mdoc is perfect.

The choice of markdown is the best possible.

When is this tool going into the ports or the base system or somewhere where FreeBSD programmers can benefit from it?
 
Do I recommend "vibecoding"? Only for personal projects to be executed at home and not shared with the public. Why? Because I believe that a human should write or revise any code that's put "out there" for anyone to use. And if you are gonna revise it, you might as well write it (or you are gonna get really bored just reading code).
My "vibe" coding is looking at notes and seeing if I get a vibe for anything I can improve :cool:

It works pretty well! I might catch inconsistencies between website confs, or see a new way of doing something on older notes that I did on newer ones.
 