How is your prompt engineering?

Linus Torvalds doing vibe coding (not on the Linux kernel):

"Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer."

%%

Elsewhere, Linus states that forbidding LLM contributions to the kernel would be useless, since the AI actors would simply not disclose their LLM use.

%%

And unless you are scared, you can now give Ollama a literal shell to execute programs on your computer:
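For anyone curious how that gets wired up: a minimal sketch, assuming the ollama Python package's tool-calling support. The model name and the `run_shell` helper are my own choices, not from the post, and the obvious caveat applies -- the model gets to run arbitrary commands on your machine.

```python
import subprocess

def run_shell(command: str) -> str:
    """Run a shell command and return its combined output -- this is the scary part."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=30)
    return result.stdout + result.stderr

def ask_with_shell(question: str) -> None:
    """Ask a local model a question, executing any shell commands it requests."""
    import ollama  # requires a running Ollama server
    response = ollama.chat(
        model='qwen2.5-coder:3b',  # model choice is mine, not from the post
        messages=[{'role': 'user', 'content': question}],
        tools=[run_shell],  # the library builds a tool schema from the function signature
    )
    # If the model decided to use the tool, actually run what it asked for.
    for call in (response.message.tool_calls or []):
        print(run_shell(**call.function.arguments))
```

Whether you sandbox `run_shell` (or pass it at all) is the entire safety story here.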

%%

On the other side of things, there is work being done to use various LLMs to resolve merge conflicts. Now that, I am interested in. It uses a database of past merges, though, so it only works on trees it has been trained on; in this case, the Linux kernel.
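The post doesn't name the tool, but any such resolver first has to extract the conflicting hunks before handing them to a model. A generic sketch of that first step (`parse_conflicts` is a hypothetical helper of mine, not the actual project's code):

```python
def parse_conflicts(text: str) -> list[tuple[str, str]]:
    """Split a file containing git conflict markers into (ours, theirs) hunk pairs."""
    conflicts = []
    ours, theirs = [], []
    state = None  # None, 'ours', or 'theirs'
    for line in text.splitlines():
        if line.startswith('<<<<<<<'):
            state, ours, theirs = 'ours', [], []
        elif line.startswith('=======') and state == 'ours':
            state = 'theirs'
        elif line.startswith('>>>>>>>') and state == 'theirs':
            conflicts.append(('\n'.join(ours), '\n'.join(theirs)))
            state = None
        elif state == 'ours':
            ours.append(line)
        elif state == 'theirs':
            theirs.append(line)
    return conflicts
```

Each (ours, theirs) pair would then go into a prompt, possibly alongside similar past merges retrieved from the training database the post mentions.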
 
Python program that asks Ollama for the capital of France:

Code:
import ollama

# Generate a one-off response from the model
response = ollama.generate(
    model='deepseek-r1:1.5b',
    prompt='Why is the sky blue?'
)
print(response['response'])

# Or, use the chat function to maintain a conversation
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'What is the capital of France?'},
]

chat_response = ollama.chat(
    model='deepseek-r1:1.5b',
    messages=messages
)
print(chat_response['message']['content'])

Note: for coding, "qwen2.5-coder:3b" is a very good model.
 
Wow, coding is making less and less sense. If THIS is what it takes to just ask an AI chatbot a question, then yeah, I can kind of see what 'Vibe' coding is. Honestly, I do have more respect for embedded systems programming and dereferencing pointers in C/C++... and git. 😩
 
truly, the future of software is asking, in increasingly desperate english, for a statistical output generator to squeeze out, from the drippings of github, something that works.

in case it is unclear, the previous sentence is deeply, deeply, sarcastic.
 
If Linus can do it just by 'vibe coding', i.e. typing a few English questions and copy-pasting the code in, then why do we need to pay Linus all that money? I know a 12-year-old who can do the same 'vibe coding'... in Vijayawada...
 
The trick is not to dereference pointers to void or the sky, and to always free what you allocate, exactly once.
 
Are you getting useful things out of LLMs (AI chatbots)?

I use ChatGPT extensively to help with my literary failures, with great success, so much so that I'm a paying customer (Plus tier, $20/month).

I use projects with specific instructions (a general prompt that is always applied in that project).

The project I use the most serves as a literary Swiss Army knife. In it, I have defined commands to which ChatGPT responds. Examples:

L: <word> = list synonyms for <word>

A: <word> = list antonyms for <word>

R: <word> = explain register of <word>

X: <word> = determine whether <word> is an idiom or a usual expression in American English, or suggest a correction

Q: <explanation> = suggest words or common expressions that match <explanation>

Z: <text> = check grammar for text given my rules about what types of issues I want to be flagged - list individual "errors" and suggested corrections

T: <text> = translate Spanish <text> into English

TS: <text> = translate English <text> into Spanish

I: <text> = check the naturalness of <text> in American English given my specific rules about what I want to be flagged - list individual "errors" and suggested corrections
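In a ChatGPT project the dispatching is done by the model itself, via the project instructions, but the same shortcut scheme can be sketched locally. A hypothetical expansion table for a few of the commands above (the templates are my guesses at the underlying prompts, not the poster's actual instructions):

```python
# Hypothetical local version of the command shortcuts: each short command
# expands to the full instruction it stands for before being sent to a model.
TEMPLATES = {
    'L': 'List synonyms for: {arg}',
    'A': 'List antonyms for: {arg}',
    'R': 'Explain the register of: {arg}',
    'T': 'Translate this Spanish text into English: {arg}',
    'TS': 'Translate this English text into Spanish: {arg}',
}

def expand_command(line: str) -> str:
    """Turn a shortcut like 'L: happy' into the full prompt it represents."""
    command, _, arg = line.partition(':')
    command, arg = command.strip(), arg.strip()
    if command not in TEMPLATES:
        raise ValueError(f'Unknown command: {command}')
    return TEMPLATES[command].format(arg=arg)
```

The appeal of the scheme is that the expansion lives in one place (the project instructions), so the chat itself stays terse.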

In this project, I open a new chat and use it as a "shell" until it becomes a little slower than usual, and then I open another one.

I have another project that creates a "dictionary" entry of any word to my specifications. I save these entries and consult them later. If I don't have an entry for a word, I create it. Example:


Portmanteau

PART OF SPEECH
noun

AMERICAN PRONUNCIATION
\ pȯrt-ˈman-(ˌ)tō \

COMMON MEANINGS
  • 1. A (now often historical) suitcase or traveling case, especially one that opens into two matching sections.
  • 2. A word (or word part) formed by blending the sound/spelling and meaning of two (or more) other words (for example, smog from smoke + fog).
Figurative use: the linguistic sense is inherently figurative in that it describes a single form that “carries” multiple sources; the luggage sense is literal.

COMMON SPANISH TRANSLATIONS
  • Luggage sense: maleta, maletín, baúl de viaje (context-dependent).
  • Linguistics sense: palabra maleta, palabra-fusión, cruce léxico (in more technical contexts), or the loan portmanteau (common in specialized writing).

REGISTER
  • Luggage sense: archaic or historical in contemporary general American usage.
  • Linguistics sense: technical (linguistics, rhetoric, editing), but also common in educated general prose.

PREPOSITIONS
Notable pattern: portmanteau of + source words/ideas.
Less common: portmanteau for + intended function/label, portmanteau between + two entities.

COLLOCATIONS
  • portmanteau word
  • a portmanteau of X and Y
  • classic portmanteau
  • political portmanteau
  • brand-name portmanteau
  • formed by blending
  • coined from (two sources)

NUANCES
  • A portmanteau (in the linguistic sense) is typically a blend, not a straightforward compound: it usually merges overlapping sounds or letters rather than simply concatenating whole words.
  • In casual usage, portmanteau is sometimes applied loosely to any two-part combination (including some compounds), but in technical usage it more narrowly denotes a blend.
  • Many portmanteaus compress meaning for brevity, humor, branding, or political framing, often creating a term whose connotations differ from either source word alone.

EXAMPLES
Meaning 1: luggage
  • The traveler packed a portmanteau with shirts and papers for the overnight journey.
  • In the museum display, a leather portmanteau opened into two matching compartments.

Meaning 2: blended word
  • Brunch is a portmanteau that merges breakfast and lunch.
  • The headline used a portmanteau to compress a long policy dispute into a single, memorable label.
  • The coinage smog functions as a portmanteau, combining both form and meaning from its sources.

Prepositions
  • Portmanteau of: “Brunch is a portmanteau of breakfast and lunch.”
  • Portmanteau for: “The columnist coined a portmanteau for the new trend.”
  • Portmanteau between: “The nickname became a portmanteau between two rival factions’ slogans.”

Collocations
  • Portmanteau word: “The glossary defines each portmanteau word used in the campaign.”
  • A portmanteau of: “The label was a portmanteau of two competing proposals.”
  • Classic portmanteau: “Editors often cite brunch as a classic portmanteau.”
  • Political portmanteau: “The debate produced a political portmanteau meant to ridicule the opposition.”
  • Brand-name portmanteau: “The startup chose a brand-name portmanteau to signal two services at once.”
  • Formed by blending: “The neologism was formed by blending the first syllables of both terms.”
  • Coined from: “The term was coined from a pair of technical expressions to simplify instruction.”

SYNONYMS
Linguistics sense (register noted)
  • blend, word blend (neutral/technical)
  • blend word (technical)
  • telescoped word (technical, less common)
  • Frankenword (informal, playful)
Luggage sense (register noted)
  • suitcase (neutral)
  • traveling bag, traveling case (neutral)

ANTONYMS
(No single fixed antonym is standard; the following are conceptual contrasts.)
  • simple word (neutral)
  • unblended form (technical)
  • monomorphemic word (technical)

DERIVATIVES
No widely standardized derivatives beyond regular adjectival (attributive) use of portmanteau (for example, portmanteau term, portmanteau label) and the plural forms.

SHARED ROOTS
From French elements meaning to carry + cloak/coat, ultimately related to Latin portare (“carry”) and to mantle-family forms.
Examples of shared-root relatives in English include portable, transport, import, export (carry-related family), and mantle, mantel (cloak-related family).

COUNT OR NO-COUNT
Count: typically used as a count noun (for example, “a portmanteau,” “several portmanteaus”).

PLURAL OR SINGULAR
Singular: portmanteau.
Plural: portmanteaus or portmanteaux; pronunciation for the plural commonly appears as \ pȯrt-ˈman-(ˌ)tōz \.
 
Are you getting useful things out of LLMs (AI chatbots)?
Mixed results, but I like it. I opened an openrouter.ai account, so I can try and switch between many models. As a pair-programming tool I'm mainly using Aider. It supports many AI models, and it works with any editor, because it simply watches the files.

Doing so, I can play with AI without becoming too committed to, or dependent on, it and a fixed AI workflow. I'm using it as an additional tool.

I don't want to become an expert AI user, because the technology is still a little immature and dangerous, but I want to be prepared, because it is progressing.
 
Not doing any engineering. I only ask questions that can have a true or made-up answer. Should I try AI development aid now? I ran an AI-generated QBasic Tetris some months ago. It didn't work, even with plagiarized code. I personally concluded that AI, as in prompt services, isn't capable of constructive development of anything.
 
[training a code model recently]
But as inexperienced coders started turning up in greater numbers, it also started to poison the training data. AI coding assistants that found ways to get their code accepted by users kept doing more of that, even if “that” meant turning off safety checks and generating plausible but useless data. As long as a suggestion was taken on board, it was viewed as good, and downstream pain would be unlikely to be traced back to the source.

Human stupidity making AI dumber.... I love it :cool:
 