How is your prompt engineering?

For example, asking the model for the capital of France from Python via Ollama:

cat phi3.py

Code:
import ollama

# Create a client to interact with the Ollama server
client = ollama.Client()

# This is a simple chat-style interaction
# The 'chat' method is recommended for most uses
response = client.chat(
    model='phi3',
    messages=[
        {'role': 'user', 'content': 'What is the capital of France?'}
    ]
)

# Print the model's response
print(response['message']['content'])

# You can also use the 'generate' method for a single-turn completion
response = client.generate(
    model='phi3',
    prompt='What is the capital of Germany?'
)

# The generate response carries its text in a 'response' field
# rather than a chat-style 'message'
print(response['response'])
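The chat call above is where prompt engineering starts: adding a system message alongside the user message constrains the shape of the answer. Here is a minimal sketch of that pattern; build_messages and ask are illustrative helpers, not part of the ollama library:

```python
# Sketch: steering phi3 with a system prompt.
# build_messages and ask are hypothetical helpers for illustration.

def build_messages(system: str, user: str) -> list[dict]:
    """Pair a system instruction with the user's question."""
    return [
        {'role': 'system', 'content': system},
        {'role': 'user', 'content': user},
    ]

def ask(client, user_prompt: str) -> str:
    """Send a constrained prompt to phi3 via a running Ollama server."""
    response = client.chat(
        model='phi3',
        messages=build_messages(
            'Answer with the city name only, no extra prose.',
            user_prompt,
        ),
    )
    return response['message']['content']

# Usage (needs `pip install ollama` and a local Ollama server):
#   import ollama
#   print(ask(ollama.Client(), 'What is the capital of France?'))
```

Keeping the instruction in the system role rather than folding it into the user prompt makes it easy to reuse the same constraint across many questions.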