Prompt Engineering 201: 3 Techniques to Get Better Answers (Few-Shot, Personas, Constraints)


By Sapumal Herath · Owner & Blogger, AI Buzz · Last updated: March 3, 2026 · Difficulty: Intermediate

Most people use AI like a search engine: they ask a question and hope for the best.

But AI isn’t a search engine. It’s a reasoning engine. And the quality of its answer depends heavily on the quality of your instructions.

If you’ve mastered the basics (“Write me an email”), it’s time to level up. This guide covers three intermediate techniques that separate average users from power users: Few-Shot Prompting, Persona Prompting, and Constraint Prompting.

1) Few-Shot Prompting (The Most Powerful Trick)

The single best way to improve an AI’s output is to show, don’t just tell.

Zero-Shot (Basic): You give no examples.
“Classify this tweet as happy or sad.”

Few-Shot (Pro): You give examples of what you want.
“Classify the sentiment of these tweets:
Tweet: ‘I loved the service!’ -> Sentiment: Positive
Tweet: ‘The wait was too long.’ -> Sentiment: Negative
Tweet: ‘The food was okay, but expensive.’ -> Sentiment: Neutral
Tweet: ‘My package never arrived.’ -> Sentiment: ?”

Why it works: The model infers the pattern from your examples and imitates it far more reliably than it would from a description alone. Use this for formatting data, writing in a specific style, or classifying text.
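If you’re sending prompts from code, you can assemble the few-shot pattern programmatically instead of retyping it. Here’s a minimal sketch using the tweets above; the `build_few_shot_prompt` helper is illustrative, not any library’s API:

```python
# Labeled examples that establish the pattern (text, label).
EXAMPLES = [
    ("I loved the service!", "Positive"),
    ("The wait was too long.", "Negative"),
    ("The food was okay, but expensive.", "Neutral"),
]

def build_few_shot_prompt(new_tweet: str) -> str:
    """Show the labeled pattern, then ask the model to complete one more."""
    lines = ["Classify the sentiment of these tweets:"]
    for text, label in EXAMPLES:
        lines.append(f"Tweet: '{text}' -> Sentiment: {label}")
    # Leave the final label blank for the model to fill in.
    lines.append(f"Tweet: '{new_tweet}' -> Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("My package never arrived."))
```

The payoff: adding or removing an example is a one-line change, and every prompt you send follows exactly the same pattern.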

2) Persona Prompting (Setting the Context)

AI models are trained on the entire internet. If you don’t tell them who to be, they default to “generic helpful assistant.”

The Fix: Assign a role at the start of your prompt.

Standard:
“Write a blog post intro about coffee.”
(Result: Generic, boring, Wikipedia-style text.)

Persona Prompt:
“You are a world-champion barista with 20 years of experience. You are passionate, slightly snobby about beans, but welcoming to beginners. Write a blog post intro about coffee.”
(Result: Rich, voice-driven, authoritative text.)

Why it works: It narrows the model’s focus to a specific subset of its training data (e.g., “expert medical knowledge” or “senior developer code”).

3) Constraint Prompting (Limiting the Output)

AI loves to ramble. It loves to use flowery words like “delve” and “tapestry.” You need to tell it what NOT to do.

Common Constraints:

  • Length: “Keep it under 50 words.”
  • Format: “Output your answer in a Markdown table only. Do not write any intro text.”
  • Style: “Do not use jargon. Use simple English (Grade 5 level).”
  • Exclusion: “Do not mention Competitor X or Competitor Y.”

Why it works: Constraints force the model to be disciplined. They are essential for generating code, data, or clean summaries.
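In code, constraints work best as an explicit rules list appended to the task, so you can reuse the same rules across prompts. A quick sketch, with an illustrative `constrain` helper and a made-up task:

```python
def constrain(task: str, constraints: list[str]) -> str:
    """Append an explicit, bulleted rules section to a task prompt."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{task}\n\nFollow these rules strictly:\n{rules}"

prompt = constrain(
    "Summarize our Q3 results for the team.",
    [
        "Keep it under 50 words.",
        "Output your answer in a Markdown table only. Do not write any intro text.",
        "Do not use jargon. Use simple English (Grade 5 level).",
    ],
)
print(prompt)
```

Keeping constraints in a list also makes them easy to version and A/B test separately from the task itself.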

🚀 The “Mega-Prompt” Structure

Combine all three techniques into one powerful prompt template:

[Persona]
You are a Senior Product Manager.

[Context/Task]
We are launching a new feature. Write a product announcement email.

[Constraints]
Keep it under 150 words. Be exciting but professional. Do not use buzzwords.

[Few-Shot Examples]
Here is an example of our brand voice from a past email: [Insert Example].

[Input]
Draft the email for our new “Dark Mode” feature.
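If you use this structure often, turn it into a reusable template. Here’s one way to sketch it in Python, with the section labels from above; the placeholder text (including “[Insert Example]”) is illustrative:

```python
# A reusable mega-prompt template: persona + task + constraints + few-shot.
MEGA_PROMPT = """\
[Persona]
{persona}

[Context/Task]
{task}

[Constraints]
{constraints}

[Few-Shot Examples]
Here is an example of our brand voice from a past email: {example}

[Input]
{user_input}"""

prompt = MEGA_PROMPT.format(
    persona="You are a Senior Product Manager.",
    task="We are launching a new feature. Write a product announcement email.",
    constraints="Keep it under 150 words. Be exciting but professional. Do not use buzzwords.",
    example="[Insert Example]",
    user_input='Draft the email for our new "Dark Mode" feature.',
)
print(prompt)
```

Swap out the four fields per request and the overall structure stays consistent, which makes outputs easier to compare.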

🧪 Mini-Lab: Rewrite a Bad Prompt

Bad Prompt: “Help me fix this code.”

Better Prompt (Using 201 Techniques):

“You are a Senior Python Engineer [Persona]. Review this code for bugs and efficiency [Task]. Output your answer in two parts: 1) The fixed code block, 2) A bulleted list of what you changed [Constraints]. Do not explain basic concepts, just fix the logic [Constraint].”


🏁 Conclusion

You don’t need to be a prompt engineer to get great results. You just need structure.

Next time you open ChatGPT, don’t just ask. Assign a role, give an example, and set a limit. The difference will be immediate.
