👋 Hey there,
Last week, we read Google's Prompt Engineering whitepaper, so you don't have to. If you've ever yelled at ChatGPT for being "mid," this one's for you.
Most people are getting bad AI results not because the model sucks… but because the prompt does.
We're breaking down the most useful stuff from the 60+ page paper into bite-sized, actionable takeaways (with a side of sass). But more importantly, I'll show you how to apply them, with real use-cases and prompts you can literally copy-paste.
And yes, we'll also show you how to skip the tinkering and let Prompt Genie do the thinking for you.
Let's break it down 👇
1️⃣ First, understand what a prompt *actually* is
You're not just "asking ChatGPT a question."
You're giving it instructions. You're feeding it context. You're shaping its identity. You're *programming* a model using language.
And the way you do that changes everything. Here's how:
❌ "Write about retro games"
✅ "Act as a retro gaming blogger. Write a 3-paragraph post about how arcade cabinet designs evolved in the 1980s, citing 2 iconic examples."
🧩 Framework:
Role + Task + Structure + Constraints = Gold
Both prompts are about retro games, but notice how different they sound. That's the power of the prompt: the way you frame your question changes the kind of answer the AI gives you, whether that's more detail, more personality, or just a quick summary. It all depends on what you ask.
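If you like to keep things systematic, the Role + Task + Structure + Constraints formula can even live in a tiny reusable template. This is just our own illustration (the function and field names are made up, not from the whitepaper):

```python
# A minimal sketch of the Role + Task + Structure + Constraints formula
# as a reusable prompt template. Field names are our own illustration.

def build_prompt(role: str, task: str, structure: str, constraints: str) -> str:
    """Assemble the four ingredients into a single prompt string."""
    return (
        f"Act as {role}. "
        f"{task} "
        f"Format: {structure}. "
        f"Constraints: {constraints}."
    )

prompt = build_prompt(
    role="a retro gaming blogger",
    task="Write a post about how arcade cabinet designs evolved in the 1980s.",
    structure="3 paragraphs",
    constraints="cite 2 iconic examples",
)
print(prompt)
```

Swap the four slots and you get a consistent, high-context prompt for any task instead of a one-line ask.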

2️⃣ LLM Settings (aka how you control the vibe)
Large Language Models (LLMs) like ChatGPT don't just spit out random words; they generate outputs one word (or token) at a time based on probabilities. But here's the fun part: you can tweak *how* they do that.
Here are the big 3:
**Temperature** → Controls creativity vs. precision.
- Low temp (e.g., 0.1) = logical, predictable output
- High temp (e.g., 0.9) = imaginative, unpredictable output

**Top-K** → Limits the number of "next word" options.
- Top-K = 5? It chooses from the top 5 likely next words only.
- Higher K = more diverse outputs; lower = tighter focus.

**Top-P** → Looks at the smallest set of words that make up P% of total probability.
- Top-P = 0.9? The model will only consider words that collectively have a 90% chance of being used next.
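Under the hood, all three settings are just filters applied to the model's next-word probabilities before it picks one. Here's a toy sketch of that pipeline; the five-word "vocabulary" and its scores are invented for illustration (real models do this over vocabularies of ~100k tokens):

```python
import math
import random

# Toy next-word sampler showing how temperature, top-k, and top-p
# reshape the probability distribution before a word is chosen.
# The vocabulary and logit scores below are made up for illustration.

def sample_next(logits, temperature=1.0, top_k=None, top_p=None, seed=None):
    # 1. Temperature: divide logits before softmax. Low temp sharpens the
    #    distribution (predictable); high temp flattens it (creative).
    scaled = {w: l / temperature for w, l in logits.items()}
    exps = {w: math.exp(l) for w, l in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}

    # 2. Top-K: keep only the K most likely words.
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    if top_k is not None:
        ranked = ranked[:top_k]

    # 3. Top-P (nucleus): keep the smallest set whose probabilities sum to P.
    if top_p is not None:
        kept, cum = [], 0.0
        for w, p in ranked:
            kept.append((w, p))
            cum += p
            if cum >= top_p:
                break
        ranked = kept

    # Renormalize the survivors and sample one word.
    rng = random.Random(seed)
    words, weights = zip(*ranked)
    return rng.choices(words, weights=weights, k=1)[0]

logits = {"Tracker": 3.0, "Pro": 2.5, "Nest": 1.0, "Stash": 0.5, "Llama": -2.0}
# Low temperature + small K: almost always picks the safest word.
print(sample_next(logits, temperature=0.2, top_k=2, seed=0))
```

Crank `temperature` up and widen `top_k` or `top_p`, and the long-tail words like "Llama" start surviving the filters, which is exactly the "more flair" effect in the brand-name example below.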
Think of it like this:
| Setting | Controls | You'd Use It When You Want |
|---|---|---|
| Temperature | How wild or safe the output is | More creativity or more logic |
| Top-K | Number of choices to consider | More or less variety in style or wording |
| Top-P | How confident the AI should be in its picks | Slightly wilder or safer depending on range |
🧠 Here's a real example:
Prompt: "Give me three brand names for a personal finance app."
| Setting | Possible Output |
|---|---|
| Temp 0.2, Top-K 5 | "Money Manager, Budget Pro, Finance Tracker" |
| Temp 0.8, Top-K 50 | "WealthNest, CoinSage, BudgetBuddy" |
| Temp 0.8, Top-P 0.95 | "NestEgg Now, SavvyStash, SpendSmart" |

They're all good, but as you move from lower to higher K or P, you start seeing names with more flair, creativity, or surprise.
3️⃣ Easy Prompting Techniques That Actually Work
🛠️ The most powerful formats:
| Technique | Example Prompt | Why It Works |
|---|---|---|
| Zero-shot | "Write a one-line caption for this photo." | Good for simple tasks but often too generic and lacks context. |
| One-shot / Few-shot | "Write a tweet like this: 'Mondays are for deep work and deeper coffee.'" | Shows the model a structure or style to follow, improving consistency. |
| Chain of Thought (CoT) | "I'm trying to calculate my budget. Let's break it down step by step." | Helps with logical tasks by encouraging the model to reason through each step. |
| Step-back prompting | "Before writing the email, who is the audience and what do they care about?" | Adds useful context by zooming out before zooming in. |
| Role prompting | "You are a career coach. Help me write a confident LinkedIn summary." | Frames tone and expertise for more relevant, tailored output. |
| System prompting | "Summarize the text and return only the top 3 points in bullet form." | Defines how the model should behave or return results. |
| Contextual prompting | "Context: I'm writing a blog post for Gen Z freelancers. Suggest catchy titles." | Provides task-specific info that guides the model's output effectively. |
| Self-consistency | Ask the same prompt multiple times: "Is this email spam or not? Explain why." | Improves reliability by comparing multiple reasoning paths and choosing the best. |
| Tree of Thoughts (ToT) | "Give me 3 ways to explain blockchain to a 10-year-old, then pick the best one." | Encourages deeper reasoning by exploring several paths before settling on one. |
| ReAct (Reason & Act) | "Use tools to find the weather in Paris this weekend, then suggest what to pack." | Combines reasoning and action for multi-step, research-based tasks. |
| Automatic Prompt Engineering | "Write 10 different ways someone might say: 'I need help resetting my password.'" | Uses the model to generate effective prompt variations and optimize instructions. |
✅ Try this: Add "Let's think step by step" to your next complex request and compare the results.
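Self-consistency from the table above is easy to wire up yourself: fire the same prompt several times at a non-zero temperature, then keep the majority answer. Here's a minimal sketch; `ask_model` is a made-up stand-in for whatever real API you call, and its fake answers are ours for illustration:

```python
import random
from collections import Counter

# A sketch of self-consistency: send the same prompt several times,
# then keep the answer the model gives most often (majority vote).

def ask_model(prompt: str, rng: random.Random) -> str:
    # Fake model for illustration: usually answers "spam",
    # occasionally wanders off. Replace with a real API call.
    return rng.choices(["spam", "not spam"], weights=[0.8, 0.2], k=1)[0]

def self_consistent_answer(prompt: str, runs: int = 5, seed: int = 42) -> str:
    rng = random.Random(seed)
    answers = [ask_model(prompt, rng) for _ in range(runs)]
    # Majority vote across the independent reasoning paths.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("Is this email spam or not? Explain why."))
```

One noisy answer can be wrong; five answers voted down to one are much harder to fool.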
🎯 Alternatively, just use our tool Prompt Genie to create these super prompts for any AI task.
ICYMI
AI Upskilling
Microsoft just released a free course on How to Create AI Agents for beginners.
AI Roundup
Verizon Sees Sales Boost with Google AI Assistant
Verizon saw a sales lift after fully deploying a Google AI assistant in Jan 2025. Powered by Google's Gemini LLM, the tool helps customer service reps respond faster and more effectively by tapping into a database of 15,000 internal documents.
ChatGPT Gets a Major Memory Upgrade
OpenAI upgraded ChatGPT with memory, allowing it to recall past interactions for more personalized help across writing, learning, and advice.
Did you learn something new?
We'd Love Your Feedback
Got 30 seconds? Tell us what you liked (or didn't).
Until next time,
Team DigitalSamaritan
