Dear AI friends,
Does your language model sometimes feel like a talking parrot? You’re hoping for a deep, nuanced exchange, but instead, the model replies with cheerful, surface-level responses. It’s articulate, but not quite insightful.
There’s a reason for that. Language models, by design, don’t “understand” your intent. They mechanically predict the next likely word, which earned them a reputation as stochastic parrots in research circles (cf. On the Dangers of Stochastic Parrots). However, with skillful prompting, you can steer the behavior of the model, unlock its deeper cognitive capabilities, and create an amazing synergy between yourself and the machine.
In an organizational context, prompting isn’t just about getting better AI outputs. It’s also a powerful discovery mechanism. Every prompt that solves a recurring problem—be it drafting a brief, summarizing action items, or synthesizing customer insights—can be the seed of a scalable AI use case. The prompts your team uses today are often proofs-of-concept for the workflows, AI assistants, and automation pipelines of tomorrow.
In the following, I’ll dissect effective prompting. If you aren’t a techie, the best way to become a great prompter is by drawing parallels to how you think and communicate in the real world:
Just like great communication, great prompts rely on clarity, specificity, and context.
One thing makes your job easier: with LLMs, you can skip the empathy. They don’t need to feel understood. Just provide the instructions and the necessary information.
Many of the cognitive strategies we use intuitively—like breaking problems into parts, reflecting before acting, or reasoning by analogy—are also powerful patterns for prompt design.
In the following, we will first decompose a prompt into its components. Next, you’ll learn the core prompting techniques that power everything from quick insights to complex reasoning. Finally, I’ll share actionable best practices to help you and your organization build prompting into a repeatable, scalable capability.
As you read, I encourage you to open up ChatGPT or another LLM of your choice (cf. our list of popular LLMs for frontend prompting) and play with the examples on your own. At the end of the article, you’ll find concrete next steps to grow your prompting skills “on-the-job.” If you’d like a personalized introduction and a repeatable prompting setup for your team, explore our workshops.
The anatomy of a prompt
Good prompts follow a structure and have a set of repeatable components. Once you internalize these, your prompting will be effortless and efficient. Let’s take a look at the common components of a prompt, as in the following example:
Context gives the model background knowledge and frames the role it should adopt. It can include prior conversation history, relevant regulations, desired style, or domain-specific goals.
Implementation tip: Ground your prompt in your “enterprise truth.” Attach additional files and documents, like meeting notes, presentations, etc., where the AI should source its information. This helps you tailor the output and prevent hallucinations.
Instruction: This is the command—what you want the AI to do. It should be unambiguous, goal-oriented, and as specific as possible. Any gaps you leave will be filled in with assumptions by the model, leading to vague, off-target, or overly creative responses.
Input variables: For prompts that will be reused, input variables let you swap in different scenarios, increasing efficiency.
Examples: When the task is nuanced—like matching tone, structure, or logic—give examples. Few-shot prompting is one of the fastest ways to teach the model how you and your team work.
Output format: Don’t assume the AI knows what format you want. Be explicit. Whether it’s a JSON object, bullet list, or structured article, tell it what to return.
Constraints: Often, it’s not about what to include, but what to leave out. Constraints help shape content, tone, length, etc.
Not all of these components are needed for every prompt. As a rule of thumb, start with the instruction and add more information and complexity to improve the outputs. Once you’ve found a prompt that works, don’t treat it as a one-off; instead, turn it into a template. You can also store individual components like the context, variables, examples, and constraints. Transform prompting from an ad hoc activity into a structured practice that evolves over time and contributes to your organization’s AI capital.
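To make the template idea concrete, here is a minimal sketch in Python of how you might assemble the components above into a reusable prompt. The helper name and wording are hypothetical, not a standard API; only the instruction is mandatory, mirroring the rule of thumb of starting simple and adding complexity as needed.

```python
def build_prompt(instruction, context=None, examples=None,
                 output_format=None, constraints=None, **variables):
    """Assemble a prompt from the reusable components described above.

    Only the instruction is required; all other components are optional.
    Input variables are substituted into the instruction via str.format.
    """
    parts = []
    if context:
        parts.append(f"Context:\n{context}")
    parts.append(f"Instruction:\n{instruction.format(**variables)}")
    if examples:
        parts.append("Examples:\n" + "\n\n".join(examples))
    if output_format:
        parts.append(f"Output format:\n{output_format}")
    if constraints:
        parts.append(f"Constraints:\n{constraints}")
    return "\n\n".join(parts)

# Reuse the same template with different input variables:
prompt = build_prompt(
    "Summarize the attached {doc_type} in three key insights for {audience}.",
    context="You are an analyst at a B2B software company.",
    output_format="Bullet list, max 20 words per bullet.",
    doc_type="user research report",
    audience="the head of product",
)
```

Storing components separately like this makes it trivial to swap in new scenarios while keeping the proven structure intact.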
Prompting techniques
The most effective prompting techniques are grounded in how people naturally think and solve problems. That makes them not only powerful, but also easy to learn. Let’s explore four foundational techniques that map to some of our core cognitive capabilities. We will proceed in order of growing complexity:
Zero-shot prompting - A simple entry point
Zero-shot prompting is the most straightforward and intuitive way to interact with a language model. You give a direct instruction, like “summarize this” or “make it sound more polite”, and the model responds immediately. It’s the fastest way to get started and build familiarity with tools like ChatGPT.
We do this kind of thing naturally as humans all the time: when we shoot off a quick email, ask a colleague to rephrase a slide title, or jot down a decision in bullet points. It requires little setup, minimal context, and often delivers good-enough results for low-complexity tasks.
This ad-hoc approach is great for warming yourself up to the idea of AI, but it has a clear ceiling. It falls short for tasks that demand deeper reasoning, structured output, or contextualization.
Example prompts:
“Summarize this user research report in three key insights for the head of product.”
“Rewrite this Slack message in a more assertive tone.”
“Explain this regulatory update in plain English for a general audience.”
✅ Use it for:
Quick summaries
Tone and language rewrites
Data-to-text
Translation
⚠️ Anti-patterns:
Vague inputs with unclear context (“Can you help with this?”)
No format or audience guidance
Over-reliance for complex or strategic tasks
💡 Implementation tip: Be specific about the role, audience, format, and tone. Small framing tweaks (“in two bullet points,” “for a skeptical CFO”) make a big difference.
Few-shot prompting - Show not tell
Few-shot prompting mimics learning by analogy. It’s the famous “show-not-tell” principle applied to AI: in your prompt, you show the model what you want by giving a few examples. The AI picks up the structure, tone, or logic and applies them to a new instance.
Here is an example prompt for providing exec summaries of verbatim user feedback on a new dashboard product:
Example 1: “We’re concerned the new reporting dashboard increases our team’s workload. It looks nice, but we’re doing more manual reconciliation than before.”
→ Summary: Reporting team raised concerns about increased manual effort due to new dashboard rollout.
Example 2: “Appreciate the automation improvements. Our triage time is down by 20% already.”
→ Summary: Ops team reported 20% faster triage thanks to automation updates.
Example 3: “The dashboard UI looks modern, but we’re still missing key audit fields. We’ve flagged this twice.”
→ Summary: Compliance team noted missing audit fields; issue flagged multiple times.
Example 4: “We’re excited about the tool, but would love to see a roadmap for the next features.”
→ Summary: ?
💡 Implementation tip: Don’t mix apples and bananas—your examples need to follow a consistent structure, tone, and intent. Few-shot prompting only works when the model can recognize and generalize a clear pattern.
✅ Use it for:
Tone- and structure-sensitive writing
Internal workflows (e.g., support replies, changelog entries)
Brand-aligned phrasing or formatting
⚠️ Anti-patterns:
Inconsistent examples (different tone, structure, formatting)
Too many examples, which can dilute the pattern
Using noisy or irrelevant samples
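If you use a few-shot pattern repeatedly, it pays to codify it. Below is a sketch, with hypothetical helper names, that turns example pairs like the ones above into a reusable prompt: consistent (feedback, summary) pairs first, then the new instance with the summary left blank for the model to complete.

```python
# Two of the example pairs from the dashboard scenario above.
EXAMPLES = [
    ("We're concerned the new reporting dashboard increases our team's "
     "workload. It looks nice, but we're doing more manual reconciliation "
     "than before.",
     "Reporting team raised concerns about increased manual effort due to "
     "new dashboard rollout."),
    ("Appreciate the automation improvements. Our triage time is down by "
     "20% already.",
     "Ops team reported 20% faster triage thanks to automation updates."),
]

def few_shot_prompt(new_feedback, examples=EXAMPLES):
    """Build a few-shot prompt: uniformly formatted example pairs,
    followed by the new instance for the model to summarize."""
    shots = "\n\n".join(
        f'Feedback: "{feedback}"\nSummary: {summary}'
        for feedback, summary in examples
    )
    return (
        "Summarize each piece of user feedback in one sentence, "
        "naming the team and the core issue.\n\n"
        f"{shots}\n\n"
        f'Feedback: "{new_feedback}"\nSummary:'
    )
```

Because every example follows the exact same "Feedback/Summary" structure, the model has a clean pattern to generalize from, which is precisely what the implementation tip above demands.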
Chain-of-thought - Eating the elephant one bite at a time
Chain-of-thought prompting explicitly asks the AI to break down its reasoning step-by-step before giving an answer. This reflects how we work through complex issues: instead of jumping to a conclusion, we decompose the problem into substeps and address each of them separately. By taking more time for deliberate thinking, we improve the odds that our final solution succeeds. The same applies to LLMs - but here, time translates into tokens, and the LLM will reason explicitly about every substep. The LLM also shows its work, so you can step in with corrections and refinements.
Example prompt:
“Evaluate whether we should sunset our freemium tier. Think step-by-step. First, list three benefits and three risks of sunsetting the freemium tier. Then, assess how each affects our user base and revenue. Finally, recommend a course of action with rationale.”
💡 Shortcut (use with care): Try skipping the reasoning plan and simply telling the model to "think step-by-step", letting it figure out a structure for itself. E.g., “Let’s evaluate whether we should sunset our freemium tier. Think step-by-step.”
✅ Use it for:
Impact analysis
Strategic decisions with many trade-offs
Critical thinking tasks (e.g., partner evaluation, feature prioritization)
⚠️ Watch out for:
Asking for conclusions too early
Overloading the prompt with many tasks at once
Lack of structure (e.g., missing steps, unclear evaluation criteria)
For advanced prompters - if you want to dive deeper, there are several more sophisticated variations of chain-of-thought prompting, such as:
Tree-of-thought prompting: Instead of a single linear reasoning path, this method explores multiple reasoning branches in parallel, allowing the model to evaluate and choose among alternative solution paths.
Graph-of-thought prompting: Expands on tree-of-thought by enabling more complex, non-hierarchical reasoning structures where nodes represent ideas or steps and edges represent logical connections, fostering richer problem-solving strategies.
Self-consistency prompting: Generates multiple diverse reasoning paths and selects the most common or consistent answer among them, improving accuracy and robustness.
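Self-consistency is the easiest of these variations to sketch in code: sample several independent reasoning paths and keep the most common final answer. The snippet below is a minimal illustration with a stubbed model function; `ask_model` is a placeholder for a real LLM call, which in practice would use a nonzero temperature so the paths actually differ.

```python
import random
from collections import Counter

def self_consistent_answer(ask_model, question, n_samples=5):
    """Self-consistency sketch: sample n_samples independent answers
    and return the majority answer plus its agreement ratio."""
    answers = [ask_model(question) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

# Stubbed, intentionally flaky "model" standing in for a real LLM call:
random.seed(0)
def flaky_model(_question):
    return random.choice(["42", "42", "42", "41"])

answer, agreement = self_consistent_answer(flaky_model, "What is 6 * 7?", n_samples=7)
```

The agreement ratio is a useful byproduct: a low value signals that the model's reasoning paths diverge, which is itself a prompt to add structure or context.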
Reflection - Iterative refinement
Reflection prompting encourages the AI to iteratively critique or improve its own answer. The model becomes its own editor, revisiting logic, tone, or clarity and refining accordingly. This aligns closely with how people work: we write drafts, reflect, and revise.
Example prompt:
“Here’s a first draft of our customer response. Now reflect: is it empathetic, clear, and aligned with our voice? Revise if needed.”
Or:
“List three campaign ideas for our product relaunch. Then reflect on each idea’s creativity and risk. Improve the strongest one.”
Reflection can also be applied sequentially. You prompt for a first output with one of the methods outlined above. In the next conversation turn, you ask the model to improve its answer, optionally also providing more specific feedback.
✅ Use it for:
Refining sensitive messaging
Stress-testing AI outputs
Synthesizing and improving rough ideas
⚠️ Watch out for:
Vague reflection prompts (“Any thoughts?”)
No clear criteria (what should the model reflect on?)
Assuming the first draft is final
💡 Implementation tip: Define the lens. Ask the model to review its work for clarity, empathy, consistency, tone, or alignment with strategy.
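The "define the lens" tip lends itself to a simple loop: critique against explicit criteria, then revise. Here is a hedged sketch of that pattern; `ask_model` and `reflect_and_revise` are hypothetical names standing in for whatever LLM call your stack provides.

```python
def reflect_and_revise(ask_model, draft, criteria, rounds=2):
    """Reflection sketch: the model critiques its own draft against
    explicit criteria (the "lens"), then revises. Repeating the
    critique-revise cycle mirrors sequential reflection across turns."""
    for _ in range(rounds):
        critique = ask_model(
            f"Review this draft for {', '.join(criteria)}. "
            f"List concrete weaknesses.\n\nDraft:\n{draft}"
        )
        draft = ask_model(
            "Revise the draft to address this critique.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

Note that the criteria list makes the lens explicit in every round, avoiding the vague "Any thoughts?" anti-pattern above.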
Each of these prompting techniques taps into a mode of thinking you and your team already use. “Teaching” them to the LLM in your specific context allows you to tap into its broad knowledge while tailoring the outputs to your requirements.
💡 Pro tip: Beyond separate use of these techniques, you can also stack them together. For example, you might start with a few-shot prompt to generate a first draft, apply chain-of-thought to improve the reasoning, then use reflection to polish tone and clarity.
Best practices for prompting
The qualities that turn you into an excellent communicator—clarity, precision, context, and structure—are very similar to those you need to write great prompts. Apply the following proven practices to consistently elevate the quality and strategic value of your AI interactions.
Know your model: In a conversation, you are better off if you know your counterpart. AI is no different. Understand what your model knows and what it doesn’t. Check its cutoff date (until when the training data was collected), reasoning strengths, and common blind spots. Experiment with different models for different tasks.
To facilitate your prompting journey, download our overview of the top-10 popular LLMs for frontend prompting here.
Be clear and specific: Imagine you’re giving instructions to a personal assistant on their very first day. They’re smart, but they don’t yet know your preferences, so spell things out.
Ground the model in your enterprise context: Your model likely doesn’t know much about your company. Attach files and documents that contain the information relevant to the request.
Governance tip: Before including any sensitive or confidential information in a prompt, review your language model provider’s data retention and usage policies. Understand where your data goes, how it's stored, and whether it might be used for future model training.
Specify the output format: By default, models return unstructured text. If you want a certain structure, say so. For example: “List 5 space tourism companies in JSON. Include: name, founded year, HQ, and unique selling point.”
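Even when you request JSON, models sometimes wrap it in markdown fences, so downstream code should parse defensively. The helper below is a small illustrative sketch (the function name and the sample company entry are hypothetical):

```python
import json

FENCE = "`" * 3  # a markdown code fence, built indirectly to keep this block tidy

def parse_llm_json(raw_text):
    """Parse JSON from an LLM response, stripping a markdown fence
    (possibly labeled, e.g. a "json" language tag) if present."""
    text = raw_text.strip()
    if text.startswith(FENCE):
        text = text.split("\n", 1)[1]    # drop the opening fence line
        text = text.rsplit(FENCE, 1)[0]  # drop the closing fence
    return json.loads(text)

# Simulated model output wrapped in a fenced block:
companies = parse_llm_json(
    FENCE + 'json\n[{"name": "ExampleSpace Co", "founded_year": 2015}]\n' + FENCE
)
```

Pairing an explicit format instruction in the prompt with tolerant parsing like this keeps structured workflows from breaking on cosmetic variations in the output.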
Iterate and test: Prompting is experimental. Try variations, compare, and refine. Small tweaks = big gains.
Example:
Start with: “Summarize the meeting.”
Then test: “Summarize for the CTO—focus on unresolved decisions and next steps.”
Embrace your editing function: With AI, you are in the role of a curator and editor. Achieving high quality by simply copy-pasting an AI output is rare. More likely, you will need to do some editing work, like fact-checking, humanizing the tone, or adapting the output to your context.
Pro tip: You might have heard the saying: “Your job will not be replaced by AI, but by someone who can use AI.” Becoming a pro at editing AI outputs is your first step towards the AI-upgraded role.
Systematize prompting: As your prompting matures, create a repeatable process:
Use templates for recurring tasks
Save and version proven prompts
Document what works—and why
Make prompting a team sport by leveraging the collective intelligence of your team and sharing successful prompts and strategies.
Leading with good prompts is like leading with good thinking and clear communication. The best AI results don’t come from clever tricks but from clear framing, smart iteration, and strategic intent.
Next steps
Thanks for reading - now, it’s time to practice:
Spot opportunities: Think about the cognitive or creative tasks you do regularly—briefs, emails, outlines, decision memos.
Reverse-engineer your thinking: Do you break things into steps? Use analogies? Iterate through drafts?
Match that process to a prompting technique—like step-by-step reasoning, few-shot prompting, or reflection.
Write a prompt—or just describe your thought process and ask your LLM to turn it into one.
💡 Implementation tip: Don’t overthink the wording. Focus on intent—let the model polish the language.
Test it across models and tools. Each one behaves a little differently.
Keep the good ones. Save, document, and share prompts that work.
Level up your team: Run a short prompting session or build a shared prompt library.
Prompting isn’t just how you get better AI answers. It’s how you surface hidden use cases, build reusable assets, and shape how your organization works with intelligence. Start small, iterate fast, and treat every good prompt like a prototype for something bigger.
Where to go from here:
If you face specific prompting challenges, let me know in the comments, and let’s sort them out. For an individual deep-dive with your team, check out our prompting workshops!
Bookmark our mental model for LLM Prompting Techniques and use it as a cheatsheet whenever you face a prompting challenge.
Share this with a colleague who should be prompting smarter.
Best wishes,
Janna