Dear AI friends,
Have you ever sat through an AI strategy meeting where everyone seems to speak a different language? Engineers are deep in the latest LLM updates, compliance is throwing up red flags left and right, and leadership wants radical innovation. In the end, nothing gets pushed to production maturity.
After witnessing similar situations more than once, I started developing a structured AI methodology to create the missing big picture and alignment. It is distilled in our new AI Strategy Playbook — a network of mental models to shepherd AI teams through the full lifecycle of discovery, development, and adoption. Some of them are universally applicable to any AI project, while others come in handy when you face specific challenges.
In this episode, I’d like to give you a first taste of this methodology. I’ll share four essential models that have helped us deliver and integrate real-world AI systems. Each one comes with its motivation, implementation guidance, and common anti-patterns:
AI Opportunity Tree – Discover and prioritize AI use cases that align with business value.
AI System Blueprint – Align technical and business stakeholders by mapping the full AI system.
Iterative Development Process – Build AI products that learn from real-world use by launching and improving fast.
Domain Expertise Injection – Embed expert knowledge into your system so it feels like an expert, not an outsider.
Let’s dive in.
🔦 Mental Model #1: The AI Opportunity Tree
Many AI projects start with “let’s use AI” rather than “let’s solve problem XY.” That happens for different reasons—competitive pressure, leadership demand, or excitement about the technology. All of these can be great starting points, but you need to follow up by validating and tweaking the business value of your AI solution. If you skip this step, your solution will be detached from user needs and business outcomes.
The AI Opportunity Tree helps teams connect the dots between compelling technology and real business impact.
How it works
Each branch of the tree represents a core benefit of AI:
Automation and productivity: AI can support or automate routine tasks like fraud detection, customer service, or invoice processing. It frees up human effort and enables new workflows.
Improvement and augmentation: AI can improve outcomes by combining wide-ranging knowledge (e.g., from LLMs) with human context.
Innovation and transformation: In a rapidly changing world, AI can connect disparate insights, generate new ideas, and support adaptive innovation.
Personalization: AI enables tailored experiences that adapt to user needs, a core advantage in both B2C and B2B environments.
Secondary benefits like convenience or emotional value also exist, but they rarely define the core opportunity.
Implementation steps
Source ideas: Gather ideas from users, tech trends, and internal insights.
Shape them: Use the AI System Blueprint to map feasibility.
Evaluate & prioritize: Assess impact, technical fit, and alignment with strategy.
Go with the learning curve: Progress from simple to transformative opportunities (often, this means progressing from left to right in the tree).
Visualize & revisit: Keep the tree updated and accessible to all stakeholders.
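The “evaluate & prioritize” step can be made tangible with a simple scoring sketch. The opportunities, scores, and weights below are invented for illustration; tune the weights to your own organization’s priorities.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    branch: str          # e.g. "automation", "personalization"
    impact: int          # business impact, 1-5
    feasibility: int     # technical fit, 1-5
    strategy_fit: int    # alignment with strategy, 1-5

def priority(o: Opportunity) -> float:
    # Weighted score; the weights are illustrative assumptions.
    return 0.4 * o.impact + 0.35 * o.feasibility + 0.25 * o.strategy_fit

backlog = [
    Opportunity("Invoice auto-processing", "automation",
                impact=4, feasibility=5, strategy_fit=3),
    Opportunity("Adaptive product recommendations", "personalization",
                impact=5, feasibility=3, strategy_fit=4),
]

# "Go with the learning curve": simpler, high-feasibility
# opportunities often rank first, which is exactly what you want.
ranked = sorted(backlog, key=priority, reverse=True)
for o in ranked:
    print(f"{o.name}: {priority(o):.2f}")
```

A spreadsheet works just as well; the point is to make the prioritization criteria explicit and comparable across opportunities.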
Anti-Patterns
Starting with “let’s use AI” rather than a clear problem (aka “AI for the sake of AI”)
Tackling abstract or overly complex issues first
Skipping user validation
Running disconnected AI initiatives without a roadmap
🔄 Mental Model #2: AI System Blueprint
I remember a kickoff meeting at a financial company. We aimed to build a chatbot that would allow investment managers to quickly access and analyze financial data à la Bloomberg & Co. Before the workshop, I asked each team member to sketch how they imagined the system. The results showed their different perspectives:
Engineers drew software architectures.
UX mapped user flows.
Data scientists drafted pipelines.
Compliance flagged guardrails.
Their questions also revealed misalignment:
Engineer: “We’ll call an LLM API to process user questions and turn them into SQL queries. How do we pick the best model for this task?”
Future users and UX: “Speed matters, but trust matters more. We need reliable answers. What happens when the AI gets something wrong?”
Data scientist: “That depends on how well we can train it. We’ll need conversation logs to fine-tune the model—but we don’t have any yet. How do we bootstrap the data?”
Compliance officer: “That’s a big one. If the chatbot ventures into giving investment advice, we're exposed. We’ll need guardrails. Should we block topics, add disclaimers, or force summarization instead of generation?”
Even my own priorities differed. Knowing how skeptical the users would be, I was mainly worried that the chatbot would lack domain depth and sound like an inexperienced intern talking to a seasoned investment professional.
Out of necessity, I quickly sketched a simple model to get everyone on the same page:
Before I had time to refine the sketch, it was already circulating in meetings with leadership and investors. It was both simple to grasp and informative, and all stakeholders could confidently use it to explain, discuss, and plan the case. The AI System Blueprint was born.
How it works
The blueprint divides an AI system into two spaces. The opportunity space defines what the AI system aims to accomplish:
Use case — The real-world scenario the system addresses.
Value — The concrete value the AI system creates for users and the business.
The solution space specifies how the opportunity will be realized through AI:
Data — The "fuel" for training, evaluating, and operating the system.
Intelligence — Models, compound architectures, or other AI components.
User experience (UX) — The channel for delivering AI value to users; it can be conversational, graphical, hybrid, etc.
Governance — Guardrails from regulatory, IT security, and compliance requirements.
All components are tightly interlinked, and neglecting one can weaken the whole system. For example, a lack of appropriate data cascades directly into the value component: the AI system fails to generate accurate or specialized outputs.
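To make the blueprint concrete, here is a minimal sketch of its six components as a structured checklist, filled in for the financial chatbot example. The class, field names, and example values are all illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemBlueprint:
    # Opportunity space: what the system aims to accomplish
    use_case: str
    value: str
    # Solution space: how the opportunity is realized
    data: list[str] = field(default_factory=list)
    intelligence: list[str] = field(default_factory=list)
    ux: list[str] = field(default_factory=list)
    governance: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        # Neglecting one component weakens the whole system,
        # so flag any solution-space component left empty.
        components = {"data": self.data, "intelligence": self.intelligence,
                      "ux": self.ux, "governance": self.governance}
        return [name for name, items in components.items() if not items]

chatbot = AISystemBlueprint(
    use_case="Investment managers query financial data in natural language",
    value="Faster, more reliable access to market insights",
    data=["financial databases", "conversation logs (to be collected)"],
    intelligence=["LLM for text-to-SQL", "retrieval over market data"],
    ux=["conversational interface", "source citations for trust"],
)

print(chatbot.gaps())  # governance is still empty: a flagged weak spot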
Implementation steps
Define system objectives: Start from your AI Opportunity Tree. What user problems or business outcomes is your AI system supposed to support? Define clear success criteria and impact goals.
Explore and design the solution space: Map all major components: data sources and pipelines, model architectures, user experience touchpoints, infrastructure needs, etc. For this, explore the full AI Solution Space Map.
Align your stakeholders: Use the blueprint as a communication tool. Make sure all team members and other stakeholders understand all the components.
Update throughout your iterations: Your AI system is a living object. As you iterate and learn more about the technology and your users, update the blueprint to maintain alignment.
Implementation tip: Print it, pin it, and reference it in every planning meeting.
Anti-Patterns
Solution-first thinking — Obsessing over models or architectures without grounding in real use cases.
Tech chasing — Focusing on the latest AI trends and models instead of a solid AI architecture.
Isolated design — Ignoring the dependencies and feedback loops between data, models, UX, and governance.
⏲️ Mental Model #3: Iterative Development Process
Often, AI projects start with uncertainty. You know there’s a road ahead, but the destination and the terrain are still unclear. Many critical variables aren't in place: the quality of your data, the right evaluation methods, or how much trust and AI proficiency your users will bring to the system. Still, the urge is there and you need to get going.
Especially with GenAI, you need to fasten your seatbelts before the first launch because you will inevitably hit "data shift." The controlled evaluation data and test assumptions rarely mirror the unpredictability of real-world user behavior. The insights that truly matter don’t come from internal testing but emerge once your system is in the wild, solving the live problems of your users.
The Iterative Development Process emphasizes the iteration and optimization phase after the early release of your baseline system.
How it works
The Iterative Development Process includes these core stages:
System definition — Select a real-world opportunity (cf. the AI Opportunity Tree) and specify the AI System Blueprint
Baseline setup — Prepare training data, select initial models, and set up a basic working architecture.
Evaluation — Define evaluation methods, metrics, and acceptance criteria. Run evaluations consistently and transparently.
Optimization — Improve the system through hyperparameter tuning, better data, architectural upgrades, and architecture-specific optimization methods.
Production — Monitor system performance, collect data for fine-tuning and evaluation, and learn about real-world user behavior.
If the model does not meet expectations at evaluation, you loop back into optimization and tweak again. Once the model performs consistently well, you move to production, where monitoring, data collection, and user feedback continue driving improvements.
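The evaluate–optimize loop above can be sketched as plain control flow. In this toy version, `evaluate` and `optimize` are stand-ins for your own evaluation harness and improvement steps, and the acceptance threshold is an invented example.

```python
def iterate_until_ready(system, evaluate, optimize,
                        acceptance_threshold, max_iterations=10):
    """Loop back into optimization until the system meets the
    acceptance criteria, then hand over to production."""
    for i in range(max_iterations):
        score = evaluate(system)
        if score >= acceptance_threshold:
            return system, score, i  # ready for production
        # Not good enough yet: better data, tuning, architecture upgrades
        system = optimize(system)
    raise RuntimeError("Acceptance criteria not met; revisit the blueprint")

# Toy example: each optimization pass bumps a mock quality score (in %).
system = {"quality_pct": 70}
evaluate = lambda s: s["quality_pct"]
optimize = lambda s: {"quality_pct": s["quality_pct"] + 5}

ready, score, iterations = iterate_until_ready(
    system, evaluate, optimize, acceptance_threshold=85)
print(score, iterations)  # reaches 85 after 3 optimization passes
```

In practice, the loop continues after production as well: monitoring and user feedback feed the next round of evaluation data.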
The value comes from short, fast loops. In our 3–9 month B2B projects, we typically aim to launch a baseline within weeks. Iterations can be anything from a couple of days to two weeks, with each iteration building certainty and reducing risk.
You might wonder: But what if users are turned off by a baseline system that’s too raw? That’s a valid concern, and the key lies in shipping early without alienating your users. Your system should pull users into the loop by providing value, and be flexible enough to evolve through their feedback. If you are interested in a future episode on the art of the AI launch, let me know in the comments!
How to use it
Pre-launch: Define success and agree on iteration rhythm.
During development: Use this model to structure feedback and improvements.
Post-launch: The real learning starts after production: from here, rinse and repeat.
Anti-patterns
Long feedback loops
Delayed first launch
Scope creep, adding too many features or optimizations per iteration
Believing production means you’re done
🔧 Mental Model #4: Domain Expertise Injection
Even large datasets don’t always encode the tacit knowledge experts hold. This is especially true in complex, nuanced domains like healthcare, finance, or sustainability. AI systems might look impressive to outsiders, but feel clueless to professionals in these areas.
Domain Expertise Injection ensures your system behaves like an astute insider.
How it works
This model follows the solution space in the AI System Blueprint. For each component, it shows the different methods to involve domain experts and pull their knowledge into the AI system:
Data: Experts define relevant sources, edge cases, annotation guidelines, and synthetic data needs.
Intelligence: They co-design prompts, embed domain logic, and structure knowledge (taxonomies, knowledge graphs).
UX: Experts guide how the system expresses uncertainty and integrates into real workflows.
The methods are provided as checklists. You will likely not implement all of them at once; start with 1–3 injection paths that give the biggest impact without overwhelming the team. Note that different methods suit different AI skill levels. Some can be carried out without AI expertise, such as selecting data sources where domain-specific knowledge is encoded. Others, like prompt engineering, require AI fluency.
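As a small illustration of one low-barrier injection path, experts can maintain a glossary of domain terms that is pulled into the system prompt so the model speaks the users’ language. The glossary entries and prompt wording below are invented for the example.

```python
# Expert-curated glossary: maintained by domain experts, not engineers.
# The entries here are invented examples for a finance use case.
EXPERT_GLOSSARY = {
    "NAV": "Net asset value: per-share value of a fund's assets "
           "minus its liabilities.",
    "duration": "Sensitivity of a bond's price to interest-rate changes.",
}

def build_system_prompt(glossary: dict[str, str]) -> str:
    # Inject the expert terminology into the system prompt so the
    # model uses it precisely instead of improvising definitions.
    terms = "\n".join(f"- {term}: {definition}"
                      for term, definition in glossary.items())
    return (
        "You are an assistant for investment professionals.\n"
        "Use the following expert-defined terminology precisely:\n"
        f"{terms}\n"
        "If a question asks for investment advice, decline and refer "
        "the user to a human advisor."
    )

prompt = build_system_prompt(EXPERT_GLOSSARY)
print(prompt)
```

Because the glossary lives outside the code, experts can update it through their own feedback loop without touching the model or the pipeline.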
Implementation steps
Map your system architecture with the AI System Blueprint: Lay out how your system handles data, intelligence, and UX—then identify weak spots where expert insight is missing or misrepresented.
Embed expertise at each layer: Use the checklists to choose practical injection methods across data, model logic, and user-facing layers.
Enable expert feedback loops: Establish lightweight channels for experts to flag issues, suggest improvements, and correct model behavior over time.
Validate outputs collaboratively: Involve domain experts in shaping acceptance criteria, reviewing edge cases, and stress-testing decisions.
Anti-Patterns
Assuming more data = smarter AI
Hiding where expert logic lives (black-box UX)
Hardcoding rules without testing their effects
Overengineering with complex constructs like ontologies, where simpler tools suffice
Consulting experts once, then freezing their input into static rules
For a detailed description of the different methods and a practical case study, check out this article: Injecting domain expertise into your AI system.
That was it for today’s sneak peek. If you would like to explore more:
Check out the full AI Strategy Playbook.
Read my book The Art of AI Product Development for detailed guidance on how to apply the models in specific technological scenarios.
Stay tuned for updates since we will be adding new mental models soon. If you are missing a model for a challenge you currently face, hit reply and get in touch!
Until then,
Janna
P.S. Know someone who’s stuck in AI chaos? Share this with them.