Dear AI friends,
If you’re reading this newsletter, chances are you’re a natural AI champion—curious and actively exploring how to use AI. However, in your company, the big looming AI transformation might not feel like an exciting wave of new opportunities. Rather, you struggle with misunderstandings, resistance from all sides, and clashing mentalities. You know, those moments when you stop and wonder, Wait, how did I end up here again?
💡 Insight: According to a 2024 BCG survey, 70% of AI adoption challenges in businesses stem from people. Only 10% stem from AI algorithms, and the remaining 20% are related to surrounding technology problems.
In this episode, I would like to share the story of Leila, a fellow AI champion who made her way through a series of people challenges, eventually putting her company on the right track for AI adoption. We will address the following points:
Selecting AI initiatives: Moving from talking to acting by picking the right AI initiatives at the start
Understanding AI mindsets: What’s going on in the heads of your people when they hear AI
Communication: Establishing AI as an inevitable technology, and empowering users with agency and control
Education: Ensuring your people have the skills to use AI and create value for your business
I hope this story will inspire you to move beyond the individual use of AI and turn AI into a positive transformational force for your business.
Start smart: Picking the right first bets
At AeroLogix, a logistics provider, leadership had spent a year on high-level talk about AI. Consultants had come and gone, leaving behind a pile of slide decks. Competitors were already moving—deploying predictive tools, optimizing routes, experimenting with AI agents. The board wanted to see action.
To their credit, the team didn’t rush into asking IT to implement a “revolutionary” chatbot. Instead, they took the time to explore more relevant directions:
Crew scheduling optimization
Predictive maintenance for ground equipment
Delay mitigation modeling
Demand forecasting for spare parts
The focus was on execution and progress, so they picked spare parts forecasting. It wasn’t glamorous, but the pain was visible to everyone—some planners constantly overstocked to avoid outages, wasting capital and storage space. Others understocked and caused delays. The performance gap cost the company millions, and a little bit of AI-driven optimization could improve outcomes significantly.
Digging deeper, more facts spoke in favor of the initiative. Data already existed in a reasonably clean and centralized form. The AI could integrate into existing workflows, and according to initial estimates, a working alpha version could be delivered in under 90 days.
Taken together, that made it a good first AI project that would allow AeroLogix to build initial experience and start shaping a broader AI strategy.
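To make the over- and understocking trade-off concrete, here is a minimal sketch in Python of the kind of calculation such a forecasting system builds on. The demand numbers, lead time, and service level are hypothetical, and a production system would replace the simple historical average with a learned demand forecast:

```python
import math
import statistics

# Hypothetical weekly demand history for one spare part (units).
demand_history = [42, 38, 51, 45, 39, 60, 47, 44, 52, 41, 48, 55]

lead_time_weeks = 2     # assumed supplier lead time
z_service_level = 1.65  # z-score for roughly a 95% service level

mean_demand = statistics.mean(demand_history)
std_demand = statistics.stdev(demand_history)

# Classic safety-stock formula: buffer against demand variability
# during the replenishment lead time.
safety_stock = z_service_level * std_demand * math.sqrt(lead_time_weeks)
reorder_point = mean_demand * lead_time_weeks + safety_stock

print(f"Mean weekly demand: {mean_demand:.1f}")
print(f"Safety stock:       {safety_stock:.1f}")
print(f"Reorder point:      {reorder_point:.1f}")
```

The better the demand forecast, the smaller the safety stock needed for the same service level, which is exactly where the AI-driven improvement comes in.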
🎯 Action plan: Choose AI initiatives that maximize learning and minimize friction
When selecting your first AI initiatives, look for ideas that:
Are easy to understand and explain
Deliver visible value to both users and leadership
Avoid major compliance or integration barriers
Let people experience AI as useful and controllable
Can be launched quickly and improved incrementally
For a handy framework to structure your AI ideation, check out our AI Opportunity Tree.
Mapping the human terrain
A cross-functional team convened to plan the project, and the participants revealed very different attitudes towards AI. According to Reid Hoffman, most people adopt one of four AI mindsets: Zoomers, Gloomers, Doomers, and Bloomers (cf. Hoffman’s book Superagency). Those dynamics also surfaced during the meeting.
Tom, representing IT, exuded enthusiasm:
“This is just ChatGPT for supply chain, right? Let’s just do it ourselves.”
Tom embodied the Zoomer mindset—eager to use AI, but also a bit naive. He underestimated the risks and the complexity that AI would bring. This could easily result in overly optimistic estimates and huge, often insurmountable, problems at the never-ending “last mile” of the project.
Andrea, a seasoned planner with two decades of experience, voiced her reservations:
“I’ve been forecasting for decades. Now, a machine should help me with that?”
Andrea's skepticism was rooted in a Gloomer perspective. She acknowledged AI's inevitability but was wary of its implications for job security and the value of human expertise.
Mark, from compliance, maintained a cautious stance. He attended meetings, raised pertinent questions about documentation and explainability, but refrained from taking a definitive position. While he never said so explicitly, Mark was a Doomer, concerned about AI's potential risks to humanity and advocating for stringent oversight.
Then there was Leila, a mid-level operations transformation manager. She wasn't the loudest voice, but the most informed and objective one. Leila had been quietly exploring AI, experimenting with prototypes to streamline her workflows. She asked insightful questions, skipped the technical jargon, and encouraged her colleagues to engage. Leila exemplified the Bloomer mindset—balancing optimism with caution, eager to use AI while looking for ways to mitigate its risks. She was a natural candidate to lead the project.
🧭 Navigating the AI mindsets
Let’s summarize the four AI mindsets you will likely encounter:
🛑 Doomers: View AI as a significant threat, advocating for strict controls or halting progress altogether.
☁️ Gloomers: Recognize AI's potential but focus on its risks, particularly concerning employment and societal impact.
⚡ Zoomers: Enthusiastic proponents of rapid AI adoption, sometimes at the expense of thorough risk assessment.
🌱 Bloomers: Maintain a balanced approach, embracing AI's possibilities while advocating for responsible implementation.
Recognizing and addressing these attitudes allows you to tailor your communication and training strategies, ensuring a more cohesive and effective AI adoption process.
Communicating about AI
Leila had carefully observed the team dynamics during the initial meetings and adapted her messaging accordingly.
A new baseline
Tired of discussing whether AI is even needed in their business, she started by resetting the baseline with a clear message:
“AI is coming either way. The real question isn’t whether our company uses it, but how you choose to engage with it.”
For some, this was a harsh wake-up call, but it created much-needed pressure, as the company was already lagging behind more decisive competitors. To address fears of job replacement, Leila said:
“Your job won’t be replaced by AI, but it will be replaced by someone who uses AI. Make sure that person is you.”
🧭 Takeaway: Three messages for constructive AI conversations
Three messages to decrease AI resistance and support adoption:
Inevitable: AI is coming either way—with or without you.
Empowering: AI won't replace you, but it will improve and elevate your work.
Tameable: Don’t fear it; master it and make it work for you.
Communicating with different stakeholders
In a second step, Leila focused on onboarding the relevant stakeholders. She tuned her messaging to the values and concerns of the different groups.
Let’s dive deeper into the characteristic attitudes of each group and how you can tailor your communication to address them.
Executives and investors
Executives and investors are crucial for establishing the right culture and allocating appropriate resources to AI (cf. also BCG’s article When Companies Struggle to Adopt AI, CEOs Must Step Up). Here, the conversation has to focus on ROI and strategic relevance. Often, they aren’t interested in technical details, but they need to understand how AI ties into revenue, growth, and competitive positioning. As Leila’s company was in cost-cutting mode, she framed AI as an amplifier that would save costs while enabling better decision-making:
“AI isn’t a cost center, but an intelligence amplifier. The companies that learn and adopt AI faster will cut costs and outpace those that don’t.”
Operators and users
Operators and users are the key to successful AI adoption. After all, the best AI system is useless unless it gets used. These were the people closest to the day-to-day impact of AI: planners, analysts, service agents, logistics coordinators. Their concerns weren’t always spoken, but they were deeply felt: Will this system replace me? Can I trust it? Will it make my job harder or easier?
Leila didn’t wait for these fears to surface and spread. She built her communication around them, proactively addressing both the technical impact and the emotional undercurrent. She kept it simple, direct, and human:
“How will this affect my job?” - “This isn’t a black box. You stay in control. Think of it as a co-pilot—you’re still in the driver’s seat.”
“What’s in it for me?” - “Less manual reporting, fewer repetitive checks, and faster access to the info you need—it’s here to make your day easier.”
“How does it actually work?” - “It learns patterns over time. I’ll show you exactly what it sees, how it learns, and where the limits are.”
(cf. our mental model on AI explanations)
“Can I shape or influence it?” - “Absolutely. You can adjust thresholds, give feedback, and help us improve how the system works in your real-world context.”
By meeting users where they were, she gradually turned uncertainty into curiosity and engagement and made AI feel not imposed, but empowering.
IT and infrastructure
For IT and infrastructure teams, the conversation had to focus on stability, integration, and long-term sustainability. These were the people who would carry the system in production when the spotlight faded and the real work began.
But Leila also had to address a deeper challenge: the assumption that IT could do it all themselves. To the engineers, AI looked like just another software deployment—some data, some APIs, a few model endpoints. Some were genuinely excited about the new project, while others saw AI as a fast track to career relevance.
They clearly underestimated the paradigm shift brought about by AI. Instead of the usual deterministic workflows where everything can be hard-coded, AI came with uncertainty and a license to make mistakes regularly. If those mistakes were not properly handled, they could easily cascade into harmful decisions and actions in the real world. For a real-life story of an IT team that approached AI as just another development project, check out Chapter 1 (publicly available) of my book The Art of AI Product Development.
Leila addressed the exaggerated confidence of the team:
“AI isn’t business-as-usual. We are building a system that makes risky predictions under uncertainty. To get it right, we need specialized AI expertise, data science rigor, and deep domain alignment. Let’s honestly assess how much of this we bring to the table internally.”
While some of the skills were already available in-house, it was clear that there were gaps. In the end, they agreed on a smart partnering approach with an external provider that would not only contribute specialized AI expertise, but also transfer some of their know-how to the company.
Governance and risk teams
Finally, when working with governance and risk teams, the message had to be framed around safeguards, auditability, and responsibility. These stakeholders weren’t concerned with user delight or forecast accuracy. They were thinking about regulatory scrutiny, reputational risk, and ethical exposure. Trust for this group came from rigor and transparency—not promises.
“We’re not just building models,” she told the head of compliance. “We’re building systems you can explain to auditors and regulators.”
She involved them early, invited them into design decisions, and gave them clear levers for escalation and oversight. The message wasn’t that AI was safe by default, but that it had been designed with safety and explainability in mind.
In each case, Leila didn’t change the essence of the project, but she changed the lens through which people viewed it.
Education: Building AI fluency
During her conversations, Leila realized that AI was surrounded by noise—misconceptions, hype, and half-truths. Especially among the Zoomers, enthusiasm outpaced understanding. Their loud but often misguided ideas spread fast, increasing uncertainty and confusion. To build competence and confidence, Leila worked with leadership on a simple but effective learning framework.
Step 1: Assessing AI skills
Leila introduced a company-wide skills assessment to group employees into three levels:
AI users formed the foundation. They could confidently integrate AI into their workflows—prompt effectively, interpret outputs, and know when to trust or question results. But the assessment showed that half the company wasn’t there yet. That gap was a drag on momentum and motivation. Before doing anything ambitious, the baseline had to rise.
AI co-creators were already relying on AI in their day-to-day work. They understood its patterns and limits, experimented thoughtfully, and brought role- and domain-specific insights into the loop. They could act as the bridge between teams and tech, embedding their domain expertise into the AI systems.
AI strategists operated at a systems level, viewing AI from the organizational rather than the individual perspective. They were able to identify opportunities, align AI initiatives with business goals, and guide long-term direction. Leila was one of them, but a company-wide transformation needed more.
This assessment provided a clear view of the status quo. It also showed how individual employees could be involved in AI initiatives, and it could be used to monitor AI proficiency over time.
Step 2: Upskilling everyone into AI users
A mandatory, self-paced foundational course was rolled out to all employees below the user threshold. The content focused on practical, hands-on skills:
What AI is (and isn’t)
How to use standard tools like ChatGPT or Perplexity effectively
How to address the uncertainties and mistakes of AI (cf. my article Building and calibrating trust in AI)
Soon, something shifted. AI showed up in meetings, coffee chats, and project briefs. It was no longer a cool but abstract buzzword. People were swapping prompts, sharing shortcuts, and starting to speak a common language.
Step 3: Embedding AI into everyday work
Next came department-level deep dives. These were led by AI champions and co-creators in the respective departments and focused on specific use cases, for example:
Ops teams refined AI forecasting models
Marketing explored content personalization
HR tested AI-powered hiring flows
Often, specialized use cases emerged from prompts that users repeated over time. For example, one marketer had collected a library of prompt components he used to refine content. Like Leila, he was among the first AI champions in the company, and his library was used to build a convenient tool for the rest of the department.
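For illustration, here is a minimal sketch of what such a prompt-component library might look like in Python; the component names and instruction texts are hypothetical, not taken from the actual tool:

```python
# A minimal sketch of a prompt-component library. Components are
# reusable instruction snippets that users found themselves repeating.
PROMPT_COMPONENTS = {
    "tone": "Rewrite the following text in a friendly, confident tone.",
    "shorten": "Condense the text to at most 100 words without losing key facts.",
    "cta": "End with a clear call to action for the reader.",
}

def build_prompt(text: str, component_keys: list[str]) -> str:
    """Compose selected instruction snippets with the text to refine."""
    instructions = "\n".join(PROMPT_COMPONENTS[k] for k in component_keys)
    return f"{instructions}\n\n---\n{text}"

# Usage: combine the components for a typical content-refinement task,
# then paste the result into a chat tool or send it via an API client.
prompt = build_prompt("Our new route planner cuts delays...", ["tone", "shorten", "cta"])
print(prompt)
```

Wrapping such a library in a simple interface is often enough to turn one power user's habits into a shared departmental tool.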
Step 4: Growing through partnerships
Leila knew internal talent could only go so far. To build traction and depth, the company brought in external AI partners who would support implementation and learning. These experts co-developed use cases, mentored internal teams, and modeled best practices. The “partner-and-grow” strategy also gave space for emerging strategists to stretch their thinking and connect the dots.
Step 5: Making learning social and continuous
Finally, Leila helped establish a culture of continuous learning. Instead of centralized training, teams shared what they were discovering:
Prompting parties and model jam sessions
“My Worst AI Fail” talks
Internal demos and how-to channels
These lightweight rituals normalized experimentation, lowered the stakes, and helped talent grow organically across the three levels of AI proficiency.
🎯 Action Framework: Building a scalable learning culture
Start with a core course – Ground everyone in AI basics, including prompting, trust calibration, and tool awareness.
Design department-specific workshops – Use real workflows and real data. Avoid abstract lectures.
Create social learning rituals – Prompting parties, model jams, and fail-sharing sessions build community and confidence.
Tie learning to ownership – Involve users in feedback loops. Recognize improvements driven by their input.
You are at the interface
Of course, all that people work ran alongside the technical build—data prep, co-creation with domain experts, and a few healthy debates with data science. A launch and a couple of iterations later, the model beat manual forecasts by 15%.
That was amazing progress, and leadership and the board were happy. But for Leila, the real wins were elsewhere:
A veteran planner showing new hires how to read confidence bands
A compliance officer drafting an AI governance playbook (and enjoying it)
An analyst spinning up a supplier co-pilot with a no-code tool
People weren’t just adopting AI, but also shaping it for their needs. They explored, adapted, and co-created, turning uncertainty into agency and control. In the end, the most powerful interface in any AI system isn’t the model or the dashboard. It’s the human user who gets curious, experiments in the open, challenges the system when needed, and brings others along.
That’s it for today. If you would like a deep dive into working with different stakeholders throughout your AI projects, check out chapter 12 (Working with stakeholders) of my book The Art of AI Product Development.
Keep in mind: AI will change your company either way. The real question is when and how that will happen, and how you will be involved in that shift.
Best wishes
Janna