Module 5 · Leading AI-Native Teams

Training Staff Who Are Skeptical or Scared

Lesson 20 of 22 · 5 min read · Framework: Science Fair Model

How to bring along team members who are resistant to AI — without dismissing their concerns or forcing adoption.

What you'll cover
  • Why Good People Resist
  • Responding to Each
  • Practical Approaches
  • The Timeline for Buy-In


Some of your best people will be your hardest sells on AI. That’s not a problem to solve — it’s a dynamic to manage.

Why Good People Resist

Resistance to AI usually comes from one of four places:

Fear of replacement. “If AI can write grants, why do they need me?” This is the deepest fear and the one that requires the most direct response.

Quality concerns. “AI output isn’t good enough for the work we do.” This is often a legitimate observation that gets dismissed too quickly.

Ethical discomfort. “Using AI for grant writing feels dishonest.” This reflects real values about authenticity and earned expertise.

Change fatigue. “We just learned the new CRM. Now this?” People have limited capacity for change, and AI is just the latest demand.

Responding to Each

To the fear of replacement: Be honest. AI changes the composition of grant work, not the need for grant professionals. The research, writing, and administrative tasks that consume 60-70% of a grant writer’s time are where AI helps most. The strategy, relationships, judgment, and leadership that fill the rest? That’s entirely human. Grant professionals who learn AI become more valuable, not less.

To quality concerns: Agree with them. AI output isn’t good enough to submit without review. That’s exactly why you need experienced grant writers — to evaluate, refine, and improve AI drafts. Frame AI as a tool that produces raw material faster, not one that replaces the craft.

To ethical discomfort: Take this seriously. AI is increasingly going to be part of how we work — including in the spaces between people. As we covered in Lesson D2-04, the question isn’t whether AI is involved, but whether it’s sharpening the picture or distorting it. The principle here is that everyone needs to develop enough proficiency and understanding to tell the difference. Knowledge is power — the more your team understands how AI works, the better their ethical judgment becomes.

To change fatigue: This one is real, and it’s going to intensify. The pace of AI development isn’t slowing down. If someone on your team feels exhausted by constant change, trying to keep up with every new tool and development will only make it worse.

The people getting the most out of AI aren’t the ones constantly chasing the latest release. They’re the ones who understand the core concepts — how AI works, what it’s good at, where it breaks — and then build systems that harness these tools for repeatable, safe operations. First principles, then systems. That’s how the productivity compounds. You have to go slow to go fast.

Philip’s Take: I live in Costa Rica, and at the beaches here there are signs that teach you what to do if you get caught in a riptide. Most people’s instinct is to fight it — swim hard toward shore. They get exhausted and drown. The signs tell you: swim with the current until it dissipates. Then swim parallel to shore. Come back to the beach further along where the water is calm. AI change fatigue works the same way. If you’re fighting to stay on top of every development, you’ll exhaust yourself. Let go. Zoom out. Look at how AI is changing the landscape. Then come back in with clarity about what you actually want to achieve, and focus on that. Stop fighting the current. Find the calm water.


Practical Approaches

Pair skeptics with enthusiasts. Not to convert them, but to let them see AI in action with someone they trust. “Sit with Maria while she uses AI to draft the budget narrative and tell me what you think.”

Let them find the flaws. Skeptics are often your best reviewers. “Take this AI-generated needs statement and tear it apart. What’s wrong with it?” They’ll engage, find real problems, and in doing so, develop AI evaluation skills.

Start with their frustrations. Don’t talk about AI in the abstract. Ask what they hate doing. “What’s the most tedious part of your week?” Then show them — without pressure — how AI might help with that specific task.

Don’t require enthusiasm. Ask that people follow the team’s agreed-upon practices — run new tools past the evaluator, apply review controls that match the stakes. Don’t require anyone to love AI. Some people will come around over time. Others won’t. As long as they’re being thoughtful about it, that’s enough.

The Timeline for Buy-In

Realistic expectations:

  • Weeks 1-2: Skeptics participate reluctantly but find one or two useful things
  • Month 1: They’ve integrated one AI task into their routine
  • Month 3: They have opinions about what works and what doesn’t (this means they’re engaged)
  • Month 6: They’re sharing tips with newer staff

Not everyone follows this timeline. Some people take longer. A few never fully embrace AI. That’s okay — as long as they’re not actively undermining the team’s adoption.

Key Takeaways
  • Resistance usually comes from fear, quality concerns, ethics, or change fatigue; each needs a different response
  • Pair skeptics with enthusiasts; let skeptics find the flaws in AI output
  • Start with their specific frustrations, not abstract AI benefits
  • Require policy compliance but don't require enthusiasm; buy-in takes time
Next Lesson

Your team is using AI. But is it working? How do you measure whether AI adoption is actually making your organization more effective?

Have questions about this lesson?

Ask Grantable to explain concepts, suggest how they apply to your organization, or help you think through next steps.

© 2026 Grantable. All rights reserved.