Getting Started with AI in Grants · Article 2 of 5 · 8 min read

The Science Fair Model: How to Roll Out AI When Half Your Team Is Terrified


The staff meeting that breaks your heart a little

You've just finished demoing an AI tool for your grant team. You showed them how it drafted a needs statement in two minutes flat. You pulled up the org profile integration, the source citations, the voice controls. You were prepared. You were excited.

And now you're watching the room split in half.

On your left: two program managers leaning forward, already scheming about how to use this for their federal reports. On your right: your senior grant writer with her arms crossed, your finance director looking at his phone, and your operations lead staring at you like you just suggested replacing the entire team with a robot.

This is the moment most AI rollouts die. Not from a board veto or a budget shortfall — from the gap between the people who can't wait and the people who can't even. You push too hard, you lose the skeptics forever. You wait for consensus, you lose the enthusiasts to frustration. The tool sits unused. Six months later someone brings it up again and gets the same split room.

I've watched this play out at dozens of organizations. And I've seen exactly one approach that consistently works. I call it the Science Fair Model.

Why comfort spectrums are normal (and permanent)

First, let's kill the fantasy that you're going to get everyone on the same page about AI. You're not. You never will. And that's fine.

Every team has a comfort spectrum. On one end, you've got the early adopters — the people who were using ChatGPT before you even heard of it, who have opinions about different language models, who probably have a personal workflow they're already running on the side. On the other end, you've got people who are genuinely afraid. Not resistant. Afraid. They're worried about their jobs, worried about their craft, worried about making a mistake with a tool they don't understand.

Most of your team falls somewhere in the middle: cautiously curious. They'd try it if someone showed them how. They'd use it if they weren't worried about looking stupid. They're waiting for permission and a path.

The mistake most leaders make is designing the rollout for one end of the spectrum. They either cater to the enthusiasts ("Here's the tool, go nuts, figure it out") or cater to the skeptics ("Let's form a committee and study this for six months"). Both approaches fail because they ignore half the room.

The Science Fair Model works because it meets everyone where they are.

The Science Fair Model

I borrowed this model from watching how the best team adoptions actually happen — not from vendor playbooks or change management textbooks, but from real nonprofit teams figuring it out. The structure is dead simple, and it works because it turns abstract fear into concrete experience.

The Science Fair Model — 5 Steps

  1. Gather small problems. Send a two-question survey to your team: "What task eats the most time in your week?" and "What task do you dread the most?" You're looking for bounded, repetitive pain points — not moonshot ideas. Things like: "Reformatting the same program description for twelve different funders." "Pulling outcome data for quarterly board reports." "Writing the same cover letter introduction forty times a year."
  2. Pick three to five experiments. Choose problems that are low-stakes, representative of different roles, and solvable in a week. Pair each problem with a team of one to three people — and critically, mix comfort levels. Put a skeptic with an enthusiast. Let them balance each other.
  3. Give them one week and one approved tool. Not a free chatbot someone found on the internet. A paid, organization-sanctioned platform with proper security and data controls. Set one rule: every output gets human review before it goes anywhere. Beyond that, let teams experiment however they want.
  4. Present results — science fair style. At the end of the week, each team gets ten minutes to present: What did you try? What worked? What flopped? What surprised you? Keep it informal. Posters optional but encouraged. The goal is shared learning, not performance review.
  5. Decide together, with data. Now your team has real evidence from your own organization. Not a vendor demo. Not a conference keynote. Actual results from actual staff solving actual problems. The conversation shifts from "Should we use AI?" to "Here's what we learned — what do we want to do next?"

The magic is in step four. When the skeptical senior grant writer stands up and says "I didn't think this would work, but it cut my LOI reformatting time in half and the quality was actually decent" — that's worth more than any pitch deck you could build. Peer evidence beats executive mandate every single time.

The Low-to-High Risk Ladder

Once your team has run the science fair, you need a framework for deciding what to tackle next. Not everything is equally risky, and treating all AI use cases the same is how organizations either move too fast or stay stuck.

Here's the ladder I recommend, from lowest risk to highest:

Rung 1: Editing and polishing. Take something your team already wrote and use AI to tighten it, check for clarity, fix passive voice, or adjust tone. This is the lowest-risk entry point because the human wrote the original content. AI is just the copy editor. Almost nobody has a problem with this once they try it.

Rung 2: Research synthesis. Use AI to summarize long documents, pull out key statistics from reports, or compare funder guidelines side by side. The AI is organizing information your team already has access to. The risk is low because you're condensing existing material, not generating original claims.

Rung 3: Funder prospecting. Use AI to analyze funder databases, identify alignment between your programs and foundation priorities, or draft initial prospect profiles. The stakes go up slightly because you're making strategic decisions based on AI analysis — but a human still reviews and validates before anyone picks up the phone.

Rung 4: First-draft generation. This is where AI writes the first version of a narrative, a letter of inquiry, or a proposal section. The risk is higher because you're starting from AI output rather than editing human output. It requires strong voice controls and a rigorous editorial process. But for teams that have climbed the first three rungs, this feels like a natural next step rather than a scary leap.

Rung 5: Compliance checklists and verification. Using AI to cross-reference your proposal against funder requirements, check for internal consistency, or flag missing components. This sounds low-risk, but it's actually the riskiest rung, because there's a temptation to trust the AI's judgment without verification. A missed compliance item that a human would have caught is worse than no AI at all. This rung requires the most oversight.

The ladder gives your team a shared vocabulary. Instead of "Are we using AI for grants?" — which is too vague to be useful — you can say "We're comfortable through Rung 3. Let's pilot Rung 4 next quarter." That's a conversation everyone can participate in, regardless of where they fall on the comfort spectrum.

Don't build rigid processes. AI is eating process for breakfast. The organizations that write 30-page rollout plans are still on page 12 when the technology has already moved twice. Give your team a ladder, not a rulebook.

How to handle the extremes

Every team has at least one person at each pole, and they need different things from you.

The enthusiast who's already running ahead. This person has been using AI for months. They have strong opinions. They might be a little evangelical about it. Your job isn't to slow them down — it's to channel them. Make them a team lead in the science fair. Let them mentor a skeptic. Give them the hardest problem to solve. Their energy is an asset, but only if it's in service of the group, not running ahead of it.

The person who's genuinely afraid. This is usually someone who's been in their role for a long time, who takes deep pride in their craft, and who hears "AI" as "you're about to be replaced." No amount of data will fix that fear. What fixes it is experience — specifically, the experience of using AI and discovering that their expertise is what makes it work. Pair them with a patient partner. Give them an editing task (Rung 1 on the ladder). Let them see that AI without their judgment produces mediocre output, but AI with their judgment produces something better than either alone.

Be careful, not never. I say this constantly because the opposite of reckless adoption isn't abstinence — it's thoughtful experimentation. The organizations banning AI aren't eliminating risk. They're pushing it underground, where staff use free tools on personal devices with zero guardrails.

Where Grantable fits in the experiment

Collaborative Editing

The Science Fair Model works best when teams can experiment together in the same workspace. Grantable's collaborative editing means your whole team can try AI-assisted drafting, editing, and review without worrying about per-seat costs or license juggling. Bring the skeptic and the enthusiast into the same document. Let them see each other's process in real time. That shared experience is worth more than any training session.

Content Library

When your science fair teams start producing results — a prompt that generates great needs statements, a workflow that cuts reporting time, a reformatted logic model that actually works — those discoveries need to live somewhere everyone can access them. Grantable's Content Library becomes your team's shared knowledge base. The lessons from week one become the starting point for week two. Institutional learning compounds instead of evaporating.

The Monday morning version

Here's your week, start to finish:

Monday: Send your team the two-question survey. "What task eats the most time?" and "What task do you dread?" Give them until Wednesday to respond.

Wednesday: Review the responses. Pick three to five bounded problems. Assign teams of two to three people, mixing comfort levels intentionally.

Thursday: Set everyone up on a single, approved AI platform. Run a 30-minute orientation — not a training, just "here's how to log in, here's the one rule (human review on everything), go explore."

The following Thursday: Science fair presentations. Ten minutes per team. Celebrate what worked. Laugh about what didn't. Document everything in a shared space.

The Friday after that: Leadership debrief. What did you learn? What's the next experiment? Where are you on the risk ladder?

Two weeks. That's all it takes to go from a split room to a team with shared experience and real data. You won't convert every skeptic — that's not the goal. The goal is to give everyone enough firsthand evidence to have an honest conversation about what comes next.

The organizations that figure this out don't just adopt AI faster. They build something more valuable: a team that knows how to learn together, experiment safely, and adapt when the ground shifts. In the nonprofit sector, that capability is rarer — and more important — than any single tool.