Measuring AI Impact — What to Track
How to measure whether AI adoption is making your grant team more effective — the metrics that matter and the ones that don't.
- What to Measure
- What Not to Measure
- A Simple Tracking Approach
- Reporting to Leadership
Measuring AI Impact
“AI is saving us time” is a feeling, not a measurement. To sustain AI adoption — especially when justifying it to a board or executive director — you need data.
What to Measure
1. Time per deliverable. How long does it take to produce a first draft of a proposal section? A budget narrative? An LOI? Track this before and after AI adoption. Not with stopwatches — just rough estimates. “This used to take me a full day. Now it takes half a day.”
Time per deliverable is your most compelling metric. If your team produces the same quality work in less time, everything else follows.
2. Proposal volume. How many applications does your team submit per quarter? If AI frees up capacity, this number should increase — assuming there are enough good opportunities to pursue.
Be careful with this metric. More proposals isn’t automatically better. Submitting more low-quality applications to poorly-matched funders is worse than submitting fewer strong ones. Track volume alongside quality indicators.
3. Pipeline coverage. How many funders are in your active research pipeline? AI-assisted prospecting should expand your awareness of potential funders. Track how many prospects you’re evaluating and how many make it to the application stage.
4. Review catch rate. How often does human review catch AI errors before they go external? This is a health metric. A high catch rate means your review process works. A zero catch rate is suspicious — it means either that AI is producing perfect output (unlikely) or that review isn’t thorough enough.
5. Team confidence. Survey your team quarterly. “How confident are you using AI in your work?” “Do you feel you have the skills to evaluate AI output?” Confidence correlates with effective use.
What Not to Measure
Win rate. It’s tempting to track whether AI improves your grant win rate. Resist this. Win rates depend on too many factors — funder priorities, competition, budget cycles, relationship quality — to attribute changes to AI use alone. A drop in win rate while submitting 50% more proposals might actually represent net gains.
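To see why more submissions can outweigh a lower rate, run the arithmetic. A quick back-of-the-envelope sketch — the baseline of 20 proposals per period is a made-up figure for illustration, not a number from this lesson:

```python
def expected_wins(proposals_submitted: int, win_rate: float) -> float:
    """Expected number of funded proposals for a given volume and win rate."""
    return proposals_submitted * win_rate

# Hypothetical baseline: 20 proposals per period at a 35% win rate.
before = expected_wins(20, 0.35)            # 7.0 expected wins

# After AI adoption: 50% more proposals, win rate slips to 30%.
after = expected_wins(int(20 * 1.5), 0.30)  # 30 proposals -> 9.0 expected wins

print(f"Before: {before:.1f} wins, After: {after:.1f} wins")
# Before: 7.0 wins, After: 9.0 wins -- more funded work despite the lower rate
```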
AI usage frequency. How often people use AI is not a meaningful metric. Someone who uses AI once per week for the right task is more effective than someone who uses it daily for tasks that don’t benefit from it.
Cost savings from “replaced” work. Don’t frame AI as reducing headcount. It almost never does in practice — it shifts work composition. Grant professionals spend less time on mechanical tasks and more on strategic ones. The headcount stays the same; the output changes.
A Simple Tracking Approach
You don’t need a dashboard. A quarterly check-in is enough if it captures:
- Average time per major deliverable (rough estimates from each team member)
- Number of proposals submitted this quarter vs. same quarter last year
- Number of funders in active research pipeline
- Number of AI errors caught during review (rough count)
- Team confidence score (1-5 scale, anonymous)
Track these over time. The trends matter more than any single data point.
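If your team prefers a plain script over a shared spreadsheet, the whole check-in fits in a few lines. A minimal sketch in Python — every field name and sample figure below is illustrative, not prescribed by this lesson:

```python
from dataclasses import dataclass

@dataclass
class QuarterlyCheckIn:
    quarter: str                      # e.g. "2024-Q3"
    avg_hours_per_deliverable: float  # rough team estimate, not stopwatch data
    proposals_submitted: int
    funders_in_pipeline: int
    ai_errors_caught: int             # rough count from review
    team_confidence: float            # mean of anonymous 1-5 survey scores

# Illustrative entries -- the trend across check-ins is what matters.
history = [
    QuarterlyCheckIn("2024-Q3", 8.0, 5, 12, 4, 3.1),
    QuarterlyCheckIn("2024-Q4", 6.5, 6, 15, 6, 3.6),
    QuarterlyCheckIn("2025-Q1", 5.0, 8, 19, 5, 4.0),
]

for prev, curr in zip(history, history[1:]):
    faster = (
        (prev.avg_hours_per_deliverable - curr.avg_hours_per_deliverable)
        / prev.avg_hours_per_deliverable
    )
    print(f"{curr.quarter}: drafts {faster:.0%} faster than {prev.quarter}, "
          f"{curr.proposals_submitted} proposals, "
          f"confidence {curr.team_confidence}")
```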
Reporting to Leadership
When presenting AI impact to your board or ED:
- Lead with time savings. “Our team produces first drafts 40% faster, which means we submitted 8 proposals this quarter instead of 5.” (The sketch after this list shows how figures like these fall out of rough estimates.)
- Acknowledge what hasn’t changed. “Win rates are similar, which we expected — the quality of our applications hasn’t changed, we’re just producing more of them.”
- Share a concrete example. “Here’s a before-and-after: this budget narrative used to take a full day. With AI drafting and human review, it took four hours.”
- Include the investment. “We’re spending $X/month on AI tools. Here’s what we’re getting for that.”
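The headline numbers in those talking points come straight from the rough estimates you are already collecting. A small sketch of the arithmetic, using made-up before/after figures and a placeholder tool cost:

```python
def percent_faster(hours_before: float, hours_after: float) -> float:
    """Fraction of drafting time saved, from rough before/after estimates."""
    return (hours_before - hours_after) / hours_before

# The budget-narrative example from above: a full day down to four hours.
print(f"Budget narrative: {percent_faster(8, 4):.0%} faster")  # 50% faster

# Monthly tool cost per submitted proposal (both figures are placeholders).
monthly_tool_cost = 100.00
proposals_per_month = 3
print(f"Tool cost per proposal: ${monthly_tool_cost / proposals_per_month:.2f}")
```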
Knowledge Check
After 6 months of AI adoption, your win rate dropped from 35% to 30%, but your team submitted 40% more proposals. How should you interpret this?
Key Takeaways
- Track time per deliverable, proposal volume, pipeline coverage, review catch rate, and team confidence
- Don't attribute win rate changes to AI; too many factors confound them
- A simple quarterly check-in is enough; trends matter more than any single measurement
- Lead with time savings and concrete examples when reporting to leadership
Everything is going well — until it isn’t. The final lesson covers what to do when AI causes a problem: the incident response playbook.