Module 1 · The Case for AI in Grant Work

We Need the Most Mission-Driven People Competent With These Tools

Lesson 2 of 22 · 5 min read

Why the people who fund schools, shelters, and justice work are exactly the ones who should understand these tools.

What you'll cover
  • Who Should Be Good at This?
  • What Competence Actually Means
  • What Happens Without Understanding

Grant funding supports schools, shelters, clean water, scientific research, justice, conservation, and public health. In the U.S. alone, nonprofits represent about 5.3% of GDP, and when you add universities and research institutions, the grant-funded share of the economy grows significantly. Add SBIR-funded startups and government grant programs, and you’re looking at a substantial portion of the economy dedicated to public benefit.

The federal government is making massive investments in AI. The private sector is racing ahead. If the mission-driven sector — the organizations that exist to create impact rather than profit — abstains from this technology, it’s unrealistic to think it can maintain its role in society at the pace the world is moving.

Who Should Be Good at This?

AI is already being used widely — in marketing, consulting, government, academia, business. The tools are powerful, and they’re being shaped right now by whoever is using them.

If the most mission-driven people in society choose not to engage with AI, they don’t stop it from being used. They just leave the shaping of these tools to people who may have less concern for accuracy, equity, and impact.

That’s not an argument for rushing in. It’s an observation about what happens when thoughtful people step back from a powerful technology.

Philip’s Take: “We need the most mission-driven people in society to be competent with these tools.” Not enthusiastic about them. Not uncritical of them. Competent. Able to evaluate, to use responsibly, and to make good decisions about when AI is appropriate and when it isn’t.

What Competence Actually Means

Being competent with AI doesn’t mean being a technologist. But it does require understanding one fundamental thing: how this technology is different from the software you already know.

Traditional software is deterministic — it follows logic. If you enter the same inputs, you get the same outputs. A spreadsheet formula, a database query, a word processor. You tell it exactly what to do, and it does exactly that.

AI language models are different. They are prediction machines. They don’t look up facts or follow rules. They predict the most likely next words based on patterns learned from enormous amounts of text. This is why they can write fluently about almost anything — and also why they can state something completely false with total confidence.

That distinction — deterministic logic versus probabilistic prediction — is the difference between a horse and a car. You need to understand what kind of engine you’re working with before you can use it well or judge its output.
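The contrast can be made concrete with a toy sketch (illustrative only; the word list and probabilities are invented, not how a real language model is built). A deterministic function returns the same answer every time; a predictor samples from learned probabilities, so repeated calls can differ:

```python
import random

# Deterministic: the same inputs always produce the same output,
# like a spreadsheet formula.
def total_budget(salaries, overhead_rate):
    return sum(salaries) * (1 + overhead_rate)

# Probabilistic: a toy "next-word predictor" that samples from a
# hand-made probability table. Real models learn these patterns
# from enormous amounts of text; this table is purely illustrative.
NEXT_WORD_PROBS = {
    "grant": [("funding", 0.6), ("application", 0.3), ("deadline", 0.1)],
}

def predict_next_word(word):
    words, weights = zip(*NEXT_WORD_PROBS[word])
    # random.choices samples one word according to the weights,
    # so two calls with the same input may return different words.
    return random.choices(words, weights=weights, k=1)[0]

print(total_budget([50000, 60000], 0.25))  # always 137500.0
print(predict_next_word("grant"))          # varies from call to call
```

The point of the sketch is the asymmetry: you can verify the first function once and trust it forever, whereas the second must be judged output by output.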

Beyond that foundational understanding, competence means:

  • Knowing what AI can and can’t do — so you don’t over-rely on it or fear it unnecessarily
  • Recognizing when AI output is wrong — hallucinated statistics, biased framing, fabricated citations
  • Understanding the privacy implications — what data you’re sharing, with whom, and what they do with it
  • Making informed decisions about adoption — based on understanding, not hype or fear

This is about fluency, not expertise. Enough understanding to make good decisions — as a leader, as a team member, as someone responsible for work that matters.

What Happens Without Understanding

When organizations don’t build AI literacy, a few things tend to happen:

  • Staff use AI tools on their own, without guidance or guardrails
  • Leadership can’t evaluate vendor claims about AI capabilities
  • The organization can’t make informed decisions about when AI is appropriate for their work
  • Conversations about AI policy happen without enough knowledge to make them productive

This isn’t about falling behind competitors. It’s about being equipped to make good choices for your organization and the communities you serve.

Check your understanding

Your board is debating whether to invest in AI training. A member argues: “We should focus limited resources on programs, not technology.” What’s the most constructive response?

Key Takeaways
  • The mission-driven sector can't maintain its role in society by abstaining from a technology the rest of the world is adopting
  • The most important thing to understand: AI predicts rather than computes, making it a fundamentally different kind of tool from traditional software
  • Competence means judgment, not technical expertise: knowing what to trust, verify, and question
  • Without AI literacy, organizations can't make informed decisions about a technology that's already shaping their world
Next Lesson

You’re not alone in thinking through these questions. Let’s look at what your peers are already doing with AI — and what they’re learning.

© 2026 Grantable. All rights reserved.