Module 2 · Understanding AI Risks

Funder Perception — AI as Lens, Not Line

Lesson 8 of 22 · 5 min read

How AI changes (and doesn't change) the funder relationship, why funders are concerned, and why both sides need to adapt.

What you'll cover
  • The Ethics Haven't Changed
  • Why Funders Are Concerned Right Now
  • But Funders Need to Adapt Too
  • On AI Detection
  • The Disclosure Question


There’s a useful way to think about AI in the funder relationship: it’s like glass. Used well, it’s corrective lenses — it helps you and the funder see one another more clearly. Used poorly, it’s a funhouse mirror — it distorts everything. The outcome depends on how you use it, not on the technology itself.

The Ethics Haven’t Changed

Here’s something worth saying plainly: the ethical questions people raise about AI in grant writing are not new questions.

If you know a funder well, engage with them directly, and use AI to produce an outstanding, fully factual, deeply researched application — that can bring your relationship to a new level. The AI helped you see the funder more clearly, prepare more thoroughly, and present your organization’s work with precision. That’s corrective lenses.

If you let AI do all your prospecting, don’t read what it produces, and send applications to funders you’ve never researched — that’s a funhouse mirror. But this isn’t an AI problem. It’s a decision problem. If you hired a grant consulting firm and told them “I don’t care who it goes to, just spray out applications with my boilerplate,” that was an ethical problem in the 1990s. You can do it faster and cheaper with AI now, but the fundamental issue is the choice to operate that way.

AI amplifies whatever approach you already take. It doesn’t create new ethical categories. It scales the ones you’re already in.

Why Funders Are Concerned Right Now

That said, funders have a real and specific concern: they’re being inundated with applications.

AI has made it easier to prospect for funders and produce grant applications. The barrier to submitting has dropped. Some funders are seeing significantly higher application volumes, and they’re trying to figure out how to sort through them.

This is a legitimate operational challenge for funders. And it leads to a practical principle for you: don’t add to the problem. If AI helps you submit more applications, make sure those applications are well researched, well targeted, and genuinely aligned with each funder’s priorities. A higher volume of poor-quality, poorly matched applications hurts everyone — funders, other applicants, and ultimately your own reputation.

The goal isn’t to submit more. It’s to submit better.

But Funders Need to Adapt Too

Here’s the other side of this conversation — one that doesn’t get said enough.

For decades, the applicant community has implored funders to make their processes less onerous. The list of frustrations is long and well-documented:

  • Every funder has a different application portal
  • Funders ask the same questions that applicants have answered dozens of times before
  • Common application formats have been proposed and largely ignored
  • Many portals don’t show all questions upfront — you complete one page just to see the next
  • Full proposal downloads are often unavailable
  • Character limits are restrictive, portals sometimes don’t auto-save, and progress can be lost
  • Some funders require extensive applications for relatively small awards

Grant professionals have written about these issues for years. The application process is often far more onerous than it needs to be, and funders have done relatively little to streamline it.

Now, applicants finally have tools that can help manage this complexity. Funders can’t unilaterally deny applicants the advantage of using AI to navigate their own onerous processes — especially when they haven’t done enough to simplify those processes from their side. Both sides of this relationship are adapting to a world with AI tools. It can’t just be applicants who bear the burden of change.

On AI Detection

One more thing that applicants should know: AI text detectors do not work reliably.

This matters because some funders have discussed using detection tools to screen applications. But the reality is that current AI detection technology cannot reliably distinguish AI-generated text from human-written text. The models produce language that is statistically indistinguishable from human writing — especially when a human has reviewed and edited the output, which is what responsible AI use looks like in the first place.

Major academic publishers and institutions have already acknowledged this limitation. Funders will reach the same conclusion. The path forward isn’t detection — it’s evaluating applications on their merit: does this organization understand the work? Can they deliver? Is this a strong fit?

This should give you peace of mind. You’re not going to be “caught” using AI, because there’s nothing reliable to catch you with. What matters is the quality and integrity of what you submit.

The Disclosure Question

Should you tell funders you use AI? Some straightforward guidance:

If they ask, be honest. Some RFPs now include questions about AI use. Answer truthfully.

If they prohibit AI, respect it. It’s their funding, their rules. You can use AI to brainstorm and organize your thinking, then write the proposal yourself.

If they don’t ask, use your judgment. The test is: does the final proposal represent your organization’s genuine understanding, voice, and capabilities? If yes, the tool you used to get there matters less than the result. You don’t disclose every tool you use — Grammarly, Google, a colleague’s feedback. AI is part of your professional toolkit.

Philip’s Take: Think of AI as a lens. If you’re already doing the work — knowing your funders, understanding your programs, engaging authentically — then AI helps you do it with more precision and depth. The ethics are the same as they’ve always been. And funders need to adapt too. They’ve asked applicants to jump through hoops for decades. Now that applicants have tools to manage those hoops more efficiently, the conversation has to go both ways.

Key Takeaways
  • AI amplifies your existing approach; it doesn't create new ethical categories. The ethics of grant relationships predate AI.
  • Funders are concerned about application volume. The answer: submit better, not just more.
  • Funders also need to adapt. Decades of onerous processes can't be answered with "but you can't use AI to manage them."
  • AI text detectors don't work reliably. What matters is the quality and integrity of what you submit.
Next Lesson

You understand the risks: hallucination, data privacy, bias, and funder perception. One more piece: understanding the difference between safe and unsafe AI providers.

© 2026 Grantable. All rights reserved.