
Your AI Policy Doesn't Need to Be 20 Pages. Here's What Actually Matters.

The scene I keep walking into

It's the Q&A portion of a conference session. Someone raises their hand — usually a director of programs or an ops lead — and says something like:

"Our staff are using ChatGPT. Some of them are using it well. Some of them are pasting donor SSNs into it. We have no guidelines. Where do we even start?"

And then, before I can answer, the person next to them chimes in: "We started an AI governance committee eight months ago. We've met six times. We still don't have a policy."

Two organizations. Same problem from opposite ends. One has no guardrails. The other has so many guardrails they built a fence around the parking lot and now nobody can get to work.

The governance trap

Here's my honest take on why this keeps happening: the nonprofit sector has a governance reflex. When something new and slightly scary shows up, the instinct is to form a committee, hire a consultant, write a framework, circulate it for review, revise it, circulate it again, get board approval, and then roll it out with a mandatory training.

That process takes 18 months on a good day. Three years if you're being thorough.

Meanwhile, AI moves in six-month cycles. The policy you spent two years writing? It's already outdated. The model it references has been deprecated. The risks it addresses have shifted. The capabilities it restricts have become table stakes.

Don't build rigid processes. AI is eating process for breakfast. By the time your 20-page policy gets its final signature, the technology it governs will have changed three times over.

I'm not saying governance doesn't matter. I'm saying the format is wrong. You don't need a constitution. You need a field manual.

What's actually going wrong without a policy

Before we talk about what to build, let's name the real risks. Because they're not the ones most people expect.

The biggest risk isn't that AI will write a bad grant. The biggest risk is inconsistency. One program officer is using Claude with careful prompting and reviewing every word. Another is copy-pasting ChatGPT output directly into a LOI without reading it. A third is avoiding AI entirely and falling behind on deadlines.

That's three different quality standards, three different risk profiles, and zero shared understanding of what's acceptable. Multiply that across a team of 15 and you've got chaos wearing a professional smile.

The second biggest risk is data exposure. Not malicious data exposure — accidental. Someone pastes a program participant's intake form into a free AI tool because they need help writing a case study. They didn't mean to share protected information. They just didn't know they shouldn't.

The third risk is voice drift. When everyone uses AI independently with no shared source materials, your organization starts sounding like five different organizations. Your proposal to the community foundation reads nothing like your federal application, which reads nothing like your annual report. Funders notice.

The MVP policy

Here's what I tell every nonprofit that asks me where to start. Your minimum viable AI policy has four components. Four. You can write it on an index card.

The Four-Line AI Policy

  1. Use a safe, paid provider. Free consumer tiers of AI tools often train on your inputs by default. Paid business and enterprise tiers typically don't. This is the single highest-leverage decision you can make. If your team is using the free version of ChatGPT, you are donating your organizational data to OpenAI's training pipeline. Stop. Get a paid plan or use a platform built for professional use.
  2. Never input PII, donor SSNs, or client-identifying health data. Full stop. No exceptions. No "but I anonymized it." No "but I only included first names." If it could identify a real person, it doesn't go into an AI tool. Period.
  3. Always review AI output before it leaves your desk. AI is a first draft machine, not a final draft machine. Every piece of AI-assisted content gets human eyes before it goes anywhere — to a funder, to a board member, to a partner, to the public. You are the editor. The AI is the intern.
  4. Share what works with teammates. When someone figures out a prompt that generates great needs statements, or a workflow that cuts reporting time in half, they share it. AI gets better when teams learn together. Hoarding tricks helps nobody.

That's it. Print it. Post it in the break room. Revisit it quarterly. You now have an AI policy.

"But Philip, what about —"

No. Start here. You can add nuance later. The enemy of good enough is a 20-page PDF that lives in a SharePoint folder nobody can find.

Why "be careful" beats "never"

Be careful, not never. The organizations that ban AI outright don't eliminate risk — they just push it underground. Your staff will use it anyway. They'll just use it on their personal phones, with their personal accounts, with zero guardrails at all.

A "never" policy is a fantasy. A "be careful" policy is a strategy. It acknowledges reality, sets boundaries, and creates space for your team to do better work.

I've watched organizations try the ban approach. Within six months, every single one had staff using AI tools secretly. The ban didn't prevent risk. It prevented visibility into risk. That's worse.

The better move is to make the safe path the easy path. Give people approved tools. Show them what good usage looks like. Make it simpler to do the right thing than the wrong thing.

The tools that make policy simple

Here's something I've learned from building Grantable and working with hundreds of grant teams: the best AI policy is the one your tools enforce for you.

Think about it. You can write a policy that says "always use approved organizational data." Or you can give your team a platform where the AI already has your approved organizational data loaded, so using unapproved data requires extra effort.

Organization Profile

When your mission, programs, outcomes data, and organizational details live in a centralized profile, the AI draws from that — not from hallucinated facts or outdated boilerplate someone found in a drawer. The guardrail isn't a rule your team has to remember. It's the default behavior of the tool.

Same principle with source materials. You can write a policy that says "only reference approved statistics and program descriptions." Or you can build a shared library where approved content lives and the AI pulls from it automatically.

Content Library

When your team uploads approved narratives, data points, and supporting documents into a shared library, that library becomes your quality control. New hires don't need to guess which version of the logic model is current. The AI uses what's in the library. The library IS the policy.

This is the shift I want nonprofit leaders to internalize: policy enforcement through tool design is always stronger than policy enforcement through memos. A rule people have to remember will be forgotten. A default built into the platform won't be.

Built-in Guardrails

Grantable's platform has data controls, voice consistency, and source attribution built into the workflow. You don't need a ten-page addendum about "responsible AI use in grant development" when the tool itself ensures responsible use by design.

The quarterly check-in (not the annual overhaul)

Your AI policy should be a living document. Not in the corporate sense where "living document" means "we update it once and then forget." Actually living. As in: revisit it every quarter.

Here's what a quarterly AI policy check-in looks like:

15 minutes. Four questions.

One: Has anyone on the team run into a situation the policy doesn't cover? If yes, add a line. Not a paragraph. A line.

Two: Are we using any new AI tools since last quarter? If yes, make sure they meet criterion #1 (safe, paid provider).

Three: Has anyone discovered a workflow worth sharing? If yes, document it in your shared library and tell the team.

Four: Is anything in the current policy no longer relevant? If yes, delete it. Policies should get shorter over time, not longer, as your tools get smarter.

That's it. Fifteen minutes, four times a year. You will be more current than organizations that spent $40,000 on a consultant-led governance framework.

What the big policy actually signals

I want to be honest about something. When I see a 20-page AI policy, I don't see an organization that's careful. I see an organization that's scared.

Length is not rigor. Length is usually a symptom of not knowing what matters, so you cover everything and hope something sticks. It's the AI equivalent of writing a grant narrative that's twice the word limit — it doesn't show thoroughness, it shows you couldn't prioritize.

The organizations doing AI well? Their policies are short. Their tools are good. Their teams talk to each other. They iterate fast and learn from mistakes instead of trying to prevent all mistakes through documentation.

Your Monday morning move

Here's what I want you to do this week. Not this quarter. Not after the next board meeting. This week.

Step one: Write the four-line policy. Copy it from this article if you want. Customize the language to fit your org. Put it somewhere everyone can see — a shared doc, a Slack channel, a poster on the wall. Anywhere but a policy manual.

Step two: Ask your team one question: "What are you using AI for right now?" Don't judge the answers. Just listen. You need to know the baseline before you can improve it.

Step three: Pick one approved tool and make it available to everyone. Not five tools. One. Lower the barrier. Make the safe path easy.

Step four: Put a 15-minute AI check-in on the calendar for 90 days from now. Set it and forget it until then.

You now have an AI policy, a baseline assessment, an approved tool, and a review cycle. That puts you ahead of 90% of nonprofits. It took you an afternoon.

Don't let the perfect policy be the enemy of the working one. Start small. Start now. Revisit quarterly. That's the whole strategy.