Putting It Together — Policy as a Living Practice
Why the best AI policy is a culture of learning, not a document — and how to build one that keeps pace with the technology.
- The Speed Problem
- Apply What You Already Have
- What a Good AI Practice Looks Like
- If You Need a Formal Document
- A Starting Template
- Common Questions
You have the four components. Before we talk about putting them into a document, let’s talk about the most important thing to understand about AI policy.
The Speed Problem
The rate of AI improvement and adoption is unlike anything we’ve seen in technology. Models get significantly better every few months. New tools appear weekly. Capabilities that didn’t exist six months ago become standard features.
And this pace is accelerating, not plateauing. AI systems are now helping to create the next generation of AI — assisting in research, writing code, and improving the very models that will replace them. Each iteration helps build the next one faster. The improvement curve is compounding.
This means any AI policy has to deal first and foremost with speed. If your policy takes six months to develop, AI will have changed substantially in that time. The greatest irony would be spending half a year crafting a careful AI policy, only to adopt something that no longer matches the technology it’s meant to govern.
This isn’t an argument against having guidelines. It’s an argument for guidelines that are designed to move as fast as the technology.
Apply What You Already Have
Here’s an insight that simplifies things considerably: your organization probably already has policies that cover most of what matters.
- Information security policies — these apply to AI tools the same way they apply to any software
- Data handling and privacy policies — the same principles govern what goes into AI tools
- Ethics and conduct policies — the same standards apply to AI-assisted work
- Procurement or vendor evaluation processes — these apply to evaluating AI tools
The question isn’t “how do we create a whole new AI policy?” It’s “how do we apply our existing policies to this new category of tools?” That reframe saves enormous time and avoids reinventing protections you already have.
What a Good AI Practice Looks Like
Rather than a static document, the most effective approach is a living practice built around:
Continuous experimentation. Encourage your team — especially anyone passionate about technology — to keep trying new tools, testing new workflows, and discovering what’s possible. This is the first rule: stay curious, stay current.
Transparency and communication. People should feel comfortable talking openly about what AI tools they’re using, what’s working, and what isn’t. No invisible use. No secret tools. Transparency is the foundation everything else builds on.
Regular learning checkpoints. Whether it’s the weekly 15-minute slot from Component 4, monthly deeper dives, or quarterly reviews — build a rhythm of shared learning. AI moves fast enough that even a month of silence means missed developments.
Leadership education. Leaders don’t need to be AI experts, but they need to understand the state of the art well enough to make good decisions. Subscribe to a few good sources. Attend a conference or webinar. Stay oriented.
The four components as fundamentals. Know who built your tools. Be thoughtful with sensitive data. Match review depth to stakes. Share learnings. These principles hold regardless of how the technology evolves.
If You Need a Formal Document
Some organizations — especially those with boards, compliance requirements, or institutional governance — need a written policy. If that’s you:
Look at what others in your sector have done. Don’t start from scratch. Good resources exist:
- Nonprofits: NTEN’s AI for Nonprofits Resource Hub (built with the National Council of Nonprofits and Maryland Nonprofits) has policy templates, governance frameworks, and training resources. Candid’s responsible AI use guide walks through policy creation step by step. The NC Center for Nonprofits maintains sample policies from real organizations.
- Higher education: EDUCAUSE has published comprehensive AI policy action plans and ethical guidelines. The AAUP has published guidance on AI and academic professions. Many universities have published their institutional policies publicly — search for your peer institutions.
- Cross-sector: GlobalGiving’s responsible AI guide and TechSoup’s resources offer frameworks that apply broadly.
Adapt, don’t adopt. These templates are starting points. Your organization’s context, values, and risk profile should shape the final result.
Keep it short and revisable. Whatever you write, it should be easy to update. Build in a review date — and make it soon. Six months, not a year.
A Starting Template
If you want a simple framework to adapt, here’s one built from the four components:
[Organization Name] — AI Guidelines
Effective [Date]. Next review: [Date + 6 months]
Our approach. We encourage responsible experimentation with AI tools. AI can make our work more efficient and effective. These guidelines help us use it thoughtfully.
1. Tool Evaluation. Explore and experiment with AI tools freely. Before using a new tool for client, funder, or sensitive work, check with [name/role], who can do a quick evaluation of the tool’s team, data handling, and track record.
2. Data Awareness. Apply the same data standards to AI tools that we apply to all our software. Be thoughtful with personally identifiable information and sensitive data. If you’re unsure, ask [name/role].
3. Review Controls. Match review depth to stakes. Routine work needs a light check. Proposals and funder correspondence need structured review. Financial, legal, and leadership communications need close human sign-off.
4. Shared Learning. We learn about AI together. [Weekly/Biweekly], we share what’s working, what’s not, and what we’ve discovered. Everyone is encouraged to experiment and report back.
5. Stay Current. AI is changing fast. We commit to continuous learning — trying new tools, following developments, and updating these guidelines as the technology evolves.
Questions? Talk to [name/role].
Common Questions
“Do we really need a formal policy?” Maybe not. If your team is small and communicates well, the four components practiced informally may be enough. A formal document is most useful when you need to communicate expectations to a larger group or satisfy governance requirements.
“What if I’ve already used my personal ChatGPT account for work?” That’s fine. Going forward, just run new tools past [name/role] before using them for client or funder work. No one’s in trouble for what they’ve already done.
“Does this apply to AI features built into tools we already use?” Good question. Many tools (Google Docs, Microsoft 365, email clients) are adding AI features. The data awareness and review principles apply regardless of the tool.
Philip’s Take: The best AI policy isn’t a document — it’s a culture. Transparency, experimentation, shared learning, regular checkpoints, and leadership that stays educated. That’s the real policy. If you need to write something down for your board, keep it short and build in a fast review cycle. But don’t spend six months on a policy for a technology that changes every six weeks. The time you spend writing is time you’re not spending learning.
- AI moves faster than traditional policy cycles, so design for speed, not permanence
- Apply your existing information security, data handling, and ethics policies to AI rather than building from scratch
- The real policy is a culture: transparency, experimentation, shared learning, and leadership education
- If you need a formal document, look at what others in your sector have published and adapt it
You have your fundamentals. Now you need a plan for getting your team started. Module 4 introduces the Science Fair Model — a low-stakes way to begin.