Building a Culture of Responsible AI Use
How to create an organizational culture where AI is used confidently, carefully, and openly — not feared or misused.
- What "Culture" Means Here
- How Leaders Shape AI Culture
- The Permission Problem
- Signs of a Healthy AI Culture
- The AI Slop Problem
- Signs of Trouble
A policy gives you rules. A culture gives you judgment. You need both.
What “Culture” Means Here
An AI-positive culture doesn’t mean everyone loves AI. It means:
- People feel safe experimenting. They can try AI for a new task without fear of being judged if it doesn’t work.
- People feel safe reporting problems. When someone catches an AI error or makes a mistake, they share it openly rather than hiding it.
- Questions are welcome. “Is this appropriate for AI?” is always a valid question, never a sign of being behind.
- Skepticism is respected. People who are cautious about AI aren’t dismissed as Luddites. Their caution is a feature, not a bug — it keeps the team honest.
How Leaders Shape AI Culture
Culture comes from what leaders do, not what they say. Three leadership behaviors that matter:
1. Use AI yourself — visibly. When you use AI to help draft a board report or summarize funder research, mention it. “I used AI to pull together the first draft of this, then I reviewed and edited it.” This normalizes AI use and demonstrates the review process simultaneously.
2. Celebrate catches, not just wins. When someone catches an AI hallucination before it goes to a funder, that’s a win worth celebrating. “Good catch” reinforces the review process. It signals that finding problems is as valued as producing output.
3. Never punish honest mistakes. If someone accidentally pastes PII into an AI tool, the response is education, not punishment. “Here’s how to anonymize next time.” Punitive responses drive AI use underground, which is the opposite of what you want.
The Permission Problem
Many grant professionals are already using AI but feel uncertain about it. They wonder if it’s “cheating” or whether it diminishes their expertise. As a leader, this is worth addressing openly:
- Name it. “Some of you may feel uncertain about whether using AI is appropriate. Let me be clear: using AI responsibly is a professional skill we want everyone to develop.”
- Reframe it. AI handles the mechanical parts — first drafts, formatting, research compilation — so you can focus on the parts that need human judgment: strategy, relationships, voice, and decision-making.
- Model it. Share your own AI use, including the rough patches. “I tried using AI to draft our annual report intro and it was terrible. So I wrote it myself. But the budget summary section was great.”
Signs of a Healthy AI Culture
You’re on the right track when:
- Team members voluntarily share AI tips in meetings
- People ask “could AI help with this?” for new tasks
- AI failures are discussed openly, not swept under the rug
- The team has opinions about which AI tasks work well and which don’t
- New staff learn about AI use during onboarding
The AI Slop Problem
There’s a pattern emerging that’s worth naming directly: AI slop.
It works like this: someone takes a colleague’s work, throws it into AI, prompts it briefly, and returns the output minutes later as their “product.” The person on the receiving end can tell. It feels dismissive — like the sender didn’t do real work, they just ran it through a machine. And now the recipient has to sift through a pile of AI-generated content that may or may not be useful.
It gets worse when this becomes a chain. One person generates AI output and hands it to a teammate, who runs it through AI again and passes the result to someone else. At some point it's AIs talking to AIs, and the humans have been removed from meaningful control without anyone intending that.
This happens because AI speed is seductive. You can produce so much so fast that it’s easy to get numb to the volume and stop being mindful about what you’re handing to other people.
The critical moment is the handoff. Every time you're about to share AI-assisted work with a teammate, pause and ask:
- How do I introduce this work? What’s the context they need?
- What should they understand about how I arrived at this? What’s been verified and what hasn’t?
- Is there too much here? Should I distill it before handing it off?
- Would a short conversation or meeting be better than dumping a document?
- Could I use AI to create an overview or summary that makes the handoff smoother?
The goal is that your teammate receives something they can meaningfully engage with — not a pile of raw AI output that shifts the burden of sense-making onto them. Understanding what you’ve done with AI, and being able to help others come along, is one of the most important leadership skills in an AI-native team.
Philip’s Take: I’ve been on the receiving end of AI slop. Someone took work I’d done, ran it through AI, and handed back the output in minutes as if they’d contributed something. It was obvious, and it was frustrating. If you’re using AI to accelerate your work — great. But the moment you share it with a teammate, you have to snap out of the speed and think about the handoff. What do they need to understand? How do you bring them in? That’s the human work that AI can’t replace.
Signs of Trouble
Watch for:
- People using AI secretly on personal accounts
- No one sharing learnings (they’re either not using AI or not admitting it)
- AI output going external without clear review
- AI slop — teammates handing each other raw AI output without context or curation
- Blanket resistance disguised as caution (“we just can’t trust AI for anything”)
- One person as the sole “AI person” while everyone else avoids it
Key Takeaways
- Culture determines whether AI is used well; policy alone isn't enough
- Use AI visibly, celebrate catches, and never punish honest mistakes
- Address the "is this cheating?" anxiety directly: responsible AI use is a professional skill
- Watch for AI slop; the critical moment is the handoff, so pause, provide context, and bring your teammate along
- A healthy AI culture is visible: people share, question, and improve together
Not everyone on your team will embrace AI enthusiastically. The next lesson addresses the hardest leadership challenge: bringing skeptical or fearful staff along.