Precision, Not Prohibition
A thoughtful look at the principled reasons people hesitate about AI — and why deeper understanding beats blanket decisions.
- The Principled Concerns Are Real
- For Those Who Are Unsure
- Going Deeper, Not Broader
- The Reality of Adoption
- What "Careful" Looks Like
If you’re reading this, you probably have some reservations about AI. That’s a good instinct. The people who do grant-funded work and approach something this powerful with caution are exactly the ones who should be guiding how it’s used.
The Principled Concerns Are Real
Many organizations that apply for grants have chosen not to use AI, and a lot of those decisions come from a genuinely principled place:
The environmental cost
AI training and data center operations are energy-intensive. If your organization works on climate or environmental justice, that tension is real.
Intellectual property
The models that power AI were trained on the work of writers, artists, researchers, and professionals — many of whom were never compensated or consulted.
Labor practices
Behind the polished AI interfaces, human workers — often in developing countries, often at low wages — sort through disturbing content to make these systems safer.
A reflex toward responsibility
Grant-funded organizations, whether nonprofits, universities, government agencies, or businesses, carry out some of the most important work on the planet. Being careful with new tools is not a weakness.
If someone takes a principled stand against using AI, that deserves respect. There are real ethical questions here, and thoughtful people can land in different places.
For Those Who Are Unsure
This track is for people who haven’t fully decided — or who sense that the right answer isn’t a blanket yes or a blanket no, but something more precise.
Most of us already use AI in our daily lives. It’s in our search engines, our social media feeds, our email spam filters, our photo apps, our online shopping recommendations. Many of us have tried ChatGPT at home. AI is increasingly woven into how the world works.
If you use AI everywhere else in life but restrict it from mission-driven work, you’re taking AI out of the equation in exactly the place where you could do the most good with it.
Going Deeper, Not Broader
The answer to ethical concerns about AI isn’t to ignore them. It’s to understand them well enough to make precise choices.
Take the energy question. Many people have heard that AI uses a lot of electricity, but “a lot” without context isn’t useful. How does an AI query compare to driving a gas car, flying for a conference, or the environmental footprint of other daily choices? The numbers may change your calculus — or they may confirm your concerns. Either way, you’re deciding based on understanding rather than impression.
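To make "a lot" concrete, a rough back-of-envelope comparison helps. The figures below are illustrative assumptions, not measurements: published estimates of energy per AI chatbot query vary widely (roughly 0.3 to 3 watt-hours), and driving energy depends on the vehicle.

```python
# Back-of-envelope energy comparison (illustrative assumptions, not measurements).
# Per-query estimates vary widely in published analyses; 3 Wh is at the high end.
WH_PER_AI_QUERY = 3.0        # assumed energy per chatbot query, in watt-hours
KWH_PER_GALLON_GAS = 33.7    # energy content of a gallon of gasoline (EPA figure)
MILES_PER_GALLON = 30        # assumed fuel economy of the car

# Energy to drive one mile, converted to watt-hours.
wh_per_mile = KWH_PER_GALLON_GAS / MILES_PER_GALLON * 1000  # roughly 1,120 Wh

# How many AI queries consume the same energy as one mile of driving?
queries_per_mile = wh_per_mile / WH_PER_AI_QUERY

print(f"One mile of driving uses about as much energy as "
      f"{queries_per_mile:.0f} AI queries (under these assumptions)")
```

Under these assumptions, one mile of driving is on the order of a few hundred queries. Swap in your own estimates; the point is that an order-of-magnitude comparison turns "a lot" into something you can actually weigh against other choices.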
A blanket ban is a blunt instrument. Deeper understanding lets you be surgical — choosing where AI is appropriate for your organization and where it isn’t, based on your actual values and the actual tradeoffs.
The Reality of Adoption
Society and the workplace are moving toward this technology. The adoption data shows it clearly, and if you have honest conversations with colleagues, most have tried AI in some form. Many find it useful.
Organizations that adopt responsible AI practices are becoming more effective in their grant seeking: not just working faster, but producing proposals with deeper research and stronger prose. That competitive landscape is shifting. This isn't a scare tactic. It's just the reality, and it's worth knowing when you're making your choice.
What “Careful” Looks Like
There's a version of AI adoption that honors that caution:
Choose safe tools
Not every AI tool handles your data the same way. Some train on your inputs. Some don't. That distinction matters.
Set clear boundaries
Decide what goes into AI tools and what doesn't, where AI output needs human review, and who's responsible for what.
Start small
You don't have to transform overnight. Start with low-risk tasks and build understanding from there.
Learn together
Make AI a team conversation rather than a solo experiment. Share what works, what doesn't, and what you're figuring out.
This track will walk you through all four — building a policy, planning a rollout, and leading your team through the transition in a way that respects both the opportunity and the real concerns.
Philip’s Take: “Precision, not prohibition.” If you have principles about AI — and you should — do the work to understand what those principles would actually guide you to do. A blanket ban and blanket adoption are both blunt instruments. The thoughtful path is making precise choices about where AI is appropriate and where it isn’t.
A colleague says their organization banned AI because of concerns about energy consumption. What's the most constructive response?
- The ethical concerns about AI (energy, IP, labor, responsibility) are real and deserve respect
- If you use AI everywhere else in life but ban it from mission-driven work, you may be removing it from where it does the most good
- Deeper understanding enables precise choices — a blanket ban is a blunt instrument
- There's a version of AI adoption that starts small, stays careful, and respects the caution