The Risk Spectrum — Four Dimensions That Matter
A practical framework for evaluating AI risk: context ratios, output volume, external actions, and autonomous behavior.
- Dimension 1: Context-to-Output Ratio
- Dimension 2: Output Volume Relative to Review Time
- Dimension 3: Can the AI Delete, Overwrite, or Send?
- Dimension 4: Autonomous Action Within Your Systems
- Putting It Together
- What Each Dimension Teaches Your Team
After the Science Fair, your team has some AI experience. Now the question is: how do you evaluate risk as you expand? Rather than categorizing tasks as “safe” or “dangerous,” it’s more useful to think about the dimensions that actually determine risk.
Four things matter most.
Dimension 1: Context-to-Output Ratio
We covered this in the hallucination lesson, and it’s worth reinforcing here because it’s the most intuitive risk indicator.
Low risk: You give AI a large amount of source material and ask for something focused. “Summarize this 30-page RFP.” “Draft a budget narrative based on these actual line items.” “Extract the requirements from this document.” The AI has real information to work from, and it stays close to it.
Higher risk: You give AI very little context and ask it to generate a lot. “Write a needs statement about food insecurity in the Southeast” with no outcomes data, no community data, no organizational context. The AI fills the gaps with predictions that may or may not be accurate.
The principle: The more source material you provide relative to what you’re asking for, the safer the output. This is something your team can evaluate for every AI task.
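If it helps to make this concrete for your team, here is a minimal sketch of the ratio as a rough heuristic, using word counts as a stand-in for “how much material the AI has to work from.” The threshold values are illustrative assumptions, not validated cutoffs:

```python
# A rough heuristic for the context-to-output ratio. Word counts are a
# stand-in for "how much material the AI has to work from," and the
# thresholds below are illustrative assumptions, not validated cutoffs.

def context_to_output_risk(source_text: str, requested_output_words: int) -> str:
    """More source material per requested word of output = lower risk."""
    source_words = len(source_text.split())
    if requested_output_words <= 0:
        return "low"  # nothing asked for, nothing to hallucinate
    ratio = source_words / requested_output_words
    if ratio >= 5:
        return "low"      # plenty of grounding material
    if ratio >= 1:
        return "medium"   # some grounding; review carefully
    return "high"         # the AI will fill gaps with predictions

# Example: a ~9,000-word RFP condensed into a 500-word summary
print(context_to_output_risk("word " * 9000, 500))  # -> "low"
```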
Dimension 2: Output Volume Relative to Review Time
Low risk: AI generates a short, focused output that a human can review carefully. A paragraph. A checklist. A summary. The review takes minutes, and nothing slips through.
Higher risk: AI generates a large volume of content — a full proposal draft, a batch of funder briefs, a set of email templates — and the human reviewing it has limited time. More ground to cover means more chances for something to slip through. A fabricated statistic on page 12 of a 15-page draft is easy to miss when you’re reviewing quickly.
The principle: Risk increases when the volume of AI output outpaces your capacity to review it thoroughly. This doesn’t mean you can’t generate large outputs — it means you should plan your review time realistically and consider using AI-assisted review (from Component 3) to help cover the ground.
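One way to plan review time realistically is to estimate it from word count. A minimal sketch, assuming careful review runs at about 175 words per minute; that rate is an assumption you should adjust to how your team actually reviews:

```python
# A sketch of review-time planning. The reading rate is an assumption
# (careful review at about 175 words per minute); adjust it to your
# team's actual pace.

def review_minutes_needed(output_words: int, words_per_minute: int = 175) -> float:
    """Estimate how long a thorough human review of AI output will take."""
    return output_words / words_per_minute

def volume_outpaces_review(output_words: int, minutes_available: float) -> bool:
    """Flag tasks where the output volume exceeds realistic review capacity."""
    return review_minutes_needed(output_words) > minutes_available

# Example: a 15-page draft (~4,500 words) with only 10 minutes to review
print(volume_outpaces_review(4500, 10))  # -> True: plan more time or split the review
```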
Dimension 3: Can the AI Delete, Overwrite, or Send?
Low risk: AI generates text in a chat window or a document. A human reads it, decides what to do with it, and takes the action. The AI’s output sits there until someone acts on it. Nothing happens without a human step in between.
Higher risk: AI can overwrite existing content — editing a document in place, replacing a previous version. If there’s no version history or undo capability, a bad AI edit can destroy good human work. This is manageable with tools that maintain version history, but dangerous with tools that don’t.
Highest in this dimension: AI can send content externally — emails, messages, submissions. Once something is sent, it’s sent. There’s no review step after the fact. Any AI workflow that includes a “send” action needs a human checkpoint before that action fires.
The principle: Risk increases when AI moves from generating content (which you can review) to taking actions (which may be irreversible). Always check: can this tool send, delete, or overwrite? If yes, where’s the human checkpoint?
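If someone on your team builds or configures a workflow that can send, the checkpoint can be as simple as a required approval step. A minimal sketch of the pattern; send_email here is a hypothetical stand-in for whatever your tool actually calls:

```python
# A sketch of a human checkpoint in front of an irreversible "send" action.
# send_email is a hypothetical stand-in for whatever your tool actually
# calls; the pattern is the point: nothing fires without explicit approval.

def send_email(to: str, body: str) -> None:
    """Hypothetical send action; irreversible once it runs."""
    print(f"Sent to {to}.")

def send_with_checkpoint(to: str, draft: str) -> None:
    """Show the draft to a human and require explicit approval before sending."""
    print(f"--- Draft for {to} ---\n{draft}\n----------------------")
    decision = input("Type 'approve' to send, anything else to hold: ")
    if decision.strip().lower() == "approve":
        send_email(to, draft)
    else:
        print("Held for revision. Nothing was sent.")
```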
Dimension 4: Autonomous Action Within Your Systems
This is the frontier, and it’s where the highest risks live.
Low risk: AI is a tool you interact with directly. You ask, it responds. You decide what to do with the response.
Higher risk: AI operates semi-autonomously — scheduled tasks that run on their own, automated workflows that trigger based on events, tools that monitor and act without being prompted each time.
Highest risk: AI agents that act within your systems — managing your email inbox, processing incoming funder communications, making decisions about how to route or respond to information. These are becoming more common and more capable. They can be genuinely useful, but they represent a fundamentally different risk profile than a chatbot you type questions into.
The principle: The more autonomy AI has to act within your systems, the more important it is to understand exactly what it can do, set clear boundaries, and build in monitoring. An AI agent that can read and respond to emails on your behalf needs much tighter oversight than an AI chatbot that drafts text for you to review.
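For teams setting up agents, one concrete form of “clear boundaries and monitoring” is an action allowlist plus an audit log. A minimal sketch, with hypothetical action names; the point is that anything outside the explicit boundary is refused and every decision is recorded:

```python
# A sketch of bounding an agent with an action allowlist and an audit log.
# The action names are hypothetical; anything outside the explicit
# boundary is refused and every decision is logged.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent-monitor")

# The agent may read and draft, but never send, delete, or forward.
ALLOWED_ACTIONS = {"read_email", "draft_reply", "flag_for_human"}

def execute_agent_action(action: str, detail: str) -> bool:
    """Run an agent action only if it falls inside the allowed boundary."""
    if action in ALLOWED_ACTIONS:
        log.info("ALLOWED %s: %s", action, detail)
        return True
    log.warning("REFUSED %s: %s (outside boundary; routed to a human)", action, detail)
    return False

# Example: drafting is allowed, sending is always refused and escalated
execute_agent_action("draft_reply", "reply to funder inquiry")
execute_agent_action("send_email", "reply to funder inquiry")
```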
Putting It Together
These four dimensions work together. You can evaluate any AI use case by asking:
- How much context am I providing relative to what I’m asking for? (More context = lower risk)
- How much output is being generated relative to my review capacity? (More volume with less review = higher risk)
- Can this tool delete, overwrite, or send anything? (If yes, where’s the human checkpoint?)
- Does this tool act autonomously in my systems? (If yes, what exactly can it do, and how do I monitor it?)
A task that scores “low” on all four dimensions — lots of context, focused output, read-only AI, human-initiated — is very safe. A task that scores “high” on multiple dimensions deserves careful thought about guardrails.
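If you want to make that evaluation explicit, here is a minimal sketch that counts how many dimensions flag “high” for a given use case. The fields and the idea of counting flags are illustrative assumptions, not a formal scoring standard:

```python
# A sketch that makes the four-dimension evaluation explicit. The fields
# and the idea of counting "high" flags are illustrative assumptions,
# not a formal scoring standard.

from dataclasses import dataclass

@dataclass
class UseCase:
    low_context_ratio: bool        # little source material relative to output?
    volume_outpaces_review: bool   # more output than you can review thoroughly?
    can_delete_overwrite_send: bool
    acts_autonomously: bool

def risk_flags(case: UseCase) -> int:
    """Count how many of the four dimensions score 'high' for this use case."""
    return sum([
        case.low_context_ratio,
        case.volume_outpaces_review,
        case.can_delete_overwrite_send,
        case.acts_autonomously,
    ])

# Example: summarizing an RFP in a chat window scores low on all four
safe_task = UseCase(False, False, False, False)
print(risk_flags(safe_task))  # -> 0: very safe; review proportionally
```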
What Each Dimension Teaches Your Team
| Dimension | Skill Developed |
|---|---|
| Context-to-output ratio | How to set up AI tasks for reliable results |
| Output volume vs. review | How to plan review time and use AI-assisted verification |
| Delete/overwrite/send | How to identify irreversible actions and build checkpoints |
| Autonomous action | How to evaluate and monitor AI agents and automated workflows |
- Risk depends less on the type of task than on four dimensions: context ratio, output volume, external actions, and autonomy
- The more context you provide and the more focused the output, the lower the risk
- Risk increases when AI can take irreversible actions (delete, overwrite, send) or act autonomously
- Evaluate any AI use case across all four dimensions to calibrate your review and guardrails
You have the risk framework. Let’s look at the practical first steps for getting started — the low-risk tasks that build skills and confidence.