Module 3 · The MVP Policy Framework

Component 1 — Know Who Built It

Lesson 10 of 22 · 6 min read · MVP Policy

The first rule of any AI policy: have a way to evaluate whether the tools your team uses are well-made and responsibly maintained.

What you'll cover
  • The Principle
  • Why Not an Approved List?
  • What to Build Instead
  • How to Evaluate an AI Tool
  • Making It Work in Practice
  • When a Tool Needs to Go

The MVP Policy Framework has four components. They’re designed to be simple enough to implement this week, without a committee, without a consultant, and without months of deliberation. This is the first one.

The Principle

Your organization needs the ability to evaluate whether an AI tool is well-made and responsibly maintained. Not a list of approved tools. A process — or better yet, a person — who can make that call quickly.

Why Not an Approved List?

The instinct is to create a list: these three tools are approved, everything else is off-limits. It’s tidy. It feels safe. In practice, it creates problems:

  • It discourages experimentation. New AI tools are emerging constantly, and many of them are genuinely useful. If someone on your team discovers a tool that could save hours of work, you don’t want their first thought to be “it’s not on the list, so I can’t use it” or “I don’t want to deal with getting it added.”
  • It becomes gatekeeping. Lists need maintenance. Someone has to add tools, remove tools, review changes. In a busy organization, the list gets stale, and people either work around it or stop trying new things.
  • It drives invisible use. When the process for trying a new tool is cumbersome, people skip the process. They use the tool quietly on a personal account. That’s the opposite of what you want.

You want your team to be experimental. You want them innovating at their desks, trying tools that could make their work better. You just need the ability to quickly verify that what they’re using is trustworthy.

What to Build Instead

Designate someone who evaluates tools. This could be a staff member who’s comfortable with technology, an IT contact, or even a tech-savvy board member. The role is simple: when someone on the team wants to try a new AI tool, this person can do a quick evaluation.

For smaller organizations, this doesn’t need to be a formal role. It can be the same person who evaluates any new software purchase. The point is having someone your team can go to who can turn around an assessment quickly — days, not weeks.

How to Evaluate an AI Tool

A quick evaluation covers a few key questions:

Who’s behind this tool? Is this a company with a track record? Can you find real people — founders, engineers, a support team? How long have they been operating? A polished interface built last month by an unknown team deserves more scrutiny than a tool from an established provider.

What are their data handling terms? Specifically: do they train on your inputs? How long do they retain your data? Can you delete it? As we covered in the data privacy lesson, these questions matter most for free-tier tools where the business model may depend on user data.

Is the tool well-maintained? Is there active development? A responsive support team? Regular updates? These are signs of a team that takes the work seriously.

Does it pass the common sense test? If the tool promises something that sounds too good to be true, or if you can’t find any information about the people who built it, trust that instinct.

You can actually use AI itself to help with this evaluation. Point a chatbot at the tool’s website and documentation and ask it to summarize their data handling practices, their team, and their track record. It’s a good first pass.

Making It Work in Practice

The goal is speed and openness, not control:

  • When someone finds a new tool: They mention it to the evaluator. “Hey, I found this tool that helps with budget formatting. Can you take a quick look?”
  • The evaluator checks the basics: Who built it, what the data terms are, whether it looks legitimate. This should take 15-30 minutes, not a committee meeting.
  • The answer comes back fast: “Looks good, go for it” or “I have some concerns — here’s what I found.”

If your organization is small enough that there’s no dedicated evaluator, you can make it a team norm: before using a new AI tool for real work, spend 10 minutes checking who built it and reading their data handling terms. That’s often enough.

When a Tool Needs to Go

Things change. A company gets acquired. Terms of service shift. A security incident makes the news. A tool that was fine last month may not be fine today.

Have a simple protocol for this: if a tool is flagged as a risk, whether by your evaluator, a team member, or outside reporting, the organization can communicate quickly that it’s time to stop using it. This doesn’t need to be elaborate. An email or Slack message: “We’re discontinuing use of [tool] because [reason]. Here’s what to use instead.” The key is speed. Even reputable companies have incidents, and anticipating that a tool may need to be dropped quickly is just good operational hygiene.

Philip’s Take: I couldn’t tell you all of our SaaS providers off the top of my head, and I don’t think most people can. A list of approved tools becomes a bureaucratic exercise. What I want is for my team to be constantly discovering and trying new tools — that’s how you stay current. You just need somebody who can quickly check whether a tool is legitimate and well-maintained. Fast evaluation, not gatekeeping.

Check your understanding

A team member finds a new AI tool that could save hours on budget formatting. It looks polished but launched two weeks ago from an unknown team. What do you do?

Key Takeaways
  • Don’t maintain an approved tools list; it becomes gatekeeping that discourages experimentation
  • Instead, designate a person or process that can quickly evaluate new tools
  • Key questions: who built it, what are the data terms, is it well-maintained, does it pass the common sense test
  • The goal is speed and openness; people should feel free to discover new tools, with a lightweight check before real use
  • Have a protocol for fast removal too; companies change, incidents happen, and you need to be able to drop a tool quickly
Next Lesson

You have a way to evaluate tools. The second component is about what goes into those tools — specifically, the PII question and how to think about it proportionally.
