Privacy, Policy & Ethics Article 4 of 5

How to Evaluate an AI Vendor Without a Computer Science Degree

The sales call you're not prepared for

You're on a demo call with an AI vendor. They're showing you slick features — automatic proposal drafts, funder matching, budget generation. It looks great. Then you ask about security, because you know you're supposed to ask about security.

The vendor says: "We're SOC 2 Type 2 compliant, our infrastructure is deployed on AWS with AES-256 encryption at rest and TLS 1.3 in transit, we support SAML SSO, and our data retention policy is configurable with automated purge workflows."

You nod. You write down "SOC 2" on your notepad. You have absolutely no idea if what they just said is good, bad, or meaningless. You're not even sure which of those acronyms are security certifications and which are just... letters.

You're not alone. Most nonprofit professionals — including most executive directors, most board members, and most grant writers — don't have a background in information security. Why would you? You got into this work to serve communities, not to evaluate encryption protocols.

But here's the problem: you're making decisions about tools that will handle your organization's most sensitive data. And the vendors know you don't speak their language. Some of them use that to their advantage.

This article gives you the five questions that actually matter, what the answers mean in plain language, and how to tell when a vendor is being transparent versus when they're hiding behind jargon.

Why this matters more than features

Most nonprofits evaluating AI tools spend ninety percent of their time looking at features and ten percent asking about security. It should be the other way around. Features determine whether the tool is useful. Security determines whether the tool is safe. A useful tool that leaks your data is worse than no tool at all — because it creates a false sense of productivity while building up compliance risk you won't discover until it's too late.

Grant proposals contain information that, in any other context, would be handled with extreme care. Program participant demographics. Financial statements. Strategic plans. Donor relationships. Personnel data. Community health assessments. Descriptions of vulnerable populations. This is the kind of information that, if it showed up in the wrong hands, could harm real people.

When you choose an AI vendor, you're choosing who gets to hold that information. The features are secondary. The trust is primary.

The five questions

I've distilled the entire vendor evaluation process into five questions. You don't need to understand the technical details behind each one. You just need to ask the question and know what a good answer sounds like versus a bad one.

The 5-Question Vendor Evaluation

  1. Do you hold SOC 2 Type 2 certification?
  2. Is my data ever used for model training?
  3. What is your data retention policy and can I delete my data?
  4. Can you provide a BAA for HIPAA compliance?
  5. Where is my data physically stored and processed?

Let's go through each one.

Question 1: Do you hold SOC 2 Type 2 certification?

What this means in plain language: Has an independent auditor verified that your security practices actually work — not just on paper, but consistently over time?

SOC 2 is a security framework developed by the American Institute of CPAs. It evaluates how a company manages data across five categories: security, availability, processing integrity, confidentiality, and privacy. Think of it as a health inspection for data handling.

There are two types. Type 1 means an auditor looked at the company's security controls on a single day and said "these exist." Type 2 means an auditor monitored those controls over a period of months — typically six to twelve — and verified they work consistently. Type 1 is a snapshot. Type 2 is a track record.

What a good answer sounds like: "Yes. We hold SOC 2 Type 2 certification. Our most recent audit covered [specific time period]. We can share our SOC 2 report under NDA."

What a bad answer sounds like: "We're working toward SOC 2." "We follow SOC 2 principles." "We're SOC 2 Type 1 compliant." "Our cloud provider is SOC 2 certified." That last one is particularly sneaky — the fact that AWS or Google Cloud is SOC 2 certified doesn't mean the vendor built on top of them is. Your landlord's fire safety certification doesn't mean your apartment has smoke detectors.

The federal equivalent: If your organization does federal grant work, ask about FedRAMP — the Federal Risk and Authorization Management Program. FedRAMP is the government's security standard for cloud services. It's more rigorous than SOC 2 and specifically relevant if you handle federal data. Most nonprofit-focused AI tools won't have FedRAMP authorization, but if you work with federal grants, it's worth asking.

Question 2: Is my data ever used for model training?

What this means in plain language: When I type something into your tool, does it become part of the dataset that teaches future versions of the AI?

This is the question that separates tools built for enterprise use from tools built for consumer use. Consumer-tier AI tools — especially free ones — typically use your inputs to improve their models. That means your grant narratives, your budget justifications, your community descriptions contribute to the model's learning. Your data doesn't get spit back verbatim, but the patterns in your data become part of the model's knowledge.

What a good answer sounds like: "No. We never use customer data for model training. This is stated in our terms of service, section [X]. Our data processing agreement explicitly excludes training use."

What a bad answer sounds like: "Not by default." "You can opt out." "We anonymize data before using it for improvements." "We use aggregate, de-identified data for product improvement." Every one of these answers means yes, they use your data, but they've added a layer of language to make it sound like they don't. "Opt out" means your data is used unless you take action to stop it. "Anonymized" and "de-identified" still mean your content entered the pipeline.

If the answer to "do you train on my data" requires more than one sentence, the answer is yes. Vendors who don't train on your data say so clearly because it's a competitive advantage. Vendors who do train on your data explain it in paragraphs because they need the paragraphs to make it sound acceptable.

Question 3: What is your data retention policy?

What this means in plain language: How long do you keep what I've typed, and can I permanently delete it?

Every AI tool stores your inputs for some period of time. The question is how long, for what purpose, and whether you can make it stop.

What a good answer sounds like: "We retain your data for [specific period] for [specific purpose]. You can delete your data at any time through [specific mechanism]. Deletion is permanent and verifiable. We can provide a data deletion certificate upon request."

What a bad answer sounds like: "We follow industry-standard practices." "Data is retained as needed." "You can delete your conversations from the interface." That last one is important — deleting a conversation from your chat history doesn't necessarily mean it's been purged from the company's servers. The UI and the backend are different things. Ask specifically: when I hit delete, is the data removed from all systems, including backups?

Some vendors retain data for 30 days for abuse monitoring even on enterprise plans. That's generally acceptable and standard. Indefinite retention with no clear deletion mechanism is not.

Question 4: Can you provide a BAA?

What this means in plain language: If my organization handles health data, can we have a legal agreement that requires you to protect it under HIPAA?

A Business Associate Agreement is a legal contract required under HIPAA whenever a covered entity (like a healthcare nonprofit) shares protected health information with a third party. If your organization serves health-related populations — community health, mental health, substance abuse recovery, aging services, disability services — you almost certainly handle data that qualifies.

What a good answer sounds like: "Yes. We offer a BAA as part of our [plan name]. Here's a copy for your legal team to review."

What a bad answer sounds like: "We take data security very seriously." "Our platform is secure enough for health data." "We haven't had any issues." None of these are a BAA. Without the actual legal agreement, there is no HIPAA-compliant basis for sharing protected health information with that vendor. It doesn't matter how secure they claim to be. The law requires the contract.

Even if your organization isn't a covered entity under HIPAA, having a vendor that offers a BAA signals they've built their infrastructure to health-data standards. That's a useful proxy for overall security seriousness.

Question 5: Where is my data physically stored?

What this means in plain language: Which country and which servers hold my data? Is it encrypted?

Data residency matters for several reasons. Some federal funders require that data remain within the United States. Some international programs have jurisdiction requirements. And as a general principle, knowing where your data lives is a basic component of responsible data stewardship.

What a good answer sounds like: "Your data is stored in [specific region] on [specific cloud provider]. Data is encrypted at rest using AES-256 and in transit using TLS 1.2 or higher. We do not transfer data outside [jurisdiction] without customer consent."

What a bad answer sounds like: "Our data is in the cloud." "We use industry-standard encryption." "Our infrastructure is global." "Global" might mean your data passes through servers in countries with different privacy laws. "Industry-standard" is meaningless without specifics. You want a country, a cloud provider, and an encryption standard. If they can't tell you, they either don't know or don't want you to know.
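One part of a good answer is easy to spot-check yourself: the TLS claim. If someone on your team is comfortable running a few lines of Python, this sketch connects to a vendor's domain and reports which TLS version was actually negotiated. The hostname below is a placeholder, not a real vendor.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Connect to a server and return the TLS version it negotiates."""
    ctx = ssl.create_default_context()
    # Refuse anything older than TLS 1.2, matching the claim we want to verify.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Example with a hypothetical vendor domain (uncomment to run):
# print(negotiated_tls_version("app.example-vendor.com"))
```

This only checks encryption in transit; encryption at rest and data residency still require the vendor's written answer and documentation.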

How Grantable answers these questions

SOC 2 Type 2 certified with the most recent audit report available on request. Zero-training data policy — customer inputs are never used for model improvement, period. Defined data retention with permanent, verifiable deletion. BAA available for organizations handling health data. US-based data storage with AES-256 encryption at rest and TLS in transit. We answer these questions on the first call because we built the platform to make the answers easy.

Red flags to watch for

Beyond the five questions, here are patterns that should make you cautious during any vendor evaluation:

They redirect security questions to their cloud provider. "We're hosted on AWS, which is SOC 2 certified" is not the same as the vendor being SOC 2 certified. AWS being secure doesn't make every application running on AWS secure. The vendor's application layer, access controls, and data handling practices are separate from the infrastructure provider's certifications.

They use "enterprise-grade" without specifics. "Enterprise-grade security" is a marketing term, not a certification. It means whatever the vendor wants it to mean. Push for specifics: which framework? Which certification? Audited by whom? Over what period?

They can't produce documentation. A vendor with real security practices can produce a SOC 2 report, a data processing agreement, a BAA template, and a clear privacy policy. If getting these documents requires multiple follow-ups, escalations, or promises that they're "being updated," that tells you the documentation doesn't exist in a mature form.

They conflate security with privacy. Security means your data is protected from unauthorized access. Privacy means your data isn't used in ways you didn't consent to. A tool can be perfectly secure — no breaches, no leaks — while still using your data for training. You need both. Don't let a vendor answer a privacy question with a security answer.

The evaluation in practice

Your Vendor Evaluation Checklist

  1. Ask all five questions. In writing, via email, before you sign anything.
  2. Request the SOC 2 Type 2 report. Most vendors share it under NDA. If they won't share it, ask why.
  3. Read the data processing agreement. Search for "training," "improvement," "aggregate," and "de-identified." These are the words that signal your data may be used beyond your workspace.
  4. Check the privacy policy publication date. If it hasn't been updated in over a year, the vendor may not be keeping pace with changes in their own data practices.
  5. Ask for customer references in the nonprofit sector. A vendor that serves enterprise SaaS companies may not understand the specific compliance needs of organizations handling human services data.
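Step 3 can be partly automated. This is a rough sketch, not a substitute for reading the agreement: it finds the sentences containing the flag words so you know exactly which passages to read closely. The sample agreement text is invented.

```python
import re

# Flag words from step 3 that signal data may be used beyond your workspace.
FLAG_TERMS = ["training", "improvement", "aggregate", "de-identified"]

def scan_agreement(text: str) -> dict:
    """Return each flagged term mapped to the sentences that contain it."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = {}
    for term in FLAG_TERMS:
        matched = [s for s in sentences if term.lower() in s.lower()]
        if matched:
            hits[term] = matched
    return hits

# Invented sample text. Note the script only locates the words;
# a human still has to read each sentence in context.
sample = ("We may use aggregate, de-identified data for product improvement. "
          "Customer data is never used for model training.")
for term, found in scan_agreement(sample).items():
    print(f"{term}: {found}")
```

A hit isn't automatically bad (the second sample sentence is actually reassuring), but every hit deserves a careful read.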

You don't need a CS degree. You need five questions.

The vendors who make security evaluation feel complicated are the ones who benefit from your confusion. The vendors who make it simple are the ones who've done the work and want you to see it.

You don't need to understand AES-256 encryption to ask whether a vendor holds SOC 2 Type 2 certification. You don't need to read a technical whitepaper to ask whether your data is used for training. You don't need a background in information security to ask for a BAA.

You need five questions, the ability to distinguish a clear answer from a vague one, and the willingness to walk away from a vendor who can't answer them.

Your Monday morning move

Step one: Print the five-question checklist from this article. Put it wherever you keep vendor evaluation materials.

Step two: Send the five questions to every AI vendor your organization currently uses. Don't ask in a meeting — put it in writing. The quality of the written response tells you more than a verbal answer on a sales call.

Step three: Compare the responses side by side. You'll immediately see which vendors answer clearly and which ones hedge. The hedging is the data point.

Step four: Share the framework with your team. The next time someone on your staff says "I found this cool AI tool," the first question shouldn't be "what does it do?" It should be "does it pass the five questions?"
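If you want more than an eyeball comparison for step three, hedging can even be counted. A minimal sketch, using hedge phrases drawn from the "bad answer" examples earlier in this article; the vendor names and responses are invented.

```python
# Hedge phrases taken from the "bad answer" examples in this article.
HEDGES = ["working toward", "industry-standard", "enterprise-grade",
          "by default", "as needed", "very seriously"]

def hedge_count(response: str) -> int:
    """Count hedge phrases in a vendor's written response."""
    text = response.lower()
    return sum(text.count(h) for h in HEDGES)

# Invented written responses to the five questions.
answers = {
    "Vendor A": "Yes. SOC 2 Type 2 certified; report available under NDA.",
    "Vendor B": "We're working toward SOC 2 and use industry-standard encryption.",
}
# Sort clearest (fewest hedges) first.
for vendor, reply in sorted(answers.items(), key=lambda kv: hedge_count(kv[1])):
    print(f"{vendor}: {hedge_count(reply)} hedge phrase(s)")
```

A count like this is a screening aid, not a verdict: it surfaces which responses to scrutinize, and the final judgment is still yours.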

You got into nonprofit work to serve people, not to become an information security expert. You don't have to become one. You just have to ask the right questions and recognize when the answers aren't good enough. That's not a technical skill. That's judgment. And you already have plenty of that.