Module 2 · Understanding AI Risks

The Difference Between Safe Providers and Unsafe Ones

Lesson 9 of 22 · 3 min read

Not all AI tools handle your data the same way. How to evaluate providers on privacy, security, and data handling.

What you'll cover
  • What Makes a Provider "Safe"
  • What Makes a Provider "Unsafe"
  • A Practical Evaluation Checklist
  • The Paid vs. Free Distinction

Safe Providers vs. Unsafe Providers

“AI” is not one thing. There are hundreds of AI tools, and they vary enormously in how they handle your data. Choosing the right provider is the first decision in any AI policy.

What Makes a Provider “Safe”

A safe AI provider commits to clear, verifiable data handling practices:

No training on your inputs. The provider does not use the content you submit to improve their models. Your proposals, org data, and conversations stay yours.

Data retention limits. Your inputs are not stored indefinitely. Ideally, they’re processed and discarded, or retained only for the duration of your session.

Encryption in transit and at rest. Your data is encrypted when it’s sent to the provider and while it’s stored on their servers.

Clear terms of service. The provider’s data handling policies are written in plain language, not buried in legalese. You can find and understand what happens to your data.

SOC 2 or equivalent compliance. The provider has undergone independent security audits that verify their practices match their promises.

What's SOC 2 compliance?

SOC 2 (System and Organization Controls 2) is an auditing standard that verifies a company’s information security practices. A SOC 2 report means an independent auditor has examined the provider’s controls for security, availability, processing integrity, confidentiality, and privacy. It’s not a guarantee of security, but it means the provider has been examined by a third party rather than just self-reporting.

What Makes a Provider “Unsafe”

Not every AI tool meets these standards. Warning signs:

“Free” with no clear business model. If you’re not paying, your data may be the product. Free-tier tools often have weaker privacy protections than paid versions of the same product.

Vague or missing data policies. If you can’t find a clear answer to “do you train on my data?” in the terms of service, assume the answer is yes.

No opt-out for training. Some providers train on inputs by default and require you to find and toggle an opt-out setting. Many users never find it.

Third-party data sharing. Some tools share your inputs with partners, advertisers, or analytics providers. Read the fine print.

No compliance certifications. Without SOC 2, HIPAA, or equivalent certifications, you’re relying on the provider’s self-reported security practices.

A Practical Evaluation Checklist

When evaluating an AI tool for your organization, ask:

  1. Does the provider explicitly commit to not training on our inputs?
  2. How long is our data retained?
  3. Is our data encrypted in transit and at rest?
  4. Can we delete our data on request?
  5. Has the provider undergone independent security audits?
  6. Does the provider comply with relevant regulations (GDPR, CCPA, etc.)?
  7. Is the provider willing to sign a data processing agreement?

If you can’t get clear answers about a provider’s data handling, that is your answer. Move on.

The Paid vs. Free Distinction

This isn’t about paying being better in principle. It’s about economics. Providers who charge for their service have a business model that doesn’t depend on monetizing your data. Providers who offer AI for free need to make money somehow.

Many major AI providers offer both free and paid tiers — with significantly different data handling practices between them. The free tier of a tool might train on your inputs while the paid tier does not. Always check the specific tier you’re using.

Key Takeaways
  • Not all AI tools handle data the same way; provider selection is a policy decision
  • Safe providers commit to no training on inputs, clear retention limits, encryption, and independent audits
  • Free-tier tools often have weaker privacy protections than paid versions
  • If you can't get clear answers about data handling, choose a different provider
Next Lesson

You understand the risks. Now it’s time to build something practical: a set of fundamentals your organization can adopt this week. Module 3 introduces the MVP Policy Framework — four components that work as a living practice, not a static document.
