Tips & Best Practices
Get the most out of Grantable's AI with these practical tips for better results.
Last updated Mar 26, 2026
The one thing that matters most
The single most impactful thing you can do is give Grantable more context. Everything else flows from that.
Here’s the mental model: more context → better output → fewer iterations → faster work. Every tip on this page is really just a specific way to put more context in front of the AI so it can do better work on the first try.
Build your foundation first
Complete your organization profile
Your org profile is the context that follows every conversation. When the AI knows your mission, programs, geographic focus, financials, and theory of change, it stops producing generic grant language and starts producing your grant language.
Use the /profile skill to build or update it. This is a one-time investment that pays off in every interaction going forward. You can always update it as your organization evolves.
Upload source materials early
Past successful proposals, annual reports, program descriptions, evaluation reports — all of it becomes reference material the AI can draw from. The more relevant material in your file tree, the more evidence-based the output becomes.
Think of it this way: you wouldn’t ask a new staff member to write a proposal without giving them any background materials. Same principle here.
Write better prompts
Be specific about what you need
“Draft the needs statement for the youth mentoring program using data from the 2025 annual report” gives the AI a clear target. “Write something about our programs” gives it almost nothing to work with.
You don’t need to write perfect prompts. Just include three things when you can: what you want, which materials to reference, and what format you expect. That’s usually enough.
Use skills for structured work
When you have a clear task — drafting, prospecting, reviewing — invoke the matching skill with a slash command. Skills give the AI a structured workflow tuned for that specific type of work, which means better results than an open-ended prompt. See Skills & Slash Commands for the full list.
You can also just describe what you need in plain language. The AI will suggest relevant skills if one fits.
Choose the right model tier
Use Auto for most tasks — it handles the vast majority of grant work well. Switch to Pro when you’re doing complex writing that needs the highest quality, like a final narrative draft. Use Fast for quick questions and simple lookups where speed matters more than nuance. See AI Model Tiers for details.
Let the AI interview you
One of the most effective ways to give Grantable context is to let it ask you questions. Instead of trying to write the perfect prompt, try:
- “Ask me 5 questions that would help you write a better needs statement for this application.”
- “Interview me about our youth mentoring program so you have enough context to draft a strong narrative.”
- “Before you start, ask me 10 questions about what makes our organization unique.”
The AI will ask targeted, specific questions about the things it actually needs to know — your programs, your evidence, your strategic direction. Answer them in plain language (you don’t need to be formal), and the AI now has a much richer foundation to work from.
This is especially useful when you’re starting fresh on a new application or working with a funder you haven’t pursued before. Think of it as briefing a new colleague: they’ll do better work if they can ask you what they need to know rather than guessing from a stack of documents.
Understanding AI output
Citations tell you where the AI got its information
When the AI references your source materials, it cites specific documents. Check these — they tell you exactly what evidence backed a particular claim or section. If a citation seems off, or the AI says it couldn’t find evidence, that’s a useful signal. It means there’s a gap in your uploaded content that you might want to fill.
The AI tells you when it’s unsure
Pay attention when the AI hedges or says it couldn’t find something. That’s honesty, not failure. Those moments are your cue to either provide the missing information in chat or upload a document that fills the gap.
Sometimes it needs your input
Budget numbers, specific program details, strategic decisions that aren’t captured in any document — the AI will ask for these when it hits a wall. Answering directly in chat is the fastest way to keep moving. You’re always in control of what goes into the final output.
Common patterns
Starting a new application
Upload the RFP and any relevant source materials first. Then ask the AI to analyze the requirements and outline an approach — this gives you a map before you start writing. Use /grant-writing to draft sections based on the outline, and /review to check the draft against what the funder actually asked for.
Evaluating whether an opportunity is worth pursuing
Ask the AI to assess your fit with a funder or opportunity. Review the fit score and the evidence behind each criterion. This helps you decide whether the application is worth pursuing before you’ve invested hours in it.
Improving a draft you’ve already started
Attach your draft to a message and ask the AI to review it against specific requirements, or use /review for a structured assessment. Then use follow-up prompts to revise specific sections. You stay in control of what changes and what doesn’t.
What the AI won’t do
This section is here to set expectations, not to list limitations.
- It won’t fabricate information. The AI doesn’t invent statistics, program outcomes, or organizational details. If it can’t find evidence in your materials, it says so. This is a feature — you never have to worry about made-up data ending up in a submission.
- It won’t override your decisions. The AI proposes and recommends, but every action requires your confirmation. You review, edit, and approve everything before it goes anywhere.
- It won’t produce good output without context. Vague prompts backed by no materials produce generic output. This isn’t a limitation of the AI — it’s just how the process works. The tips above are all about closing that gap.
Pro tip: If your first draft from the AI feels too generic, that’s almost always a context problem, not an AI problem. Run /profile to check your org profile, upload more source materials, and try again with a more specific prompt. The difference is usually dramatic.