Foundational Research Grants

Center for Security and Emerging Technology

Funding Amount

Up to US $1,000,000

Deadline

Rolling / Open

Grant Type

Foundation

Overview


Status: ACTIVE
Funder: Center for Security and Emerging Technology
Amount: Up to US $1,000,000
Last Updated: May 23, 2025

Summary

The Foundational Research Grants (FRG) program supports research exploring fundamental technical issues related to the long-term national security implications of AI. Unlike traditional CSET research, FRG finances external projects by technical teams. Current areas of interest include AI assurance for complex systems, technical tools for external scrutiny, and the exploration of risks and regulations surrounding frontier AI. The program aims to enhance understanding of these critical issues from both strategic and policy perspectives.

Background

Foundational Research Grants (FRG) supports the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term. In contrast to most CSET research, which is performed in-house by our team of fellows and analysts, FRG funds external projects by technical teams. The program aims to advance our understanding of underlying technical issues to shed light on questions of interest from a strategic or policy perspective.

Current areas of interest include:

- AI assurance for general-purpose systems in open-ended domains: Machine learning systems are rapidly becoming larger, more complex, more capable, and more general-purpose. Existing assurance approaches for systems with automated or autonomous capabilities do not appear to be well suited to the kinds of large-scale deep learning systems currently being developed and deployed. FRG is interested in whether, and how rapidly, assurance approaches suitable for such systems are likely to be developed, both now and for the long term.

- Technical tools for external scrutiny of AI: As AI's impact on the world grows, so does the need for external scrutiny of privately held AI systems, to ensure that they are being developed and used in safe and ethical ways. But granting access to outsiders has ramifications for the privacy, security, and intellectual property of AI developers. There are early indications that different technical methods, including approaches using privacy-preserving tools and hardware-based features, can help reduce these tensions. FRG is interested in investigating how well these tools can work in practice.

- Frontier AI risks and regulations: The term "frontier AI" has begun to be used to refer to general-purpose AI systems that are at or just beyond the current cutting edge. These systems raise a range of questions and policy challenges that FRG is interested in exploring.

- AI security and nonproliferation: As AI systems become more capable, it will be important that their developers are able to prevent unauthorized actors from accessing or using them. FRG is interested in supporting work that could make this more feasible.

Call for research ideas: Risks From Internal Deployment of Frontier AI Models


Focus Areas & Funding Uses

Fields of Work

Science & Research

