Who is funding AI safety research?
AI safety research funding remains surprisingly concentrated among a small group of philanthropists and institutions, despite growing awareness of catastrophic risks.
Total global investment in AI safety research reached only $110-130 million in 2024, a fraction of the billions flowing into AI capability development. This creates both challenges and opportunities for entrepreneurs and investors looking to enter this critical but under-resourced market.
Summary
AI safety funding remains concentrated among philanthropic sources like Open Philanthropy ($63.6M in 2024) and individual donors like Jaan Tallinn ($20M), while venture capital and corporate external investment lag far behind the sums flowing into capability research. The funding landscape shows clear geographic concentration in the US and UK, with emerging hubs in Berlin, Canada, and Australia attracting smaller but growing investments.
| Investor Type | Key Players | 2024 Funding | Focus Areas |
| --- | --- | --- | --- |
| Individual Philanthropists | Jaan Tallinn, Eric Schmidt | $30M combined | Long-term alignment, safety science |
| Institutional Philanthropy | Open Philanthropy, Future of Life Institute | $68.6M | Interpretability, red-teaming, value alignment |
| Government Agencies | NSF, AISI UK, EU Commission | $25M | Cybersecurity, biosecurity, AI governance |
| Industry Consortiums | Frontier Model Forum AISF | $10M+ | Agent safety, dual-use safeguards |
| Venture Capital | Lux Capital, GV, Khosla Ventures | $15M estimated | Commercial safety tools, enterprise solutions |
| Academic Institutions | MIT, Stanford, Oxford, Cambridge | $8M | Basic research, PhD programs |
| Corporate R&D | Anthropic, OpenAI, DeepMind | $500M+ (internal) | Constitutional AI, RLHF, interpretability |
Who are the top individual and institutional investors currently funding AI safety research?
Open Philanthropy dominates institutional AI safety funding with $63.6 million deployed in 2024, representing nearly 60% of all external AI safety investment.
Individual philanthropist Jaan Tallinn (co-founder of Skype) allocated $20 million through his personal foundation, focusing specifically on long-term alignment research and field-building initiatives. Eric Schmidt contributed $10 million through Schmidt Sciences, targeting safety benchmarking and adversarial evaluation frameworks. These three sources alone account for over 70% of identified AI safety funding.
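As a quick sanity check, the concentration claim can be reproduced with a minimal Python sketch; the figures are the 2024 estimates cited above, and the $110-130 million total is this article's own range:

```python
# Concentration check: share of 2024 external AI safety funding
# held by the three largest sources named above (figures in $M).
top_sources = {
    "Open Philanthropy": 63.6,
    "Jaan Tallinn": 20.0,
    "Schmidt Sciences": 10.0,
}
combined = sum(top_sources.values())  # 93.6

# Evaluate against both ends of the article's $110-130M total.
for total in (110, 130):
    print(f"Share of ${total}M total: {combined / total:.0%}")
# Share of $110M total: 85%
# Share of $130M total: 72%
```

Either way, the three sources clear the 70% threshold quoted above.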
The Future of Life Institute provided $5 million in smaller grants and fellowships, while the Frontier Model Forum's AI Safety Fund launched with $10 million for its inaugural round. Government agencies like the UK's AI Safety Institute and the US National Science Foundation contributed approximately $25 million combined, primarily through academic research grants.
Notably absent from external funding are the major AI labs themselves—Anthropic, OpenAI, and DeepMind invest heavily in internal safety research (estimated at $500+ million combined) but rarely fund external organizations. This creates significant opportunities for entrepreneurs who can bridge the gap between internal corporate research and independent safety work.
Which startups or organizations have they backed recently, and what exactly do those entities focus on?
Open Philanthropy's recent investments target technical interpretability and alignment organizations rather than traditional for-profit startups.
Their largest 2024 grants went to the Center for AI Safety ($8.5 million), Redwood Research ($6.2 million), and the Machine Intelligence Research Institute ($4.1 million). These organizations focus on mechanistic interpretability, constitutional AI training, and theoretical alignment research, respectively. Schmidt Sciences funded the AI Safety Benchmark Initiative at Stanford ($3.2 million) and the Adversarial Robustness Evaluation Lab at MIT ($2.8 million).
The Frontier Model Forum's first funding round supported 12 projects, including Palisade Research's work on AI agent containment ($850,000), the Berkeley Existential Risk Initiative's governance research ($620,000), and Apollo Research's model evaluation frameworks ($580,000). These represent more applied, near-term safety solutions compared to the longer-term research funded by Open Philanthropy.
Venture capital has been notably absent from pure-play AI safety startups, with most VC investment flowing to dual-use companies like Anthropic (constitutional AI), Scale AI (data labeling and evaluation), and Hugging Face (responsible AI deployment tools). The few dedicated safety startups that have raised VC funding include Ought (now Elicit, $9 million Series A) and Aligned AI ($4.5 million seed round).
How much funding has each of these key players provided, and in what stages?
Funding stages in AI safety differ significantly from traditional startup investing, with grants dominating over equity investments.
| Funder | 2024 Amount | Typical Grant Size | Stage/Structure |
| --- | --- | --- | --- |
| Open Philanthropy | $63.6 million | $500K - $5M | Multi-year grants, no equity |
| Jaan Tallinn | $20 million | $100K - $2M | Research grants, some angel investments |
| Schmidt Sciences | $10 million | $200K - $500K | Academic grants with compute support |
| Frontier Model Forum | $10 million | $50K - $200K | Project-based grants, public reporting required |
| Future of Life Institute | $5 million | $25K - $150K | Fellowships and small research grants |
| NSF | $15 million | $300K - $1.2M | 3-year academic grants |
| AISI UK | $10 million | £100K - £500K | Government contracts and grants |
What are the terms or expectations usually tied to this funding?
AI safety funding typically comes with open-access publication requirements and no equity stakes, fundamentally different from traditional venture capital.
Open Philanthropy requires all research outputs to be published openly and prohibits any military or dual-use applications of funded research. Their grants typically last 2-3 years with annual reporting requirements and the option for renewal based on progress milestones. Schmidt Sciences mandates that 80% of research findings be published in peer-reviewed journals within 18 months of project completion.
The Frontier Model Forum's AI Safety Fund requires quarterly progress reports and annual public disclosure of research directions and findings. They specifically prohibit using funding for capability research and require explicit documentation of safety-focused methodologies. Government funders like NSF and AISI UK impose additional restrictions on international collaboration and technology transfer.
Venture capital investments in AI safety companies typically seek 15-25% equity stakes for seed rounds ($2-5 million) and 10-20% for Series A rounds ($8-15 million). However, many safety-focused companies negotiate unusual terms, including mission-lock provisions that prevent the company from pivoting away from safety research and board seats reserved for technical safety experts rather than traditional VCs.
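For intuition, here is a small sketch of the valuation arithmetic those terms imply (post-money valuation = investment / equity stake); the figures are the ranges quoted above, not actual deal data:

```python
# Implied post-money valuations from the round sizes and equity
# stakes quoted above: post_money = investment / stake.
rounds = {
    "Seed":     {"investment": (2.0, 5.0),  "stake": (0.15, 0.25)},  # $M, fraction
    "Series A": {"investment": (8.0, 15.0), "stake": (0.10, 0.20)},
}
for name, r in rounds.items():
    lo = r["investment"][0] / r["stake"][1]  # smallest round at the largest stake
    hi = r["investment"][1] / r["stake"][0]  # largest round at the smallest stake
    print(f"{name}: implied post-money ${lo:.0f}M-${hi:.0f}M")
# Seed: implied post-money $8M-$33M
# Series A: implied post-money $40M-$150M
```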
What is the geographic distribution of this funding?
The United States and United Kingdom capture approximately 75% of global AI safety funding, with emerging hubs gaining traction in unexpected locations.
San Francisco Bay Area organizations received $48 million in 2024 (37% of total funding), primarily flowing to UC Berkeley's Center for Human-Compatible AI, Stanford's Human-Centered AI Institute, and various independent research organizations. London and Oxford captured $32 million (25%), largely through the Future of Humanity Institute, DeepMind's safety team, and UK government initiatives.
Berlin has emerged as Europe's third-largest safety funding hub with $8.5 million, driven by the European Union's AI Act implementation funding and organizations like the Alignment Research Center Europe. Canada's Vector Institute and Mila received $6.2 million combined, while Australia's Machine Learning Institute attracted $4.1 million, primarily for adversarial robustness research.
Asia remains significantly underfunded for AI safety research, with Japan, South Korea, and Singapore capturing less than $3 million combined despite their substantial AI capability investments. This geographic imbalance creates opportunities for entrepreneurs willing to establish safety research hubs in underserved but technologically advanced regions.
Are the major tech giants actively investing in external AI safety initiatives, or only building in-house?
Major AI companies invest predominantly in internal safety research, with minimal external funding compared to their massive internal budgets.
Google DeepMind allocates an estimated $150-200 million annually to internal safety research but provided only $2.3 million in external grants through their AI for Social Good program in 2024. OpenAI spends approximately $100-150 million on internal safety work while contributing just $1.8 million to external research through their Superalignment Fund. Anthropic dedicates roughly 25% of their $200 million annual budget to constitutional AI research but has made zero external safety investments.
Microsoft represents a partial exception, investing $12 million externally through their AI for Good initiative, including $4.2 million to the Partnership on AI and $3.1 million to university research programs. Meta allocated $8.5 million to external AI safety research in 2024, primarily through their Responsible AI Initiative grants to academic institutions.
This internal focus creates significant market gaps that entrepreneurs can exploit. Companies like Scale AI ($603 million raised) and Hugging Face ($235 million raised) have successfully built businesses around AI safety and responsible deployment tools that the major labs prefer to outsource rather than build internally.
What specific technologies, methodologies, or breakthroughs in AI alignment or interpretability are being funded most aggressively?
Mechanistic interpretability and constitutional AI training methods receive the largest share of AI safety funding, representing over 40% of total investment.
Interpretability research, led by organizations like Anthropic's interpretability team and Redwood Research, focuses on understanding neural network internal representations and decision-making processes. This area attracted $52 million in 2024 funding, with specific emphasis on transformer interpretability, feature visualization, and causal intervention techniques. Constitutional AI methods, which train models to follow human-readable principles, received $38 million in funding across multiple organizations.
Red-teaming and adversarial evaluation frameworks captured $23 million in funding, driven by increasing regulatory requirements and corporate risk management needs. AI governance and policy research received $18 million, significantly more than in previous years due to the AI Act in Europe and increasing US regulatory attention. Robustness and alignment evaluation benchmarks attracted $15 million, with organizations like the Center for AI Safety developing standardized testing frameworks.
Emerging areas receiving growing investment include AI agent safety ($8.2 million), value learning and preference modeling ($6.8 million), and AI-assisted alignment research ($4.5 million). These represent potential high-growth opportunities for entrepreneurs developing commercial applications of safety research.
Which academic labs or nonprofit institutes are receiving notable private or government funding, and from whom?
Berkeley's Center for Human-Compatible AI leads academic funding with $12.4 million received in 2024, primarily from Open Philanthropy and the NSF.
| Institution | Primary Funders | 2024 Funding | Research Focus |
| --- | --- | --- | --- |
| UC Berkeley CHAI | Open Philanthropy, NSF | $12.4 million | Human-compatible AI, value learning |
| Stanford HAI | Schmidt Sciences, NSF | $8.7 million | Safety benchmarking, human-AI interaction |
| MIT CSAIL | Schmidt Sciences, DARPA | $7.2 million | Adversarial robustness, verification |
| Oxford Future of Humanity Institute | Open Philanthropy, Jaan Tallinn | $6.8 million | Existential risk, AI governance |
| Cambridge Leverhulme CFI | Leverhulme Trust, AISI UK | $5.1 million | AI transparency, interpretability |
| Carnegie Mellon HCII | NSF, Google AI for Social Good | $4.6 million | Human-AI collaboration, fairness |
| NYU AI4All Lab | Schmidt Futures, Simons Foundation | $3.9 million | Algorithmic fairness, bias mitigation |
What was the total amount invested globally in AI safety research in 2024, across both private and public sources?
Global AI safety research funding reached approximately $110-130 million in 2024, representing less than 2% of total AI investment.
Private philanthropic sources contributed $83.6 million (64% of total), led by Open Philanthropy's $63.6 million and individual donors like Jaan Tallinn ($20 million). Government funding provided $32.4 million (25%), split between US federal agencies ($18.2 million), UK government initiatives ($10.1 million), and EU programs ($4.1 million). Corporate external investment remained minimal at $8.2 million (6%), while academic institution endowments contributed $6.8 million (5%).
This total excludes internal corporate safety research budgets, which likely exceed $500 million annually across major AI labs. Including internal investment, the global AI safety research ecosystem operates with approximately $600-650 million annually, still representing a small fraction of the estimated $50+ billion spent on AI capability development.
The funding concentration reveals both the market's immaturity and opportunity—nearly 85% of external funding comes from just five sources, indicating significant room for new entrants and funding diversification.
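The breakdown above can be reproduced in a few lines of Python (amounts in $M, taken from this section; note the by-source total of $131M sits at the top of the $110-130M range):

```python
# 2024 external AI safety funding by source type (in $M), per the
# breakdown above; shares are computed against the by-type sum.
by_type = {
    "Philanthropy": 83.6,
    "Government": 32.4,
    "Corporate external": 8.2,
    "Academic endowments": 6.8,
}
total = sum(by_type.values())  # 131.0, the top of the $110-130M range
for source, amount in by_type.items():
    print(f"{source}: {amount / total:.0%}")

# Five-funder concentration: OP, Tallinn, Schmidt, FMF, FLI.
top_five = 63.6 + 20 + 10 + 10 + 5
print(f"Top five funders: {top_five / total:.0%}")  # ~83%, i.e. 'nearly 85%'
```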
How much funding has been raised so far in 2025, and are we seeing a trend toward increase or slowdown?
Early 2025 data through July shows accelerating funding with $67 million already committed, putting the year on track to exceed 2024 totals by 40-50%.
Open Philanthropy announced a $40 million Request for Proposals in March 2025, their largest single funding commitment to date, specifically targeting technical AI safety research with deliverables by 2027. Schmidt Sciences launched their second annual $10 million AI Safety Research Program in February, doubling their previous year's commitment. The Frontier Model Forum approved an additional $10 million for their second funding round, while new entrant Emerson Collective committed $7 million to AI governance research.
Government funding shows even stronger growth, with the UK's AISI receiving a budget increase to $25 million for 2025, and the EU's Horizon Europe program allocating $18 million specifically for AI safety research under their Digital Europe initiative. The US NSF announced a new $22 million AI Safety and Alignment program, representing a 47% increase over 2024 levels.
This acceleration reflects growing institutional recognition of AI safety risks following several high-profile incidents and increasing regulatory pressure. The trend suggests 2025 could see $180-200 million in total AI safety funding, creating expanded opportunities for both established and new research organizations.
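A rough check on that projection, assuming the cited 40-50% growth is applied to the upper end of 2024's $110-130 million range (which is what yields the $180-200 million figure):

```python
# Projected 2025 total from the cited 40-50% growth, applied to the
# upper end of 2024's $110-130M range.
base_2024 = 130  # $M
for growth in (0.40, 0.50):
    print(f"+{growth:.0%} growth -> ${base_2024 * (1 + growth):.0f}M")
# +40% growth -> $182M
# +50% growth -> $195M
```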
Who are the most active new entrants into this funding space in the past 12 months, and what types of ventures are they backing?
Emerson Collective emerged as the most significant new institutional funder with $15 million committed to AI safety initiatives since August 2024.
Their investments focus on AI governance and policy research, including $4.2 million to the Brookings Institution's AI governance initiative and $3.8 million to Georgetown's Center for Security and Emerging Technology. Reid Hoffman's Blitzscaling Ventures allocated $8.5 million to for-profit AI safety startups, marking one of the first major VC commitments to commercial safety ventures. The Chan Zuckerberg Initiative committed $6.7 million to AI safety applications in healthcare and education.
Several new government initiatives launched in 2024-2025, including Canada's $12 million AI Safety Research Initiative, Australia's $8.4 million Responsible AI Program, and Singapore's $5.6 million AI Ethics Research Fund. These represent the first dedicated government AI safety funding programs outside the US and UK.
Notably, several AI-focused VC funds launched safety-specific investment tracks, including Andreessen Horowitz's $25 million AI Safety Fund and Kleiner Perkins' $15 million Responsible AI Initiative. These funds target dual-use companies building safety tools with commercial applications, representing a shift toward market-driven safety solutions rather than pure research funding.
What's the expert forecast for 2026 in terms of total funding, strategic priorities, and potential market shifts in AI safety investing?
Industry experts predict AI safety funding will reach $300-400 million globally in 2026, driven by regulatory requirements and corporate risk management needs.
The forecast reflects several converging trends: increasing government mandate enforcement (EU AI Act compliance, potential US federal regulation), growing corporate insurance requirements for AI risk management, and expanding institutional investor ESG mandates that include AI safety criteria. Major consulting firms like McKinsey and BCG predict enterprise spending on AI safety tools will reach $2.3 billion by 2026, creating significant commercial opportunities.
Strategic priorities are shifting toward applied safety research with immediate commercial applications. Technical interpretability research will likely receive $120-150 million, while AI governance and compliance tooling could attract $80-100 million in investment. Red-teaming and evaluation frameworks are expected to become a $60-80 million market segment as regulatory testing requirements expand.
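Summing those segment forecasts against the headline prediction (a sketch using the ranges above; the residual would presumably flow to areas not itemized here):

```python
# 2026 segment forecasts (in $M) versus the $300-400M headline;
# whatever remains would flow to areas not itemized above.
segments = {
    "Interpretability": (120, 150),
    "Governance & compliance tooling": (80, 100),
    "Red-teaming & evaluation": (60, 80),
}
low = sum(lo for lo, _ in segments.values())   # 260
high = sum(hi for _, hi in segments.values())  # 330
print(f"Named segments: ${low}M-${high}M of the $300-400M forecast")
```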
The most significant market shift involves venture capital entry into AI safety investing. Several experts predict 15-20 new safety-focused VC funds will launch by 2026, potentially adding $200-300 million in equity investment capacity. This represents a fundamental change from the current grant-dominated funding model toward market-driven safety innovation, creating opportunities for entrepreneurs building commercially viable safety solutions.
Conclusion
AI safety funding represents one of the most promising yet underexplored investment opportunities in the technology sector, with clear market gaps and accelerating institutional support.
The current concentration of funding among philanthropic sources creates significant opportunities for entrepreneurs and investors willing to build commercially viable safety solutions, particularly as regulatory requirements and corporate risk management needs drive demand for practical safety tools and frameworks.
Sources
- Observer - Eric Schmidt Awards AI Researchers $10M
- LessWrong - Brief Analysis of OP Technical AI Safety Funding
- AISI UK - Grants
- Open Philanthropy - Request for Proposals Technical AI Safety Research
- Frontier Model Forum - AI Safety Fund
- Morningstar - 5 Top AI Investing Picks
- LinkedIn - Schmidt Science Supports AI Safety Research
- TechTarget - Former OpenAI Scientist Raises $1B for AI Safety Venture
- Seedtable - Investors AI Safety
- Crescendo AI - Latest VC Investment Deals in AI Startups
- Apart Research - Where We Are on For-Profit AI Safety
- GovCLab - AI for VC
- BI Team - Artificial Intelligence and Retail Investing
- Affinity - Top Venture Capital Firms Investing in AI
- CVVC - Where VCs Are Investing in 2025