Senior Program Associate – AI Information Security, AI Governance and Policy
About the AI Governance & Policy (AIGP) Program
The AI Governance and Policy program works to improve society’s preparedness for transformative AI, particularly by mitigating global catastrophic risks. Our 11-person team aims to distribute over $150 million in grants annually over the coming years to organizations and individuals that focus on developing sound governance approaches, increasing governance capacity, and advancing policy research and advocacy that could reduce risks from transformative AI. You can read more about our priorities in our current Request for Proposals.
About Open Philanthropy
Open Philanthropy is a philanthropic funder and advisor; our mission is to help others as much as we can with the resources available to us. We stress openness to many possibilities and have chosen our focus areas based on importance, neglectedness, and tractability. Our current giving areas include potential risks from advanced artificial intelligence, global health and development, scientific research, global public health policy, farm animal welfare, and biosecurity and pandemic preparedness. In 2024, we recommended $650 million to high-impact causes, and we’ve recommended over $4 billion in grants since our formation.
About the role
We're seeking an expert in information security to lead our AI information security grantmaking and help shape our strategy in this critical area.
You'll work closely with other members of the AIGP team to identify, evaluate, and scale high-leverage funding opportunities that improve the security of advanced AI systems. You'll have substantial autonomy to develop and execute our AI information security strategy while managing a significant budget (likely in the $18-36 million/year range, possibly more depending on the quality of the opportunities you find).
Our work on AI information security includes safeguarding model weights and algorithmic insights, preventing system poisoning or sabotage, securing training data and compute resources, addressing vulnerabilities across the full machine learning supply chain (from compute resources to MLOps), and enabling secure third-party access for audits and evaluations. Your portfolio will likely span technical research, policy development, and ecosystem growth and support. Our previous grants have supported RAND's Meselson Center (which authored Securing Model Weights), security fieldbuilding projects such as Heron, and benchmarks like Cybench, CVEbench, and BountyBench.
Why we’re hiring now: As advanced AI rapidly progresses from research to real-world deployment, the challenges of securing frontier AI systems against theft, subversion, or sabotage by capable actors will become increasingly important. Strong information security at frontier AI projects could help prevent catastrophic misuse, and enable safer periods for AI alignment and control research and implementation. Yet we believe that without significant effort, information security at a number of frontier AI projects over the next few years will not be sufficient to ensure that systems remain secure. Because of this, we’d like to build out a dedicated AI information security workstream, with specialist knowledge in-house directing our efforts on this problem.
Core responsibilities include:
Strategic Direction & Execution: Develop, refine, and execute our strategy for supporting AI information security initiatives, focusing on the highest-impact interventions to reduce catastrophic risk
Grantmaking & Portfolio Management:
Source and evaluate promising grants, contracts, and projects
Design and run funding calls (e.g. Requests for Proposals)
Write clear and compelling grant recommendations
Build and maintain strong relationships with current and potential grantees
Oversee ongoing grants and actively seek new opportunities to advance the field
Technical Advising: Provide expert technical advice on security-related proposals and strategic questions across Open Philanthropy's AI teams
Network Development: Cultivate and maintain relationships spanning frontier AI labs, hyperscale cloud providers, government agencies, leading security consultancies, and academic institutions
You might be a great fit for this work if you:
Are strongly motivated to reduce catastrophic risks from advanced AI and see information security as a crucial intervention point for safer AI development.
Have deep professional experience (3-10+ years) in information security, including hands-on technical work and some project management experience.
Your knowledge of the security field will likely be ‘T-shaped’: some deep knowledge, lots of shallower knowledge.
While we're looking for depth in at least one relevant domain, the specific area matters less than having gone deep enough to develop technical taste and critical intuition.
Think critically about transformative AI scenarios and can reason through their security implications, especially for preventing worst-case outcomes.
Communicate well, explaining complex technical security concepts clearly and accurately to both specialist and non-specialist audiences.
Demonstrate strong judgment and strategic thinking, navigating uncertainty about both technical feasibility and strategic impact, and understanding where security fits in the larger AI governance ecosystem.
Take ownership proactively, identifying what needs to happen and making it happen, even when the path forward isn't clearly defined.
Desirable but not essential: Executive or team leadership experience, personnel management experience, broad familiarity with frontier AI research and development, experience in policy development or advocacy, and familiarity with research on AI alignment and control.
Above all, we are looking for people motivated to contribute to our mission of helping others as much as we can with the resources available to us. If this role aligns with your values and expertise, we encourage you to apply even if you don't meet every qualification listed above.
If you have more experience than we’re asking for above and think you might be overqualified, we’d still encourage you to apply. While we expect to hire at the Senior Program Associate level, we are open to hiring exceptional candidates for a more senior version of the role.
Process and timelines
Our application process will include:
An initial application that consists of answering a series of questions
An initial 30-minute interview and a paid work test
An interview with Alex Lawsen, who this role will report to
A series of final interviews with several Open Philanthropy team members, along with reference checks
We expect to make offers by mid-August and strongly encourage candidates to let us know if they need to hear back from us sooner at any point during the process.
Please note that due to time constraints, we cannot give feedback during the early stages of the process, including on work tests. Thank you for your understanding.
Role details & benefits
Compensation: The baseline compensation for this role is $215,485.91, which would be distributed as a base salary of $192,485.91 and an unconditional 401(k) grant of $23,000.
These compensation figures assume a remote location; there would be upward geographic adjustments for candidates based in San Francisco or Washington, D.C.
Time zones and location: You can work from anywhere but should be willing to overlap with the US East Coast timezone for at least 15 hours/week. We’d prefer someone who is based in the U.S. or open to traveling there periodically, but this isn’t a strict requirement.
We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
Benefits: Our benefits package includes:
Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.
Dental, vision, and life insurance for you and your family.
Four weeks of PTO recommended per year.
Four months of fully paid family leave.
A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.
A continual learning policy that encourages staff to spend time on professional development with related expenses covered.
Support for remote work — we’ll cover a remote workspace outside your home if you need one, or connect you with an Open Philanthropy coworking hub in your city.
We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).
Start date: Flexible, though we’d prefer someone to start as soon as possible after receiving an offer.
We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.
If you need assistance or an accommodation due to a disability, or have any other questions about applying, please contact [email protected].
Please apply by 11:59 pm (Pacific Time) on Sunday, July 6, to be considered.
US-based Program staff are typically employed by Open Philanthropy Project LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.