About Me

Hi! My name is Sophie, and I've committed to Stanford University's Class of 2030, where I intend to major in Symbolic Systems.

This page showcases some of my work in AI safety. AI safety scores well on the ITN framework for prioritizing global problems: it is critically important, tractable given its relative novelty, and neglected relative to its significance.

Experience Highlights

Jason Hausenloy's (Aspiring) Most Valuable Commodity
Veritas Fellow, Midas Project

Wrote an expanded AI whistleblower guide to be published on the Midas Project website; currently using my graphic design skills to create a print-native version of the OpenAI Files.

For my full resume, reach out!

Research samples:
(1) Bioweaponry Research: Highest-Likelihood Misuse Pathways + Concrete Policy Interventions;
(2) AGI Manhattan Project: Key Factors to Ensure Success + Current Policy Opportunities;
(3) Mapping Emergent AI-Aided Biosecurity Risks

I have also researched maximizing middle-power agency in an AI-dominated world, cloud computing as a means of circumventing export controls, and novel AI evaluation techniques to mitigate increasing benchmark saturation.

Independent Researcher