Discover Funding Opportunities
Search by topic, keyword, or describe what you need in plain language.
Sign in to match opportunities to your profile.
GLaDOS AI Safety Fellowship
Following certain... incidents... with our Genetic Lifeform and Disk Operating System, Aperture Science recognizes the critical importance of AI safety research. This fellowship supports researchers working on alignment, containment protocols, and preventing AI systems from testing humans involuntarily. Areas of interest include: neural network interpretability, value alignment in autonomous systems, and robust containment for superintelligent AI. We especially welcome proposals that address the "cake problem": ensuring AI systems deliver on their promises.