Research Engineer - Alignment Science

June 3


Description

• Build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems
• Contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems
• Run experiments that feed into key AI safety efforts at Anthropic
• Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts

Requirements

• Have significant software, ML, or research engineering experience
• Have some experience contributing to empirical AI research projects
• Have some familiarity with technical AI safety research
• Prefer fast-moving collaborative projects to extensive solo efforts
• Pick up slack, even if it goes outside your job description
• Care about the impacts of AI

Benefits

• Optional equity donation matching
• Comprehensive health, dental, and vision insurance for you and all your dependents
• 401(k) plan with 4% matching
• 22 weeks of paid parental leave
• Unlimited PTO – most staff take 4–6 weeks each year, sometimes more!
• Stipends for education, home office improvements, commuting, and wellness
• Fertility benefits via Carrot
• Daily lunches and snacks in our office
• Relocation support for those moving to the Bay Area
