Research Engineer - Alignment Science

June 3


Anthropic

Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.

Company size: 51–200 employees

Description

• Build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems
• Contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems
• Run experiments that feed into key AI safety efforts at Anthropic
• Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts

Requirements

• Have significant software, ML, or research engineering experience
• Have some experience contributing to empirical AI research projects
• Have some familiarity with technical AI safety research
• Prefer fast-moving collaborative projects to extensive solo efforts
• Pick up slack, even if it goes outside your job description
• Care about the impacts of AI

Benefits

• Optional equity donation matching
• Comprehensive health, dental, and vision insurance for you and all your dependents
• 401(k) plan with 4% matching
• 22 weeks of paid parental leave
• Unlimited PTO – most staff take between 4–6 weeks each year, sometimes more!
• Stipends for education, home office improvements, commuting, and wellness
• Fertility benefits via Carrot
• Daily lunches and snacks in our office
• Relocation support for those moving to the Bay Area
