August 27
• Own an LLM vertical with a focus on a specific safety domain, technique, or use case (from either a defense or red-team attack perspective)
• Generate high-quality synthetic data, train LLMs, and conduct rigorous benchmarking
• Deliver robust, scalable, and reproducible production code
• Push the envelope by developing novel techniques and research that deliver the world's most harmless and helpful models. Your research will directly empower our customers to deploy safe and responsible LLMs more feasibly.
• Co-author papers, patents, and presentations with our research team by integrating other members' work with your vertical
• Deep domain knowledge in LLM safety techniques
• Extensive experience designing, training, and deploying multiple types of LLM models and architectures in the real world; comfort with leading end-to-end projects
• Adaptability and flexibility. In both the academic and startup worlds, a new finding in the community may necessitate an abrupt shift in focus. You must be able to learn, implement, and extend state-of-the-art research.
• Preferred: past research or projects in either attacking or defending LLMs
May 30
51 - 200
🇺🇸 United States – Remote
💵 $125k - $175k / year
💰 $60M Series C on 2021-10
⏰ Full Time
🟡 Mid-level
🟠 Senior
Research Engineer
🗽 H1B Visa Sponsor