Senior MLOps Engineer

October 19


Apixio

Healthcare Provider Solutions • Payer Solutions • Risk Adjustment Technology • Health Plans Solutions • Prospective Solutions

201 - 500 employees

Description

• Who We Are: Apixio is creating a Connected Care platform for healthcare.
• About the Role: We are seeking a skilled MLOps Engineer with expertise in Spark, Python, GPU computing, and Databricks. Daily responsibilities include the development and management of key system areas, including:
• Design, implement, and maintain scalable MLOps infrastructure and pipelines using Apache Spark, Python, and other relevant technologies.
• Collaborate with data scientists and software engineers to deploy machine learning models into production environments.
• Develop and automate CI/CD pipelines for model training, testing, validation, and deployment.
• Implement monitoring, logging, and alerting solutions to track model performance, data drift, and system health.
• Optimize and tune machine learning workflows for performance, scalability, and cost efficiency.
• Ensure security and compliance requirements are met throughout the MLOps lifecycle.
• Work closely with DevOps teams to integrate machine learning systems with existing infrastructure and deployment processes.
• Provide technical guidance and support to cross-functional teams on best practices for MLOps and model deployment.
• Stay current on emerging technologies, tools, and best practices in MLOps and machine learning engineering.
• Troubleshoot and resolve issues related to machine learning pipelines, infrastructure, and deployments.

Requirements

• Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
• Proven experience (5+ years) as an MLOps Engineer, Software Engineer, DevOps Engineer, or in a related role.
• Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
• Strong understanding of machine learning concepts, algorithms, and frameworks such as MLflow, TensorFlow, PyTorch, or scikit-learn.
• Knowledge of big data processing technologies such as Apache Spark for handling large-scale data and distributed computing.
• Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), and familiarity with services like AWS SageMaker, Azure Machine Learning, or Google AI Platform.
• Understanding of containerization technologies like Docker and container orchestration tools like Kubernetes for managing machine learning workflows in production environments.
• Proficiency with version control systems (e.g., Git), collaborative development workflows, and CI/CD tools for automating the deployment and management of machine learning models.
• Experience designing and implementing CI/CD pipelines for machine learning workflows using tools like Jenkins, GitLab CI, or Azure DevOps.
• Hands-on experience with Databricks for data engineering and analytics (nice to have).
• Strong problem-solving skills and attention to detail, with the ability to troubleshoot complex issues in distributed systems.

Benefits

• Meaningful work to advance healthcare
• Competitive compensation
• Exceptional benefits, including medical, dental, and vision coverage, plus FSA
• 401(k) with company matching up to 4%
• Generous vacation policy
• Remote-first & hybrid work philosophies
• A hybrid work schedule (2 days in office, 3 days work from home)
• Modern open offices in beautiful San Mateo, CA; Los Angeles, CA; San Diego, CA; Austin, TX; and Dallas, TX
• Subsidized gym membership
• Catered, free lunches
• Parties, picnics, and wine-downs
• Free parking

