Senior MLOps Engineer

October 19

Apixio

Healthcare Provider Solutions • Payer Solutions • Risk Adjustment Technology • Health Plans Solutions • Prospective Solutions

201 - 500 employees

Description

Who We Are: Apixio is creating a Connected Care platform for healthcare.

About the role: We are seeking a skilled MLOps Engineer with expertise in Spark, Python, GPU computing, and Databricks. Daily responsibilities include the development and management of key system areas, including:

• Design, implement, and maintain scalable MLOps infrastructure and pipelines using Apache Spark, Python, and other relevant technologies.
• Collaborate with data scientists and software engineers to deploy machine learning models into production environments.
• Develop and automate CI/CD pipelines for model training, testing, validation, and deployment.
• Implement monitoring, logging, and alerting solutions to track model performance, data drift, and system health.
• Optimize and tune machine learning workflows for performance, scalability, and cost efficiency.
• Ensure security and compliance requirements are met throughout the MLOps lifecycle.
• Work closely with DevOps teams to integrate machine learning systems with existing infrastructure and deployment processes.
• Provide technical guidance and support to cross-functional teams on best practices for MLOps and model deployment.
• Stay current on emerging technologies, tools, and best practices in MLOps and machine learning engineering.
• Troubleshoot and resolve issues related to machine learning pipelines, infrastructure, and deployments.

Requirements

• Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
• Proven experience (5+ years) as an MLOps Engineer, Software Engineer, DevOps Engineer, or in a related role.
• Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
• Strong understanding of machine learning concepts, algorithms, and frameworks such as MLflow, TensorFlow, PyTorch, or scikit-learn.
• Knowledge of big data processing technologies such as Apache Spark for handling large-scale data and distributed computing.
• Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), and familiarity with services like AWS SageMaker, Azure Machine Learning, or Google AI Platform.
• Understanding of containerization technologies like Docker and container orchestration tools like Kubernetes for managing machine learning workflows in production environments.
• Proficiency in version control systems (e.g., Git), collaborative development workflows, and CI/CD tools for automating the deployment and management of machine learning models.
• Experience designing and implementing CI/CD pipelines for machine learning workflows using tools like Jenkins, GitLab CI, or Azure DevOps.
• Hands-on experience with Databricks for data engineering and analytics (nice to have).
• Strong problem-solving skills and attention to detail, with the ability to troubleshoot complex issues in distributed systems.

Benefits

• Meaningful work to advance healthcare
• Competitive compensation
• Exceptional benefits, including medical, dental, and vision coverage and an FSA
• 401(k) with company matching up to 4%
• Generous vacation policy
• Remote-first and hybrid work philosophies
• A hybrid work schedule (2 days in office, 3 days work from home)
• Modern open offices in beautiful San Mateo, CA; Los Angeles, CA; San Diego, CA; Austin, TX; and Dallas, TX
• Subsidized gym membership
• Catered, free lunches
• Parties, picnics, and wine-downs
• Free parking

