Building products and software that make life insurance accessible to millions of families.
Data Science and User Research
201 - 500
September 6
• Build robust solutions for managing real-time and batch data
• Develop hardened and repeatable CI/CD data models and pipelines to enable reporting, modeling, and machine learning
• Design and implement data pipelines to support blending data from multiple sources and machine learning models, ensuring efficient data processing and feature engineering
• Improve data availability and quality for our enterprise clients through automated monitoring and alerting
• Leverage Google Cloud (GCP) tools and other services (e.g., Astronomer - Apache Airflow) to bring data workloads to production
• Enable end-user configuration of product features, ensuring seamless synchronization with the broader application
• Collaborate with cross-functional teams to deliver informed solutions that meet platform and client needs
• Make team-based decisions, fostering shared responsibility for defensible design considerations
• 5+ years working in a data engineering role supporting product, analytics, and data science teams
• Proficient in SQL and schema design, with experience in columnar databases such as Google BigQuery, Snowflake, or Amazon Redshift (familiarity with GraphQL a plus)
• 4+ years of Python (or similar experience) writing efficient, testable, and readable code
• Experience building and optimizing real-time and batch processing solutions, ensuring high availability and low latency for timely insights and actions
• Skilled in designing end-to-end data pipelines in cloud frameworks (GCP, AWS, Azure) with multi-stakeholder requirements
• Familiarity with Google Cloud (GCP) tools (e.g., Cloud Run, Cloud Functions, Vertex AI, App Engine, Cloud Storage, IAM)
• Experience with CI/CD pipelines for data processing (Docker, CircleCI, dbt, git)
• Proficient in Infrastructure as Code (Terraform or Pulumi) and data orchestration tools (e.g., Apache Airflow)
• Remote (contiguous 48 states only) or hybrid workplace
• Meaningful benefits
• Substantial growth opportunities
• Equity
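As a rough illustration of the kind of batch-pipeline work the listing above describes (not code from the posting — every name here is hypothetical), a single cleaning step might read raw records, drop unusable rows, and normalize fields before loading them into a warehouse table:

```python
# Hypothetical sketch of one batch-cleaning step in a data pipeline.
# Reads CSV records, skips rows missing a policy ID, and normalizes
# premium amounts to two decimal places. Uses only the standard library.
import csv
import io

def clean_batch(raw_csv: str) -> list[dict]:
    """Drop rows with a missing policy_id and round premiums to cents."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row.get("policy_id"):
            continue  # skip unusable records rather than failing the batch
        row["premium"] = round(float(row["premium"]), 2)
        rows.append(row)
    return rows

raw = "policy_id,premium\nA1,100.456\n,50.0\nB2,75.1\n"
print(clean_batch(raw))
```

In production, a step like this would typically run as a task inside an orchestrator such as Apache Airflow and write to a columnar store such as BigQuery, per the tooling named in the posting.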
August 31
201 - 500
Build data architecture for a fast-paced SaaS company's product.
🇺🇸 United States – Remote
💵 $110k - $130k / year
💰 Private Equity Round on 2019-12
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🗽 H1B Visa Sponsor
August 31
11 - 50
Clinical data engineer & analyst at AI-driven medical care startup.
🇺🇸 United States – Remote
💵 $125k - $200k / year
💰 $2M Seed Round on 2020-09
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
August 30
51 - 200
Design and maintain robust data pipelines for Captiv8's analytics and data science.
August 30
11 - 50
Design and maintain data pipelines for Vetcove's veterinary eCommerce platforms.