November 21
• At Collective[i], we value diversity of experience, knowledge, and backgrounds, and people who share a commitment to building a company and community on a mission to help people be more prosperous.
• We recruit extraordinary individuals and give them a platform to contribute their exceptional talents and the freedom to work from wherever they choose.
• Our company is a wonderful place to learn and grow alongside an incredible and tenacious team.
• Collective[i] was founded by three entrepreneurs with over $1B of prior exits. Their belief in the power of Artificial Intelligence to transform life as we know it and improve economic outcomes at massive scale drove their decision to invest over $100M in the company, which has built a state-of-the-art platform for prosperity that helps companies generate sales and people expand their professional connections.
• In the last decade, Collective[i] has grown into a powerful community of scientists, engineers, creative talent, and more, working together to help people succeed in business.
• We are seeking an experienced Senior Data Engineer with a strong background in AWS DevOps and data engineering to join our team. In this role, you will manage and optimize our data infrastructure, covering both data engineering and DevOps responsibilities. A key aspect of the role involves deploying machine learning models to AWS using SageMaker, so expertise with AWS and SageMaker is essential (see the deployment sketch below). Experience with Snowflake is highly desirable, as our data environment is built around Snowflake for analytics and data warehousing.
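As context for the SageMaker deployment work mentioned above, here is a minimal sketch of deploying a pre-trained model artifact to a real-time SageMaker endpoint with the sagemaker Python SDK. The role ARN, ECR image, S3 path, and endpoint name are hypothetical placeholders, not details from the posting.

```python
import sagemaker
from sagemaker.model import Model

# Hypothetical values -- substitute your own account's resources.
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/inference-image:latest"
MODEL_DATA = "s3://example-bucket/models/model.tar.gz"

session = sagemaker.Session()

# Wrap the trained model artifact and its inference container.
model = Model(
    image_uri=IMAGE_URI,
    model_data=MODEL_DATA,
    role=ROLE_ARN,
    sagemaker_session=session,
)

# Deploy to a managed real-time endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="example-endpoint",
)
```

In practice a deployment like this would typically run inside a CI/CD pipeline rather than by hand, which is where the DevOps side of the role comes in.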
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
• 5+ years of experience in data engineering, including at least 3 years working in AWS environments.
• Strong knowledge of AWS services, specifically SageMaker, Lambda, Glue, and Redshift.
• Hands-on experience deploying machine learning models with AWS SageMaker.
• Proficiency in DevOps practices, including CI/CD pipelines, containerization (Docker, ECS, EKS), and infrastructure-as-code (IaC) tools such as Terraform or CloudFormation.
• Advanced SQL skills and experience building and maintaining complex ETL workflows.
• Proficiency in Python, with additional skills in Java or Scala.
• Practical experience with Airflow for DAG management and data orchestration (a minimal example DAG is sketched below).
• Proficiency with version control (Git) and containerized deployment using Docker and managed services such as AWS Fargate, ECS, or EKS.
• Effective communication and a results-oriented approach.
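For the Airflow requirement above, a minimal sketch of an Airflow 2.x DAG with two dependent tasks might look like the following; the DAG id, schedule, and task bodies are illustrative assumptions, not part of the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Hypothetical extract step: pull raw records from a source system.
    pass


def load():
    # Hypothetical load step: write transformed records to the warehouse.
    pass


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract before load.
    extract_task >> load_task
```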
November 10
Data engineer at a consulting firm scaling data for startups and Fortune 500 companies.
October 19
Data Engineer to combine data sources for vehicle valuation insights.
October 17
Build and optimize data architecture and pipelines for analytics initiatives.
September 27
Data Engineer for building, managing, and optimizing data pipelines at Tarkett.
March 23
Software Engineer to build and scale data pipelines for EvenUp's AI-driven infrastructure.