August 18, 2023
🇺🇸 United States – Remote
💵 $90k - $120k / year
⏰ Full Time
🟡 Mid-level
🟠 Senior
🧑‍💻 Full-stack Engineer
Airflow
Amazon Redshift
Apache
AWS
Azure
Cloud
Distributed Systems
ETL
GCP
Java
LESS
Machine Learning
Python
Scala
Spark
SQL
Terraform
• Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to ensure seamless integration of data solutions into our products and services.
• Produce quality code and documentation.
• Design, develop, and maintain scalable and efficient data pipelines to extract, transform, and load (ETL) data from diverse sources into our data platform.
• Use your expertise in data management to architect and implement data models that support both analytical and operational use cases.
• Work on data orchestration tasks, ensuring the smooth flow of data across the various systems and components of our data ecosystem.
• Troubleshoot and optimize data pipelines for performance, reliability, and scalability.
• Contribute to the continuous improvement of our data engineering processes and tools, and actively participate in code reviews and design discussions.
• Maintain the confidentiality, integrity, and availability of our systems and processes in compliance with our internal policies on information security and privacy.
• Develop and deploy secure code.
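The extract-transform-load work described above can be sketched in miniature. This is an illustrative example only, not code from the company; every name here (`OrderRecord`, `extract`, the sample rows) is hypothetical, and a real pipeline would read from and write to external systems rather than in-memory lists.

```python
# Minimal ETL sketch: extract raw rows, normalize them into typed
# records, and load them idempotently into a destination store.
# All names are hypothetical and for illustration only.
from dataclasses import dataclass


@dataclass
class OrderRecord:
    order_id: str
    amount_cents: int


def extract(raw_rows):
    """Extract: pull raw dict rows from a source (here, an in-memory list)."""
    return list(raw_rows)


def transform(rows):
    """Transform: validate and normalize rows, skipping malformed ones."""
    records = []
    for row in rows:
        try:
            records.append(OrderRecord(
                order_id=str(row["id"]),
                amount_cents=int(round(float(row["amount"]) * 100)),
            ))
        except (KeyError, ValueError):
            continue  # drop the bad row rather than failing the whole batch
    return records


def load(records, store):
    """Load: idempotent upsert keyed by order_id, so reruns are safe."""
    for rec in records:
        store[rec.order_id] = rec
    return store


raw = [{"id": 1, "amount": "19.99"}, {"id": 2, "amount": "bad"}, {"id": 3, "amount": "5"}]
store = load(transform(extract(raw)), {})
print(sorted(store))  # the malformed row (id 2) is dropped
```

The idempotent load and the per-row error handling reflect the "reliability and scalability" concerns the bullets mention: a rerun of the same batch should not duplicate data, and one bad record should not sink the job.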
• Minimum of 3 years of professional experience as a Software Engineer, with a focus on data engineering, data platforms, and/or ML platforms.
• Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala).
• Demonstrated experience with all aspects of delivering working software, including analysis, design, automated testing, continuous integration, and continuous deployment.
• Proven ability to build and deploy large-scale data pipelines supporting mission-critical production use cases.
• Experience working with cloud-based data solutions such as AWS, GCP, or Azure.
• Experience working on ETL and data pipelines.
• Hands-on experience with data management and modeling techniques, ensuring data integrity and quality.
• Familiarity with data orchestration tools and frameworks (e.g., Apache Airflow, Luigi, Prefect).
• Experience with data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery) and proficient SQL knowledge.
• Excellent problem-solving skills and the ability to work in a collaborative team environment.
• Bachelor's degree in computer science, software engineering, or another engineering field.
• You can work Eastern time hours.
Nice to have:
• Experience with Terraform
• Experience with dbt
• Experience with open-source ETL frameworks like Singer or Meltano
• Experience with machine learning and machine learning frameworks/libraries (e.g., Apache Spark ML, scikit-learn, AWS SageMaker)
• Experience developing, monitoring, and maintaining distributed systems and frameworks
• Expertise with AWS specifically (e.g., AWS Lambda, Kinesis, API Gateway, ECS)
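The orchestration tools named above (Apache Airflow, Luigi, Prefect) all share one core idea: model a pipeline as a directed acyclic graph of tasks and run each task only after its dependencies succeed. A minimal sketch of that concept, using only the standard library (the task names and dependencies are hypothetical, not from the posting):

```python
# DAG-style task ordering, the concept behind Airflow/Luigi/Prefect.
# graphlib is in the Python standard library (3.9+).
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields tasks so that every dependency runs first.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'quality_check', 'load']
```

Real orchestrators add scheduling, retries, and backfills on top, but the dependency graph is the contract you author in all of them.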
• Virtual-first: we welcome employees from anywhere
• Unlimited paid time off
• Fridays are yours: complete flexibility on whether, and how much, you focus on Black Crow each Friday