
Data Engineer - Data Warehousing

3 days ago


Everstream Analytics

We keep the world moving by keeping risk out of the way

AI • Machine Learning • Supply Chain Visibility • Supply Chain Risk Management • Multi-tier Visibility

51 - 200 employees

Description

• As a Data Engineer at Everstream Analytics, you will play a critical role in building and maintaining our data infrastructure.
• You will work with a team of talented engineers to design, develop, and optimize data pipelines and data products that support our multi-tenant cloud-native data platform, leveraging AWS services such as Lambda, EMR, S3, Glue, and Redshift, as well as helping drive our future toolset.
• Your expertise in distributed system design, data warehousing, data lakes, and ETL/orchestration is essential to ensuring the scalability, reliability, and efficiency of our data infrastructure.
• Pipeline Development: Design, implement, and maintain data pipelines that handle large volumes of data from various sources, ensuring data quality, integrity, and availability.
• AWS Expertise: Utilize AWS services like Lambda, EMR, S3, Glue, and others to create scalable and cost-effective data solutions.
• Relational Database Experience: Utilize PostgreSQL on RDS or similar database technologies, where applicable.
• Stream Processing: Use Apache Kafka, Apache Spark, or similar tools for real-time data processing and stream analytics.
• Python Development: Primarily use Python for data engineering tasks, data transformation, and ETL processes.
• Data Warehousing: Implement and manage data warehousing and/or data lake solutions for efficient data storage and retrieval, supporting engineering, data science, applications, and groups across our organization.
• Collaboration: Work closely with Product Management, Data Science, and the leadership team to understand data requirements and deliver data solutions that meet business needs.
• Monitoring and Optimization: Continuously monitor the performance of data pipelines to optimize scalability and efficiency.
• Documentation: Maintain comprehensive documentation for data engineering processes, ensuring knowledge transfer within the team.
• Leadership: Lead by example within the data engineering team, taking pride in your team's deliverables, and serving as technical lead for a scrum team or on various projects, where applicable.

Requirements

• Proven experience designing and building multi-tenant cloud-native data platforms in a SaaS or PaaS environment.
• Strong experience with cloud data warehouses such as AWS Redshift, Snowflake, BigQuery, or Databricks.
• Extensive production experience with relational database technologies, specifically PostgreSQL.
• Strong expertise in AWS services and ETL/orchestration tools (Glue, Spark, Airflow, Apache SeaTunnel).
• Proficiency in distributed system design, data warehousing, data lakes, and stream processing using Spark or similar.
• Strong programming skills in Python.
• Excellent problem-solving and troubleshooting skills.
• Ability to work collaboratively with cross-functional teams and convey complex technical concepts to non-technical stakeholders.
• Bachelor's or Master's degree in Computer Science, Data Engineering, a related field, or equivalent experience.

