Data Engineer

Description

• As a Mid-Level Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure.
• You'll work with cutting-edge AWS technologies to build robust and scalable data pipelines, ensuring the integrity, quality, and availability of our data.
• Design and Implement Data Pipelines: Create and optimize ETL (Extract, Transform, Load) processes using AWS services such as Glue, S3, Lambda, and AWS Data Pipeline (a sketch of this kind of ETL step appears after this list).
• Develop scalable data architectures that support our microservice-based data ecosystem.
• Use Terraform for infrastructure as code (IaC) to set up and manage resources.
• Data Modeling and Transformation: Implement best practices in data modeling, ensuring efficient data structures.
• Use PySpark and Python scripting for data transformation and enrichment.
• Work with Glue crawlers to discover and catalog data sources.
• AWS Services Expertise: Build and maintain Lambda functions for serverless data processing.
• Leverage Glue jobs for data transformation and orchestration.
• Use the Glue Data Catalog for metadata management.
• Implement logging, monitoring, and alerting using AWS CloudWatch and other relevant tools.
• Work with Delta Lake and Databricks for advanced analytics and data processing.
• Query data stored in S3 data lakes using AWS Athena.
• Collaboration and Communication: Interface with business stakeholders to gather requirements and deliver complete reporting solutions.
• Collaborate with cross-functional teams to ensure data consistency and accuracy.
• Continuous Improvement: Stay up to date with industry trends and emerging technologies in data engineering.
• Identify opportunities for process optimization and automation.
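
For illustration, a minimal PySpark sketch of the kind of ETL step described above. The S3 paths, dataset, and column names are hypothetical placeholders, not specifics of this role.

# Minimal PySpark ETL sketch: extract raw JSON from S3, clean and enrich it,
# and load partitioned Parquet back to a curated zone for Athena/Glue queries.
# All bucket paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read raw JSON landed in an S3 data lake (placeholder path)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: deduplicate and derive a few enriched columns
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("total_amount", F.col("quantity") * F.col("unit_price"))
)

# Load: write partitioned Parquet to the curated zone (placeholder path)
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)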

Requirements

• Bachelor’s degree in Computer Science, Information Systems, or a related field.
• Minimum of 3 years of experience in data engineering, preferably with a focus on AWS technologies.
• Proficiency in Python and PySpark.
• Familiarity with Terraform for infrastructure provisioning.
• Experience with Glue jobs, Lambda functions, and Glue crawlers (see the sketch after this list).
• Knowledge of Delta Lake, Databricks, and S3 data lake storage.
• Strong problem-solving skills and attention to detail.
• Excellent communication and teamwork abilities.
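
As a small example of the Lambda-plus-Glue pattern referenced above, a minimal handler sketch. The event shape follows the standard S3 notification format; the Glue job name and argument key are hypothetical.

# Illustrative AWS Lambda handler for serverless data processing: triggered by an
# S3 "object created" event, it starts a Glue job run for each new object.
# The Glue job name and the "--input_path" argument key are placeholders.
import json
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Iterate over the S3 notification records in the triggering event
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="example-orders-etl",  # placeholder Glue job name
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"statusCode": 200, "body": json.dumps("job runs started")}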
