16 hours ago
As a Mid-Level Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure. You'll work with cutting-edge AWS technologies to build robust, scalable data pipelines, ensuring the integrity, quality, and availability of our data.

• Design and Implement Data Pipelines:
  - Create and optimize ETL (Extract, Transform, Load) processes using AWS services such as Glue, S3, Lambda, and Data Pipeline.
  - Develop scalable data architectures that support our microservice-based data ecosystem.
  - Use Terraform for infrastructure as code (IaC) to provision and manage resources.
• Data Modeling and Transformation:
  - Apply data modeling best practices to ensure efficient data structures.
  - Use PySpark and Python scripting for data transformation and enrichment.
  - Work with Glue crawlers to discover and catalog data sources.
• AWS Services Expertise:
  - Build and maintain Lambda functions for serverless data processing.
  - Leverage Glue jobs for data transformation and orchestration.
  - Use the Glue Data Catalog for metadata management.
  - Implement logging, monitoring, and alerting with AWS CloudWatch and other relevant tools.
  - Work with Delta Lake and Databricks for advanced analytics and data processing.
  - Query data stored in S3 data lakes using AWS Athena.
• Collaboration and Communication:
  - Interface with business stakeholders to gather requirements and deliver complete reporting solutions.
  - Collaborate with cross-functional teams to ensure data consistency and accuracy.
• Continuous Improvement:
  - Stay current with industry trends and emerging technologies in data engineering.
  - Identify opportunities for process optimization and automation.
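As a rough illustration of the serverless processing step described above, here is a minimal sketch of an AWS Lambda handler that collects S3 object keys from an S3 event payload, the kind of glue code that might hand new objects off to a downstream Glue job. All names and the return shape are hypothetical, not taken from the posting; a real deployment would go on to trigger further processing rather than just return the keys.

```python
import json


def handler(event, context):
    """Hypothetical Lambda entry point for an S3 put event.

    Extracts the object keys from the event's Records so a downstream
    job (e.g. a Glue job) could pick them up for transformation.
    """
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
        if "s3" in record
    ]
    # Return the keys in a simple JSON envelope; a real handler would
    # typically start a Glue job run or write a manifest to S3 instead.
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}
```

Locally, the handler can be exercised by passing a dict shaped like an S3 event notification, which is how such functions are usually unit-tested before deployment.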
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• Minimum 3 years of experience in data engineering, preferably with a focus on AWS technologies.
• Proficiency in Python and PySpark.
• Familiarity with Terraform for infrastructure provisioning.
• Experience with Glue jobs, Lambda functions, and Glue crawlers.
• Knowledge of Delta Lake, Databricks, and S3 data lake storage.
• Strong problem-solving skills and attention to detail.
• Excellent communication and teamwork abilities.
17 hours ago
51 - 200
Data Engineer improving database performance for Vozy's voice AI platform.
🇺🇸 United States – Remote
💰 $1M Debt Financing on 2022-10
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
2 days ago
51 - 200
Data Engineer guiding clients' data modernization journey.
🇺🇸 United States – Remote
💰 Private Equity Round on 2018-06
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
2 days ago
10,000+
Data Engineer at Aboitiz Data Innovation driving data management solutions.
5 days ago
501 - 1000
Freelance Data Architect for optimizing Snowflake data warehouse at The Motley Fool.
🇺🇸 United States – Remote
💵 $80 - $90 / hour
💰 $25M Private Equity Round on 2009-11
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
5 days ago
51 - 200
Revology seeks AWS Data Architect for gaming data architecture projects.
🇺🇸 United States – Remote
💰 Private Equity Round on 2020-06
⏳ Contract/Temporary
🟡 Mid-level
🟠 Senior
🚰 Data Engineer