Data Engineer

Yesterday

Edelman

Public Relations • Digital Communications • Social Media Marketing • Strategic Communications • Marketing

5001 - 10000 employees

Founded 1952

🤝 B2B

📱 Media

Description

• Edelman is a voice synonymous with trust, reimagining the future of communication.
• Our culture thrives on boldness, empathy, and curiosity.
• We understand that diversity, equity, inclusion, and belonging transform our workplaces.
• We are in pursuit of an inspiring and equitable workplace that fosters collaboration.
• Currently seeking a Data Engineer with 3-5 years' experience in Agile environments.
• Desired skills include cloud infrastructure tools such as Apache Airflow, Databricks, and Snowflake.
• Familiarity with real-time data processing and AI implementation is advantageous.
• Collaborative work environment that values every team member's voice.
• An exciting journey designing modern data pipelines and optimizing workflows.
• Focus on data ingestion, transformation, storage, and analysis, ensuring high data quality.
• Exploring generative AI for tasks like data enrichment and automated reporting.
• A unique opportunity to work on batch processing, streaming data pipelines, and automation (a minimal pipeline sketch follows this list).
• Empowering engineers to explore new tools while delivering high-quality solutions.
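To give a concrete sense of the pipeline work described above, here is a minimal, hypothetical sketch of a daily batch pipeline using Airflow's TaskFlow API (Airflow 2.4+). The DAG id, task logic, and data are illustrative assumptions, not taken from this posting.

    # A minimal, hypothetical batch-pipeline sketch using Airflow's TaskFlow
    # API (Airflow 2.4+). All names and data here are illustrative assumptions.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_ingest():
        @task
        def extract() -> list[dict]:
            # A real pipeline would pull from an API, database, or object store.
            return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Stand-in transformation; real logic might run in Databricks/Spark.
            return [{**row, "value": row["value"] * 2} for row in rows]

        @task
        def load(rows: list[dict]) -> None:
            # Stand-in load step; a real pipeline might COPY into Snowflake.
            print(f"loading {len(rows)} rows")

        load(transform(extract()))


    daily_ingest()

The same extract-transform-load shape generalizes from this batch example to the streaming pipelines the role mentions, with Kafka topics replacing the scheduled extract step.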

Requirements

• Minimum of 3 years' experience deploying enterprise-level, scalable data engineering solutions.
• Strong examples of data pipelines developed independently, end to end.
• Proven track record of building and managing scalable cloud-based infrastructure on AWS.
• Proven track record of implementing and managing the AI model lifecycle.
• Experience with Apache Airflow, Snowflake, and Lucene-based search engines.
• Experience with Databricks (Delta format, Unity Catalog).
• Advanced SQL and Python knowledge with associated coding experience.
• Strong experience with DevOps practices for CI/CD.
• Experience wrangling structured and unstructured file formats.
• Understanding and implementation of best practices within ETL and ELT processes.
• Data-quality best-practice implementation using Great Expectations (see the sketch after this list).
• Real-time data processing experience using Apache Kafka.
• Ability to work independently with minimal supervision.
• Takes initiative and is action-focused.
• Mentors and shares knowledge with junior team members.
• Collaborative, with a strong ability to work in cross-functional teams.
• Excellent communication skills.
• Fluency in spoken and written English.
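Since the requirements name Great Expectations for data quality, here is a minimal sketch using its classic pandas API (newer Fluent-API releases differ). The DataFrame, column names, and thresholds are hypothetical, chosen only to illustrate the technique.

    # A minimal data-quality sketch using Great Expectations' classic pandas
    # API (newer Fluent-API releases differ). Columns/values are hypothetical.
    import great_expectations as ge
    import pandas as pd

    df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [9.99, 25.00, 14.50]})

    # Wrap the DataFrame so expectation methods become available on it.
    ge_df = ge.from_pandas(df)

    # Declare expectations: keys must be present, amounts must be non-negative.
    ge_df.expect_column_values_to_not_be_null("order_id")
    ge_df.expect_column_values_to_be_between("amount", min_value=0)

    # Validate all declared expectations and fail fast on any violation.
    result = ge_df.validate()
    assert result.success, "Data quality checks failed"

In a pipeline, a check like this typically runs as its own step between transformation and load, so bad data is caught before it reaches the warehouse.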

Benefits

• Competitive compensation package
• Annual bonuses
• Paid Time Off policy
• Region-specific benefits

Apply Now

Similar Jobs

December 7

As a Data Architect, design data solutions for pharmaceutical clients at BASE life science. Collaborate with stakeholders to drive data governance and integration strategies.

November 29

Drive data strategy and best practices for EcoVadis' data-driven solutions. Collaborate with teams to enhance data operations.

November 20

Join Plain Concepts as a Big Data Engineer, building innovative data solutions remotely. Apply your skills in Python/Scala, Spark, and cloud technologies to impactful projects.
