Data Engineer

October 14


Description

• Design and develop scalable data pipelines, ensuring they support the processing of large datasets efficiently.
• Integrate advanced analytics and machine learning models into our systems, leveraging your experience with Apache Spark for big data processing and analytics to derive insights and improve decision-making processes.
• Enhance our data governance and security measures, applying best practices to ensure compliance and protect sensitive information.
• Work closely with cross-functional teams, including data scientists and business stakeholders, to understand requirements and deliver high-impact data solutions.
• Deploy and manage robust data infrastructure, utilizing technologies like Kubernetes for orchestration, and ensuring high availability and performance of our data systems.

Requirements

• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• At least 2 years of relevant experience in data engineering, with a strong record of designing and implementing data solutions.
• Deep expertise in big data technologies (Apache Spark, DBT), cloud platforms (AWS, Azure, GCP), and data development in data lake/delta lake architectures.
• Proficiency in programming languages (Python, Java, Scala), SQL, and infrastructure-as-code technologies (Terraform, Helm charts for Kubernetes).
• Expert knowledge of Kubernetes, object stores (S3, Azure Data Lake Store, MinIO), and data modelling (dimensional modelling for OLAP, semantic data layers).
• Strong problem-solving skills, excellent communication abilities, and the capacity to thrive in a fast-paced environment.
