December 5
• responsible for at-scale infrastructure design, build, and deployment
• building and maintaining architecture patterns for data processing, workflow definitions, and system-to-system integrations using Big Data and Cloud technologies
• evaluating and translating technical designs into workable technical solutions/code and technical specifications on par with industry standards
• driving the creation of reusable artifacts
• establishing scalable, efficient, automated processes for data analysis, data model development, validation, and implementation
• working closely with analysts/data scientists to understand the impact on downstream data models
• writing efficient, well-organized software to ship products in an iterative, continual-release environment
• contributing to and promoting good software engineering practices across the team
• communicating clearly and effectively with technical and non-technical audiences
• defining data retention policies
• monitoring performance and advising on any necessary infrastructure changes
• ready to start immediately
• openness to working every day between 7 a.m. and 3 p.m. CET
• 3+ years’ experience with Azure Data Factory and Databricks
• 5+ years’ experience in data engineering or backend/full-stack software development
• strong SQL skills
• proficiency in Python scripting
• experience with data transformation tools: Databricks and Spark
• experience structuring and modelling data in both relational and non-relational forms
• experience with CI/CD tooling
• working knowledge of Git
• good verbal and written communication skills in English
Apply Now