Senior Data Engineer

2 days ago

Apply Now

Description

About Sand Technologies

Sand Technologies is a global leader in digital transformation, empowering leading organisations and governments worldwide to achieve their digital aspirations. We offer a comprehensive suite of services, including enterprise AI solutions, data science, software engineering, and IoT, delivered from our centres in the Americas, Europe, and Africa. Our training programmes cultivate the next generation of agile digital leaders. We believe in harnessing technology to deliver real impact and value, helping organisations bridge the gap between their current reality and their digital future.

About the Role

As a Senior Data Engineer, your primary role is to design, build, and maintain scalable data pipelines and infrastructure that support data-intensive applications and analytics solutions. You will work closely with cross-functional teams and contribute to the strategic direction of our data initiatives. Responsibilities include leading the design of data pipelines, architecting data solutions, and ensuring data quality.

Requirements

• Proven experience as a Senior Data Engineer or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure and designing data architectures.
• Proven experience working with Big Data and the tools used to process it.
• Strong problem-solving and analytical skills, with the ability to diagnose and resolve complex data-related issues.
• Excellent understanding of data engineering principles and practices.
• Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
• Ability to adapt to new technologies, tools, and methodologies in a dynamic, fast-paced environment.
• Ability to write clean, scalable, robust code in Python or a similar programming language; a background in software engineering is a plus.
• Strong knowledge of data governance frameworks and best practices in data management.
• Understanding of machine learning workflows and how to support them with robust data pipelines.
• Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
• Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
• Experience with big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
• Experience with modern data architectures, such as the lakehouse.
• Experience with CI/CD pipelines, version control systems such as Git, and containerization (e.g., Docker).
• Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
• Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
• SQL (for database management and querying).
• Apache Spark (for distributed data processing).
• Apache Spark Streaming, Kafka, or similar (for real-time data streaming).
• Experience using data tools in at least one cloud service (AWS, Azure, or GCP), e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, or Dataproc.
