Software Development • Agile Development • DevOps • Scrum • Mobile Applications
1001 - 5000
October 30
Apache
AWS
Azure
Cloud
Distributed Systems
Docker
ETL
Google Cloud Platform
Hadoop
Informatica
Kubernetes
NoSQL
Numpy
Pandas
PySpark
Python
Spark
SQL
Unity
Go
• Responsible for at-scale infrastructure design, build, and deployment with a focus on distributed systems
• Building and maintaining architecture patterns for data processing, workflow definitions, and system integrations using Big Data and Cloud technologies (see the sketch after this list)
• Evaluating and translating technical designs into workable solutions/code that meet industry standards
• Driving the creation of reusable artifacts
• Establishing scalable, efficient, automated processes for data analysis
• Collaborating with analysts and data scientists to understand the impact on downstream data models
• Writing efficient, well-organized software for iterative product releases
• Contributing to and promoting good software engineering practices
• Communicating clearly to technical and non-technical audiences
• Defining data retention policies, monitoring performance, and advising on necessary infrastructure changes
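As a purely illustrative sketch of the kind of data-processing pattern described in these responsibilities, the snippet below shows a minimal PySpark batch job that aggregates raw events into a curated, partitioned output. The bucket paths, column names, and job name are hypothetical placeholders, not part of the listing.

# Minimal sketch, assuming a Spark environment with access to GCS;
# all paths and columns below are invented for the example.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Read raw events landed by an upstream integration (hypothetical path).
events = spark.read.parquet("gs://example-bucket/raw/events/")

# Derive a date column and aggregate into a daily summary for downstream models.
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned output so downstream reads stay efficient.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/daily_event_counts/"
)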
• 3+ years’ experience with GCP (BigQuery, Dataflow, Pub/Sub, Bigtable or another NoSQL database, Dataproc, Storage, Kubernetes Engine)
• 5+ years’ experience in data engineering or backend/full-stack software development
• Strong SQL skills
• Proficiency in Python scripting
• Experience with data transformation tools such as Databricks and Spark
• Experience with data manipulation libraries such as Pandas, NumPy, and PySpark
• Experience structuring and modelling data in both relational and non-relational forms
• Ability to evaluate and propose relational/non-relational approaches, normalization/denormalization, and data warehousing concepts such as star and snowflake schemas (illustrated after this list)
• Experience designing for transactional and analytical operations
• Experience with CI/CD tooling (GitHub, Azure DevOps, Harness, etc.)
• Good verbal and written communication skills in English
• Working from the European Union region and a valid work permit are required
• Nice to have: Apache Hadoop; experience with data modelling tools, preferably DBT; Enterprise Data Warehouse solutions, preferably Snowflake; familiarity with ETL tools (such as Informatica, Talend, Datastage, Stitch, Fivetran); experience in containerization and orchestration (Docker, Kubernetes); cloud certification (Azure, AWS, GCP)
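For the star-schema and Pandas skills named in the requirements, here is a small, purely illustrative sketch of joining a fact table to its dimension tables; the tables and columns are invented for the example and are not taken from the listing.

# Illustrative star-schema join in Pandas; all data below is made up.
import pandas as pd

# Tiny fact table keyed to two dimension tables (star-schema layout).
fact_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "product_key": [1, 2, 1],
    "units_sold": [5, 3, 7],
})
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
})
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "calendar_date": pd.to_datetime(["2024-01-01", "2024-01-02"]),
})

# The typical analytical query against a star schema: join the fact table
# to its dimensions, then aggregate.
report = (
    fact_sales
    .merge(dim_product, on="product_key")
    .merge(dim_date, on="date_key")
    .groupby("product_name")["units_sold"]
    .sum()
)
print(report)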
October 24
51 - 200
Data Engineer at Tecknoworks, optimizing client productivity through data management.
October 24
51 - 200
Data Engineer at Tecknoworks, optimizing data pipelines for client solutions.
October 20
1001 - 5000
Ness Digital Engineering seeks a Big Data Engineer to develop tech solutions in Romania.
October 17
51 - 200
Transform raw data into insights as a Data Engineer at Sales Consulting.