November 21, 2023
• Designing, developing, and managing end-to-end data pipelines
• Processing and transforming data using Spark (illustrated in the sketch after this list)
• Providing technical governance to enhance ways of working
• Championing DevOps and CI/CD methodologies to ensure agile collaboration and robust data solutions
• Engineering and orchestrating data models and pipelines
• Leading development activities using Python, PySpark and other technologies
• Writing high-quality code that contributes to a scalable and maintainable data platform
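For illustration only, a minimal sketch of the kind of PySpark batch transformation work described above. The SparkSession setup is standard; the table names (raw_orders, curated.orders) and columns (order_id, order_ts) are hypothetical placeholders, not taken from the posting.

    # Illustrative PySpark batch transformation; names are placeholders only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

    # Read a raw source table, apply light cleansing, and write a curated Delta table.
    raw = spark.read.table("raw_orders")

    curated = (
        raw
        .filter(F.col("order_id").isNotNull())            # drop malformed rows
        .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
        .dropDuplicates(["order_id"])                     # keep re-runs idempotent
    )

    (
        curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated.orders")
    )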
• Experience with Big Data technologies such as Spark and Kafka in a customer-facing post-sales, technical architecture or consulting role
• Experience working on Big Data architectures independently
• Comfortable writing code in Python
• Experience working across Azure, including Azure Data Factory, Azure Synapse, Azure Data Lake Storage, Delta Lake etc
• Experience with Purview, Unity Catalog etc
• Experience with streaming data in Kafka/Event Hubs/Stream Analytics etc (see the streaming sketch after this list)
• Experience working in the Databricks ecosystem
• Experience with MLOps
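Likewise, a minimal sketch of the Kafka/Event Hubs streaming pattern mentioned in the requirements, using Spark Structured Streaming with the built-in Kafka source (Event Hubs also exposes a Kafka-compatible endpoint). The broker address, topic, checkpoint path and target table are placeholders, not details from the posting.

    # Illustrative Structured Streaming read from Kafka into a Delta table;
    # broker, topic, checkpoint path and table name are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-stream").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load()
        .select(
            F.col("value").cast("string").alias("payload"),
            F.col("timestamp"),
        )
    )

    query = (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
        .outputMode("append")
        .toTable("bronze.events")                                 # placeholder table
    )

    query.awaitTermination()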