November 10
Responsibilities:
• Design and maintain scalable data pipelines with Spark Structured Streaming.
• Implement Lakehouse architecture with Apache Iceberg.
• Develop ETL processes for data transformation.
• Ensure data integrity, quality, and governance.
• Collaborate with stakeholders and IT teams for seamless solution integration.
• Optimize data processing workflows and performance.
Qualifications:
• 5+ years in data engineering.
• Expertise in Apache Spark and Spark Structured Streaming.
• Hands-on experience with Apache Iceberg or similar data lake table formats.
• Proficiency in Scala, Java, or Python.
• Knowledge of big data technologies (Hadoop, Hive, Presto).
• Experience with cloud platforms (AWS, Azure, GCP) and SQL.
• Strong problem-solving, communication, and collaboration skills.