September 25
• Develop new data pipelines and maintain our data ecosystem, focusing on fault-tolerant ingestion, storage and data lifecycle management, and the computation of metrics, reports, and derived information
• Communicate efficiently with your teammates to develop software and creative solutions for our customers' needs
• Write high-quality, reusable code, test it, and bring it to production
• Apply best practices according to industry standards while promoting a culture of agility and excellence
• Several years of experience developing in a modern programming language, preferably Java or Python
• Significant experience developing and maintaining distributed big data systems, including production-quality deployment and monitoring
• Exposure to high-performance data pipelines, preferably with Apache Kafka and Spark
• Experience with scheduling systems such as Airflow, and with SQL/NoSQL databases
• Experience with cloud data platforms is a plus
• Exposure to Docker and/or Kubernetes is preferred
• Good command of spoken and written English
• University degree in computer science or equivalent professional experience
• Challenging projects in a highly professional, yet collaborative and supportive environment
• Working in small, highly skilled teams
• Opportunities for continuous professional development
• Competitive compensation depending on experience and skills
• Hybrid and remote work options
• Service Recognition Awards, our way of celebrating and rewarding long-term contributions
• Awesome Referral Bonus Program, because great people know great people
• Team gatherings and team-building activities to foster connections, a sense of belonging, and camaraderie
Apply Now