• At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in the commodities, energy, and maritime sectors.
• Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 500 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.
• Maritime domain awareness involves the effective fusion of the fragmented pieces of information that make up the complex maritime landscape. Vessel tracking data and trading history, satellite imagery, sanction lists, and OSINT are some of the data sources which, when combined efficiently, provide an understanding of events that take place at ports and in open seas. Rapidly evolving geopolitical developments and tensions greatly affect trading, disrupting supply chains and altering the patterns of shipping activity.
• As a senior data scientist on Kpler's risk and compliance team, you will develop algorithms that detect deviations from normal maritime activity and discriminate trivial abnormalities from those that imply deceptive shipping practices. This work requires dealing with high volumes of dispersed and often highly imbalanced data.
• You have at least 5 years of experience in a data science role, deploying models into production.
• You have proven experience delivering end-to-end ML solutions that produce business value.
• You are proficient in Python.
• You have expert knowledge of at least one cloud computing platform (preferably AWS).
• You are fluent in English.
• You have expertise in applications focused on geospatial data and mobility analytics (highly desirable).
• You have proven experience with big data technologies, specifically Spark and Kafka.
• You have experience working with state-of-the-art ML pipeline technologies (such as MLflow, SageMaker...) or building an ML pipeline yourself (Docker, Kubernetes, Paperspace, Airflow...).
• You have a PhD in a quantitative field (computer science, mathematics, physics, engineering...).
• You are familiar with the shipping industry and commodity trading.
• You are comfortable with software engineering best practices.
• You value code simplicity, performance, and attention to detail.
• You have experience working in an international environment.