November 5
• We are seeking a highly motivated and experienced Data Engineer to join our Data Engineering team.
• In this role, you will be at the forefront of designing and developing scalable, robust data architectures and solutions utilizing the latest technologies from Google Cloud Platform (GCP) and AWS.
• You will collaborate closely with cross-functional teams to understand their data needs and will focus on building, optimizing, and scaling data platform solutions that drive insights for marketing strategies, personalization efforts, and operational efficiencies.
• As a senior member of the team, you will work closely with data scientists, machine learning engineers, data analysts, and cross-functional teams, playing a critical role in shaping the company's data architecture.
• Design, develop, and maintain scalable, high-performance data infrastructure to support the collection, storage, and processing of large datasets in real-time and batch modes.
• Build reliable, reusable services and APIs that allow teams to interact with the data platform for ingestion, transformation, and querying of data.
• Develop internal tools and frameworks to automate and streamline data engineering processes.
• Collaborate with senior management, product management, and other engineers in the development of data products.
• Develop tools to monitor, debug, and analyze data pipelines.
• Design and implement data schemas and models that can scale.
• Mentor team members to build the company's overall expertise.
• Work to make our company an innovator in the space by bringing passion and new ideas to work every day.
• At least 5 years of proven experience as a Data Engineer developing platform-level capabilities for data-driven midsize to large corporations.
• Strong object-oriented programming skills in languages such as Python, Java, or Scala, with experience building large-scale, fault-tolerant systems.
• Experience with cloud platforms (GCP, AWS, Azure), with a strong preference for GCP.
• Experience with BigQuery or similar (Redshift, Snowflake, other MPP databases).
• Experience building data pipelines and ETL.
• Experience with the command line and version control software (git).
• Excellent communication and collaboration skills.
• Ability to work independently and quickly become productive after joining.
• Knowledge of distributed data processing frameworks such as Apache Kafka, Flink, Spark, or similar.
• Experience with dbt (Data Build Tool) and Looker.
• Experience with machine learning pipelines or MLOps.
October 30
Build and optimize Trustly's big data environment for payment solutions.
October 1
Data Engineer role at WEX, focusing on data infrastructure and transformation workflows.
September 17
Build and maintain data infrastructure for WEX's analytics needs.
September 17
Data Engineer role focusing on data pipeline development at WEX Inc.