June 3
• Design and implement data pipelines for the collection, storage, and transformation of data from a variety of sources
• Develop and maintain data models and schemas to support data analysis and reporting
• Write and maintain ETL jobs to extract, transform, and load data into our data warehouse
• Collaborate with ML engineers, data analysts, and other stakeholders to understand data requirements and develop solutions to support data-driven decision-making
• Monitor and optimize data pipelines and processes to ensure data quality and performance
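Purely as an illustration of the ETL work described above, a minimal extract-transform-load job might look like the following sketch. It uses Python's built-in sqlite3 as a stand-in for a cloud warehouse such as Redshift or BigQuery; the source rows, table, and column names are hypothetical.

```python
import sqlite3

def extract():
    # Stand-in for pulling rows from a source system or message queue.
    return [
        {"user_id": 1, "amount": "19.99"},
        {"user_id": 2, "amount": "5.00"},
    ]

def transform(rows):
    # Cast string amounts to integer cents so downstream sums avoid float error.
    return [(r["user_id"], int(round(float(r["amount"]) * 100))) for r in rows]

def load(conn, rows):
    # Idempotent table creation, then a bulk insert into the "warehouse".
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (user_id INTEGER, amount_cents INTEGER)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract()))
    return conn.execute("SELECT SUM(amount_cents) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
total = run_pipeline(conn)
print(total)  # 2499
```

In a production pipeline the same three stages would typically be orchestrated by a scheduler and pointed at real sources and a managed warehouse, but the shape of the job is the same.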
• Bachelor's or Master's degree in Computer Science, Data Science, or a related field
• 3+ years of experience as a Data Engineer or in a similar role
• Strong programming skills in Python, Java, or a similar language
• Experience with SQL and data modeling concepts
• Experience with cloud-based data warehousing solutions such as Redshift, BigQuery, or similar
• Experience with ETL tools such as Spark, Flink, Databricks, Snowflake, etc.
• Experience with messaging systems such as RabbitMQ, Kafka, etc.
• Knowledge of the underlying cloud infrastructure and how the various data pipeline components fit together
• Excellent problem-solving and communication skills
• Based in a timezone between GMT-3 and GMT+2
• Nice work environment
• Competitive salary
• Health insurance
• Stock options
• Annual company trip in a secret location
• and more