Airflow
Apache
AWS
Azure
Cloud
ETL
Google Cloud Platform
Hadoop
Kafka
NoSQL
Oracle
Postgres
Python
Spark
SQL
Tableau
Tensorflow
• The Data Engineer will play a pivotal role in building and operationalizing the data required for the enterprise's data management, analytics, and business intelligence initiatives, following industry-standard practices and tools.
• The bulk of the Data Engineer's work will be building, managing, and optimizing data integration pipelines, then moving those pipelines into production for key analytics consumers (business domain owners, business and data analysts, product owners, and decision makers at the operational, tactical, and strategic levels), or for any group that needs curated insights for data-informed problem solving across the enterprise.
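Purely as an illustration of the pipeline-building and productionization work described above, the sketch below shows a minimal Airflow DAG with extract, transform, and load steps. The DAG id, schedule, and the extract/transform/load tasks are hypothetical placeholders, not part of the role description, and assume Airflow 2.4+ with the TaskFlow API.

# Illustrative only: a minimal sketch of an extract-transform-load DAG.
# The dag_id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(dag_id="example_etl_pipeline", schedule="@daily",
     start_date=datetime(2024, 1, 1), catchup=False)
def example_etl_pipeline():
    @task
    def extract() -> list[dict]:
        # A real pipeline would read from a source system (database, API, Kafka topic).
        return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop incomplete records; real transformations would be more involved.
        return [r for r in rows if r["amount"] is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a write to a warehouse table (e.g. via a database hook).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


example_etl_pipeline()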
• A master's degree in computer science, data science, software engineering, or a related field.
• At least three years of experience in BI development, data analytics, data engineering, software engineering, or a similar role.
• Expertise in data modelling, ETL development, data architecture, and master data management.
• Strong experience with data management architectures such as data warehouse, data lake, and lakehouse, with data fabric and data mesh concepts, and with the supporting processes such as data integration, MPP engines, governance, and metadata management.
• Intermediate experience with Apache technologies such as Spark, Kafka, and Airflow for building scalable and efficient data pipelines.
• Strong experience in designing, building, and deploying data solutions that capture, explore, transform, and utilize data to create data products and support data-informed initiatives.
• Proficiency in ETL/ELT, data replication/CDC, message-oriented data movement, API design and access, and emerging data ingestion and integration technologies such as stream data integration and data virtualization.
• Basic knowledge of data science languages and tools such as R, Python, TensorFlow, Databricks, Dataiku, SAS, or others.
• Proficiency in the design and implementation of modern data architectures and concepts, including cloud services (e.g. AWS, OCI, Azure, GCP) and modern data warehouse tools (Snowflake, Databricks, etc.).
• Strong experience with database technologies such as SQL, NoSQL, PostgreSQL, Oracle, Hadoop, Teradata, etc.
• Intermediate experience with popular data discovery, analytics, and BI tools such as Power BI, Tableau, Qlik Sense, Looker, ThoughtSpot, MicroStrategy, or others for semantic-layer-based data discovery is an advantage.
• Expert problem-solving skills, including debugging skills that allow tracing the source of issues in unfamiliar code or systems, and the ability to recognize and solve repetitive problems.
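As a purely illustrative companion to the Spark and ETL/ELT requirements above, the sketch below shows a minimal PySpark batch job. The input path, column names, and output location are hypothetical placeholders and assume a Spark 3.x environment; it is not a description of any existing Libertex pipeline.

# Illustrative only: a minimal PySpark extract/transform/load sketch.
# Paths, column names, and the output destination are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_trades_etl").getOrCreate()

# Extract: read raw source data (the path is a placeholder).
raw = spark.read.json("s3://example-bucket/raw/trades/")

# Transform: basic cleaning and a daily aggregate per instrument.
daily = (
    raw.dropna(subset=["trade_id", "amount"])
       .withColumn("trade_date", F.to_date("executed_at"))
       .groupBy("trade_date", "instrument")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("trade_id").alias("trade_count"))
)

# Load: write to a curated zone in Parquet (the destination is a placeholder).
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/daily_trades/"
)

spark.stop()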
• Training and career development opportunities internally
• Strong emphasis on personal and professional growth
• Friendly, supportive working environment
• Opportunity to work with colleagues based all over the world, with English as the company language
Design and implement ETL processes for the Libertex trading platform.