November 23
• Analyze large and complex datasets to extract meaningful insights and drive decision-making processes.
• Design, develop, and maintain robust data pipelines for efficient data ingestion, transformation, and storage.
• Build, validate, and maintain data models to support machine learning and statistical analysis needs.
• Write efficient, scalable, and well-documented Python code to support data engineering and analysis tasks.
• Monitor, troubleshoot, and enhance the performance of data systems and pipelines.
• Work closely with data scientists, analysts, and other engineers to develop cohesive data solutions.
• Strong proficiency in Python and familiarity with data processing libraries (e.g., Pandas, NumPy, PySpark).
• Experience with SQL for data extraction and manipulation.
• Experience in designing, building, and managing data pipelines, ETL workflows, and data warehousing solutions.
• Ability to apply statistical methods for data analysis and familiarity with machine learning concepts.
• Proven ability to troubleshoot complex data issues and continuously improve workflows for efficiency and accuracy.
• Effective communication skills to convey data insights to technical and non-technical stakeholders alike.
• Bonus: Experience with cloud platforms (e.g., AWS, GCP), containerization (e.g., Docker), and orchestration tools (e.g., Airflow).
November 22
Join Sharesource as a Data Engineer, designing and implementing data pipelines for clients.
November 10
Support data engineering needs for analytics projects at Arch.
October 20
Data Engineer building scalable data pipelines at A-VNG.
October 1
Data Engineer transforming raw data for SAVii's wellness services.
July 15
Manage data migration for Xero's cloud-based practice and tax tools, supporting ABs & SMBs.