Healthcare software solutions • Online Interview software • Voice Analytics • Outsourced product development • Agile methodologies
11 - 50
5 days ago
• Responsible for expanding and optimizing our data and data pipeline architecture.
• Optimize data flow and collection for cross-functional teams.
• Support software developers, database architects, data analysts, and data scientists.
• Ensure optimal data delivery architecture is consistent throughout ongoing projects.
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements.
• Build the infrastructure required for optimal extraction, transformation, and loading of data using SQL and AWS.
• Build analytics tools that use the data pipeline to provide actionable insights.
• Work with stakeholders including the Executive, Product, Data, and Design teams.
• Keep data separated and secure across national boundaries.
• Create data tools for analytics and data science team members.
• Work with data and analytics experts to build greater functionality into our data systems.
• 5+ years of experience in a Data Engineer role.
• Experience with relational SQL and NoSQL databases, including Postgres, Oracle, and Cassandra.
• Experience with data pipeline and workflow management tools.
• Experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
• Experience with stream-processing systems: Storm, Spark Streaming, Amazon Kinesis, etc.
• Experience with object-oriented and functional scripting languages: Python, Java, Node.js.
• Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills for working with both structured and unstructured datasets.
• Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
• Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores.
• Strong project management and organizational skills.
• Experience supporting and working with cross-functional teams in a dynamic environment.
5 days ago
1001 - 5000
Collect data on chemical substances from various regulatory and industry sources.
October 25
1001 - 5000
Data Engineer at TwiningsOvo, focusing on building data pipelines and data modeling.
October 24
1001 - 5000
Data Engineer at Duck Creek to migrate an Oracle data warehouse to Snowflake.
🇮🇳 India – Remote
💰 $230M Private Equity Round on 2020-06
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer