Staffing Services • Software Development • Web design & development • Mobile Apps Development • SEO / Internet Marketing
51 - 200
November 10
Airflow
Amazon Redshift
Apache
AWS
Cassandra
ETL
Hadoop
HDFS
Java
NoSQL
Oracle
Postgres
PySpark
Python
Scala
Spark
SQL
Go
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
• Build analytics tools that use the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics for Information Asset and its customers.
• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for the analytics team that help build and optimize the offering into an industry-leading solution.
• Work with data and analytics experts to strive for greater functionality in the data systems.
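The extraction, transformation, and loading work described above can be sketched in miniature. This is an illustrative toy only, using Python's built-in sqlite3 module rather than the AWS/Spark stack named in the listing; the table names, column names, and filtering rules are all invented for the example.

```python
import sqlite3

def run_pipeline(rows):
    """Toy ETL: load raw rows, clean them with SQL, and
    materialize a per-user aggregate table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
    # Extract: ingest the raw records.
    con.executemany("INSERT INTO raw_events VALUES (?, ?)", rows)
    # Transform + load: drop bad records, aggregate per user.
    con.execute("""
        CREATE TABLE user_totals AS
        SELECT user_id, SUM(amount) AS total
        FROM raw_events
        WHERE amount IS NOT NULL AND amount >= 0
        GROUP BY user_id
    """)
    return dict(con.execute("SELECT user_id, total FROM user_totals"))

print(run_pipeline([("a", 10.0), ("a", 5.0), ("b", -1.0), ("b", 2.0)]))
# {'a': 15.0, 'b': 2.0}
```

In a production pipeline the same extract/clean/aggregate shape would be expressed against a warehouse such as Redshift and orchestrated by a scheduler such as Airflow, per the tools listed above.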
• Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases.
• Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills for working with unstructured datasets.
• Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
• Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
• Strong project management and organizational skills.
• Excellent written and verbal communication skills and a high level of customer orientation.
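As one concrete illustration of the "advanced SQL / query authoring" qualification above, a window function computing a per-customer running total is a common interview-level exercise. This sketch again uses sqlite3 for self-containment; the orders table and its data are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, ts INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("a", 1, 10.0), ("a", 2, 20.0), ("b", 1, 5.0),
])
# Running total per customer, ordered by timestamp, via a window function.
rows = con.execute("""
    SELECT customer, ts, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY ts) AS running
    FROM orders
    ORDER BY customer, ts
""").fetchall()
print(rows)
# [('a', 1, 10.0, 10.0), ('a', 2, 20.0, 30.0), ('b', 1, 5.0, 5.0)]
```

The same PARTITION BY / ORDER BY pattern carries over directly to the warehouses named in the tag list (Redshift, Postgres, Spark SQL).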
November 8
Remote data engineering role focusing on SQL and data pipelines using Snowflake.
October 13, 2023
51 - 200