Enrollment & Eligibility • ACA Tracking & Reporting • Billing & Financial Reporting • Total Population Health Management • COBRA Services
1001 - 5000
💰 Private Equity Round on 2021-12
October 20
🇺🇸 United States – Remote
💵 $100k - $130k / year
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🗽 H1B Visa Sponsor
Amazon Redshift
AWS
Cloud
DynamoDB
ETL
Hadoop
MapReduce
NoSQL
Oracle
Postgres
Python
Shell Scripting
Spark
SQL
Unix
• This role will serve on the Innovation Works team. The Data Engineer will be responsible for architecting, developing, implementing, and operating stable, scalable, low-cost solutions to source data from production systems into the data lake (AWS) and data warehouse (Redshift), and into end-user-facing applications (AWS QuickSight).
• Build fault-tolerant cloud solutions for data engineering.
• Aggregate, organize, and translate large amounts of data to meet business requirements.
• Develop and optimize data pipeline architecture, as well as optimize data flow.
• Design and build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources such as Oracle, Amazon Relational Database Service (RDS), SQL, and AWS 'big data' technologies.
• Implement data storage solutions in AWS using services such as Amazon S3, Redshift, RDS, and DynamoDB; ensure systems are scalable and optimized for performance.
• Partner with software engineers, BI team members, and data scientists to architect and build data-driven solutions, assist with data-related technical issues, and support their data infrastructure needs.
• Maintain and enhance existing data loads to the data warehouse and data lake.
• Maintain streaming data from production systems.
• Peer-review code.
• Research opportunities for data acquisition and new uses for existing data; develop data set processes for data modeling, mining, and production.
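As a rough illustration of the aggregate-and-load responsibilities above, here is a minimal Python sketch of the transform step (all record and plan names are hypothetical; at real scale this work would run in Spark, Glue, or a similar engine rather than in-process):

```python
from collections import defaultdict

def aggregate_enrollments(records):
    """Group raw enrollment records by plan and sum member counts.

    `records` is an iterable of (plan_id, member_count) tuples pulled
    from a source system; returns sorted, load-ready rows.
    """
    totals = defaultdict(int)
    for plan_id, member_count in records:
        totals[plan_id] += member_count
    # Sort for deterministic output before loading downstream.
    return sorted(totals.items())

# Example: raw extracts combined from two source tables.
raw = [("PPO-1", 120), ("HMO-2", 80), ("PPO-1", 45)]
print(aggregate_enrollments(raw))  # [('HMO-2', 80), ('PPO-1', 165)]
```

The same shape — extract tuples, reduce by key, emit ordered rows — carries over whether the sink is Redshift, S3, or a reporting application.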
• Degree in Computer Engineering/Science or a related field, with 5+ years of professional experience in database/data lake development.
• Experience with multiple data sources such as Oracle, SQL, and RDS, as well as data lakes and NoSQL solutions.
• Experience building and optimizing 'big data' pipelines, architectures, and data sets.
• 3+ years of experience with AWS big data cloud services such as Kinesis, Redshift, EMR, Athena, and Glue, deployed through CloudFormation.
• Proficiency with ETL and data warehouse/lake processes.
• Strong experience with Python or Unix shell scripting (preferably both); experience with boto3 is a bonus.
• Experience architecting cloud solutions.
• Experience leading multiple sprint projects and epics.
• Excellent verbal and written communication skills.
• Strong troubleshooting and problem-solving skills.
• Ability to thrive in a fast-paced, innovative environment.
• Project management and organizational skills.
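To ground the S3-to-Redshift loading pattern the requirements imply, here is a small Python sketch that builds a Redshift COPY statement (table, bucket, key, and IAM role ARN are all illustrative; in practice the statement would be issued through a client such as boto3's redshift-data API):

```python
def build_copy_statement(table, bucket, key, iam_role):
    """Build a Redshift COPY command to load gzipped CSV data from S3.

    Identifiers are placeholders; a real pipeline would parameterize
    them from job config rather than hard-code them.
    """
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP;"
    )

sql = build_copy_statement(
    table="staging.enrollments",
    bucket="example-data-lake",
    key="enrollments/2024/10/part-0000.csv.gz",
    iam_role="arn:aws:iam::123456789012:role/redshift-load",
)
print(sql)
```

COPY from S3 with an attached IAM role is Redshift's standard bulk-load path, which is why the posting pairs 'big data' pipeline experience with S3/Redshift specifics.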
• If this position is full-time or part-time benefit eligible, you will receive a comprehensive benefits package which can be viewed here: https://businessolver.foleon.com/bsc/job-board-businessolver-virtual-benefits-guide/
Apply Now