October 11
•Lead data engineering efforts by collaborating with cross-functional teams to harness data.
•Architect and oversee the development of scalable, dependable, and high-performance integrated data platforms.
•Create and maintain advanced ETL pipelines using Python.
•Implement real-time streaming data pipelines for application integration.
•Leverage advanced SQL expertise to optimize queries and data retrieval.
•Harness AWS services for efficient data storage and processing.
•Manage cloud storage systems to optimize data asset storage efficiency.
•Implement Docker containerization and orchestration to support scalable ETL pipelines.
•Apply advanced statistical methods for intricate data analysis and modeling.
•Utilize the Snowflake data warehouse for loading data and optimizing query performance.
•Use strong communication skills to facilitate data exchange with external vendors.
•Ensure all data deliverables adhere to regulatory and security requirements.
•Establish CI/CD pipelines for automated deployment and rigorous testing.
•Leverage Spark for distributed data processing tasks.
•Utilize data modeling techniques to guarantee data accuracy and integrity.
•Enhance data pipeline quality, security, efficiency, and scalability.
•Bachelor’s Degree or U.S. equivalent in Computer Science, Computer Engineering, Information Technology, Telecommunications Engineering, Business Analytics, or a related field, plus 5 years of professional experience as a Data Engineer, Information Architect, or in any occupation/position/job title involving data engineering, including constructing ETL pipelines in a production environment.
•2 years of professional experience processing data with a massively parallel technology (including Snowflake, Redshift, and Spark- or Hadoop-based big data solutions).
•2 years of professional experience prototyping Python-based ETL solutions and translating complex requirements into actionable tools.
•2 years of professional experience utilizing SQL, including advanced query optimization.
•2 years of professional experience with data modeling and relational database technologies utilizing data warehousing systems, including Snowflake and Redshift.
•2 years of professional experience utilizing cloud platforms, including AWS or GCP.
•2 years of professional experience utilizing Spark for distributed data processing.
•All roles at SmartAsset are currently and will remain remote; flexibility to work from anywhere in the US.
•Medical, Dental, Vision - multiple packages available based on your individualized needs
•Life/AD&D Insurance - basic coverage 100% company paid, additional supplemental available
•Supplemental Short-term and Long-term Disability
•FSA: Medical and Dependent Care
•401K
•Equity packages for each role
•Time Off: Vacation, Sick and Parental Leave
•EAP (Employee Assistance Program)
•Financial Literacy Mentoring Program
•Pet Insurance
•Home Office Stipend
October 11
501 - 1000
Lead the Data Engineering function at Axios, delivering trustworthy news content.
🇺🇸 United States – Remote
💵 $177.2k - $236.2k / year
⏰ Full Time
🔴 Lead
🚰 Data Engineer
🗽 H1B Visa Sponsor
October 4
51 - 200
Lead data architecture and solutions using AWS at Solutions by Text.
🇺🇸 United States – Remote
💰 Private Equity Round on 2021-11
⏰ Full Time
🔴 Lead
🚰 Data Engineer
🗽 H1B Visa Sponsor
October 3
501 - 1000
Data Warehouse Architect at ProAssurance defining data solutions and ETL processes.
October 3
501 - 1000
Director of Data Engineering to enhance internal data platform at LiveRamp.
🇺🇸 United States – Remote
💵 $183k - $270k / year
💰 Series C on 2013-03
⏰ Full Time
🔴 Lead
🚰 Data Engineer