November 12
• Serve as the subject matter expert in technologies used for our data strategy in the cloud.
• Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.
• Solve complex data problems to deliver insights that help the organization achieve its business goals.
• Create data products for analytics and data science team members to improve their productivity.
• Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices.
• Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
• Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve the team's productivity.
• Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
• Bachelor’s degree in a quantitative discipline such as Computer Science, MIS, or a related field, with 10 years of related experience.
• Related experience includes, but is not limited to, development and tuning on any RDBMS technology (Oracle, SQL Server, Teradata, SingleStore).
• Strong knowledge of using Spark for fast imagery and big data processing.
• Expert knowledge in managing and tuning Data Lake and Lakehouse storage objects with accompanying data engineering pipelines.
• Expert knowledge in Python and SQL development and troubleshooting.
• Willingness to undertake assignments involving unfamiliar subjects and demonstrated aptitude to learn quickly.
• Excel at communicating complex ideas and concepts both verbally and in writing.
• Experience in Agile/Scrum development with DevOps methodologies geared toward rapid prototyping and piloting of cloud solutions.
• Prefer 8 or more years of knowledge and experience in development and tuning of ETL processes using any type of ETL tool or architecture.
• Experience in creating, analyzing, and documenting ER and dimensional models.
• Unix/Linux experience is preferred. Data Engineers maintain Unix files, commands, and scripts, which requires an understanding of Unix operating system concepts.
• Prefer 4-5 years of experience using the Kimball methodology to design and develop Enterprise Data Warehouses and Data Marts.
• Prefer 3 or more years of experience with Spark in a Databricks environment.
• Prefer 3 or more years of experience with imagery (or other big data) processing and/or data preparation with Spark.
• Prefer 6 or more years of experience using Python (preferably in a Data Engineering or Data Science role).
• Understanding of data replication tools (such as Informatica Data Replication or Oracle GoldenGate).
• Experience in designing and/or developing analytics applications using tools such as Power BI, QlikView, SAS, RapidMiner, or other comparable products.
• Willingness to travel throughout organization and service territory and work extended hours, as required.
• Competitive pay plus incentive compensation
• Company-sponsored pension plan
• 401(k) savings plan with matching employer contribution
• Choice of medical, prescription drug, dental, vision, and life insurance programs
• Skills development training with tuition reimbursement
• Commitment to workforce diversity
November 12
Design and build scalable data pipelines at Sand Technologies.
November 10
Lead a data engineering team for AI solutions at Ex Parte.
November 10
Ex Parte seeks a senior data engineer for its AI self-service portal.
November 10
Develop data solutions for a leading retail company's analytics capabilities.
November 9
Design and build scalable data pipelines for Fetch’s rewards platform.
🇺🇸 United States – Remote
💰 Debt Financing on 2022-04
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🦅 H1B Visa Sponsor