November 12
• Serve as the subject matter expert in technologies used for our data strategy in the cloud.
• Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.
• Solve complex data problems to deliver insights that help the organization achieve its business goals.
• Create data products for analytics and data science team members to improve their productivity.
• Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices.
• Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
• Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve the team's productivity.
• Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
• Bachelor’s degree in a quantitative discipline such as Computer Science, MIS, or a related field, with 10 years of related experience. Related experience includes but is not limited to:
• Development and tuning on any RDBMS technology (Oracle, SQL Server, Teradata, SingleStore).
• Strong knowledge of using Spark for fast imagery and big data processing.
• Expert knowledge in managing and tuning Data Lake and Lakehouse storage objects with accompanying data engineering pipelines.
• Expert knowledge in Python and SQL development and troubleshooting.
• Willingness to undertake assignments involving unfamiliar subjects, with a demonstrated aptitude to learn quickly.
• Excellence in communicating complex ideas and concepts, both verbally and in writing.
• Experience in Agile/Scrum development with DevOps methodologies geared toward rapid prototyping and piloting of cloud solutions.
• Prefer 8 or more years of experience in the development and tuning of ETL processes using any type of ETL tool or architecture.
• Experience in creating, analyzing, and documenting ER and Dimensional models.
• Unix/Linux experience is preferred. Data Engineers maintain Unix files, commands, and scripts, which requires an understanding of Unix operating system concepts.
• Prefer 4-5 years of experience using the Kimball methodology to design and develop Enterprise Data Warehouses and Data Marts.
• Prefer 3 or more years of experience with Spark in a Databricks environment.
• Prefer 3 or more years of experience with imagery (or other big data) processing and/or data preparation with Spark.
• Prefer 6 or more years of experience using Python (preferably in a Data Engineering or Data Science role).
• Understanding of data replication tools (such as Informatica Data Replication or Oracle GoldenGate).
• Experience in designing and/or developing analytics applications using tools such as Power BI, QlikView, SAS, RapidMiner, or other comparable products.
• Willingness to travel throughout organization and service territory and work extended hours, as required.
• Competitive pay plus incentive compensation
• Company-sponsored pension plan
• 401(k) savings plan with matching employer contribution
• Choice of medical, prescription drug, dental, vision, and life insurance programs
• Skills development training with tuition reimbursement
• Commitment to workforce diversity
November 12
501 - 1000
Design and build scalable data pipelines at Sand Technologies.
November 12
501 - 1000
Design and oversee data architecture for healthcare innovations at Abarca.
November 12
501 - 1000
Data Architect role assisting customers with Google Cloud migration strategies at 66degrees.
November 12
5001 - 10000
Consultant role to solve data problems using code at AAA.
🇺🇸 United States – Remote
💵 $104.4k - $139.2k / year
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🗽 H1B Visa Sponsor
November 11
5001 - 10000
Data Engineer role transforming data into insights for ICF.
🇺🇸 United States – Remote
💵 $84.5k - $143.7k / year
💰 Grant on 2023-02
⏰ Full Time
🟠 Senior
🚰 Data Engineer
🗽 H1B Visa Sponsor