Data Engineer - Data Pipeline Architect

November 6


Unreal Staffing, Inc

Unreal Engine • Unreal Engine Developers • Game Development

2 - 10 employees

Description

Are you passionate about designing, building, and maintaining data pipelines that support robust data architectures and enable seamless data flow? Do you excel at creating scalable solutions that empower data-driven decision-making? If you're ready to develop and optimize data systems that drive impactful analytics, our client has the perfect role for you.

We're seeking a Data Engineer (aka The Data Pipeline Architect) to build and manage cloud-based data infrastructure that supports analytical needs and operational efficiency. Your role will be critical in ensuring data systems are optimized for performance, reliability, and scalability.

Key Responsibilities:

• Design and Implement Scalable Data Pipelines: Develop and maintain data pipelines that support data ingestion, transformation, and integration using cloud technologies.
• Manage and Optimize Data Storage Solutions: Architect and maintain data lakes and data warehouses on platforms such as BigQuery, Redshift, or Snowflake.
• Collaborate with Data Teams on Strategy: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and align data solutions with business goals.
• Ensure Data Quality and Reliability: Implement and manage processes for data validation, error handling, and consistency checks.
• Develop and Automate ETL Processes: Build ETL (Extract, Transform, Load) workflows to handle complex data transformations (see the sketch after this list).
• Monitor and Maintain Data Infrastructure: Use monitoring tools to track the performance and reliability of data systems.
• Optimize Data Processing and Resource Management: Implement strategies for efficient resource allocation and cost-effective data processing.
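By way of illustration, here is a minimal Python sketch of the kind of ETL workflow the responsibilities above describe. It is not the client's actual stack: the CSV source (events.csv), the column names (user_id, event_ts), and the Parquet destination are all hypothetical, and a production pipeline would load into a warehouse such as BigQuery, Redshift, or Snowflake through its own connector.

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read a raw batch of event data from a CSV source (hypothetical file).
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: normalize column names, enforce basic consistency checks,
    # and parse timestamps so downstream queries can rely on the schema.
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna(subset=["user_id", "event_ts"])  # simple validation rule
    df["event_ts"] = pd.to_datetime(df["event_ts"], utc=True)
    return df

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the cleaned batch to columnar storage, standing in for a
    # data-lake destination (requires pyarrow or fastparquet).
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(extract("events.csv")), "events_clean.parquet")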

Requirements

• Cloud Data Platform Expertise: Experience with cloud data platforms such as AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Azure Data Lake, Synapse).
• Programming and Scripting Knowledge: Proficiency in Python, Java, or Scala for building data pipelines and data processing tasks.
• ETL and Data Pipeline Management: Proven ability to develop, maintain, and optimize ETL processes that handle large volumes of data.
• SQL and Database Management: Strong ability to write complex SQL queries (an example follows this list) and to work with both relational and NoSQL databases.
• Problem-Solving and Critical Thinking: Excellent problem-solving skills with a proactive approach to identifying and resolving data-related challenges.
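To give a flavor of the "complex SQL" this role calls for, the sketch below runs a window-function query over a hypothetical events table. It uses Python's built-in sqlite3 module only so the snippet is self-contained (window functions require SQLite 3.25 or newer); the same query pattern applies in BigQuery, Redshift, or Snowflake.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, event_ts TEXT, amount REAL);
    INSERT INTO events VALUES
        (1, '2024-01-01', 10.0),
        (1, '2024-01-02', 15.0),
        (2, '2024-01-01', 7.5);
""")

# Running total of spend per user, ordered by event time.
query = """
    SELECT user_id,
           event_ts,
           SUM(amount) OVER (
               PARTITION BY user_id ORDER BY event_ts
           ) AS running_total
    FROM events
    ORDER BY user_id, event_ts;
"""
for row in conn.execute(query):
    print(row)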

Benefits

• Health and Wellness: Comprehensive medical, dental, and vision insurance plans with low co-pays and premiums.
• Paid Time Off: Competitive vacation, sick leave, and 20 paid holidays per year.
• Work-Life Balance: Flexible work schedules and telecommuting options.
• Professional Development: Opportunities for training, certification reimbursement, and career advancement programs.
• Wellness Programs: Access to wellness programs, including gym memberships, health screenings, and mental health resources.
• Life and Disability Insurance: Life insurance and short-term/long-term disability coverage.
• Employee Assistance Program (EAP): Confidential counseling and support services for personal and professional challenges.
• Tuition Reimbursement: Financial assistance for continuing education and professional development.
• Community Engagement: Opportunities to participate in community service and volunteer activities.
• Recognition Programs: Employee recognition programs to celebrate achievements and milestones.

Apply Now

