Data Infrastructure Engineer (US)

March 6, 2023


Onehouse

A pre-built lakehouse foundation.

11 - 50 employees

Description

• As a foundational member of the Data Infrastructure team, you will productionize the next generation of our data tech stack by building the software and data features that actually process all of the data we ingest.
• Accelerate our open source <> enterprise flywheel by working on the guts of Apache Hudi's transactional engine and optimizing it for diverse Onehouse customer workloads.
• Act as an SME to deepen our teams' expertise on database internals, query engines, storage, and/or stream processing.
• Design new concurrency control and transactional capabilities that maximize throughput for competing writers.
• Design and implement new indexing schemes, specifically optimized for incremental data processing and analytical query performance.
• Design systems that help scale and streamline metadata and data access from different query/compute engines.
• Solve hard optimization problems to improve the efficiency (increase performance and lower cost) of distributed data processing algorithms over a Kubernetes cluster.
• Leverage data from existing systems to find inefficiencies, and quickly build and validate prototypes.
• Collaborate with other engineers to implement, deploy, and safely roll out the optimized solutions in production.

Requirements

• Strong object-oriented design and coding skills (Java and/or C/C++, preferably on a UNIX or Linux platform).
• Experience with the inner workings of distributed (multi-tiered) systems, algorithms, and relational databases.
• You embrace ambiguous/undefined problems with an ability to think abstractly and articulate technical challenges and solutions.
• An ability to prioritize across feature development and tech debt with urgency and speed.
• An ability to solve complex programming/optimization problems.
• An ability to quickly prototype optimization solutions and analyze large/complex data.
• Robust and clear communication skills.

Nice to haves (but not required):

• Experience working with database systems, query engines, or Spark codebases.
• Experience in optimization mathematics (linear programming, nonlinear optimization).
• Existing publications on optimizing large-scale data systems in top-tier distributed systems conferences.
• PhD with 2+ years of industry experience solving and delivering high-impact optimization projects.

Apply Now