Concerts • Festivals • Ticketing • Sponsorship • Artist Management
10,000+
💰 Post-IPO Debt on 2023-01
September 30
• Lead a team of data engineers and collaborate closely with cross-functional teams to ensure the successful delivery of scalable and efficient data integration services
• Lead efforts to optimize SQL queries, data pipelines, and overall system performance, primarily on Databricks and other data warehouses such as Snowflake and SQL Server (see the sketch below)
• Drive initiatives to lower overall data engineering costs on the Databricks platform through optimized resource utilization, efficient data processing, and strategic resource allocation
• Manage cross-team dependencies and work with tech leads on solutions
• Encourage a culture of team-driven decision-making, ownership, and accountability
• Establish and promote best practices, standards, and documentation for data engineering to ensure consistency, reliability, and maintainability of data pipelines
• Manage and mentor a team of data engineers and managers to foster a culture of collaboration, innovation, and continuous learning
• Set performance expectations, conduct regular performance evaluations, and provide feedback to team members
• Identify skills gaps and training needs within the team and develop appropriate development plans
• Foster a diverse and inclusive work environment, ensuring equal opportunities for all team members
• Oversee the design, development, and maintenance of scalable data engineering infrastructure, ensuring optimal performance and reliability
• Drive the implementation of efficient data processing pipelines and workflows to support analytics, reporting, and machine learning initiatives
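As a rough illustration of the pipeline-optimization work described above, the following is a minimal PySpark sketch of one common pattern: prune columns and partitions early, then broadcast the small dimension table before joining. The table names, columns, and partition filter are hypothetical placeholders, not details taken from this role or its codebase.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gross_by_region").getOrCreate()

# Hypothetical Delta tables; select only the needed columns and filter on the
# partition column early so Spark can prune files before the shuffle.
sales = (
    spark.read.table("events.ticket_sales")
    .where(F.col("sale_date") >= "2023-01-01")
    .select("event_id", "venue_id", "gross_amount")
)
venues = spark.read.table("events.venues").select("venue_id", "region")

# Broadcasting the small dimension table avoids a full shuffle join.
gross_by_region = (
    sales.join(F.broadcast(venues), "venue_id")
    .groupBy("region")
    .agg(F.sum("gross_amount").alias("total_gross"))
)

gross_by_region.write.mode("overwrite").saveAsTable("reporting.gross_by_region")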
• Proven experience (10+ years) in data engineering, with a strong emphasis on complex data ingestion and performance optimization
• Strong experience with Python, cloud databases, and ETL is required
• Expert in developing scalable and efficient data pipelines
• Strong proficiency in SQL optimization, performance tuning, and query execution on distributed systems
• Extensive hands-on experience with the Databricks platform
• Experience with pySpark, Snowflake, SQL Server, and Redshift is preferred
• Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka
• Knowledge of software engineering best practices, version control systems (e.g., Git), and CI/CD pipelines
• Strong leadership and management skills, with experience leading and developing high-performing teams
• Deep understanding of data engineering principles, strong managerial skills, and the ability to drive innovation and operational excellence
• Deep technical experience across multiple technologies, governance frameworks, data security, and data platforms
• Strong track record of partnering with product stakeholders to refine the technical data engineering roadmap
• Excellent oral and written communication and presentation skills
• Demonstrated experience working with cross-functional teams in a geographically and culturally diverse environment
• Exceptional strategic thinking, problem-solving, and decision-making skills
• Excellent collaboration and partnering skills
• Excellent people management skills and the ability to influence
• Ability to manage multiple priorities while remaining results-oriented and delivery-focused
• Experience working in Agile development environments and using tools such as Jira or Confluence
• Medical, Vision and Dental benefits for you and your family, including Flexible Spending Accounts (FSA) and Health Savings Accounts (HSAs)
• Generous paid time off policy, including paid holidays, sick time, and paid days off for your birthday
• 401(k) program with company match
• Stock Program
• New parent programs and support, including caregiver leave, childcare cash, and infertility support
• Tuition reimbursement, student loan repayment, and internal growth and development programs and trainings
• Volunteer time off, crowdfunding network
September 29
11 - 50
Data Engineer at GoodParty.org, building ETL pipelines to empower democracy.
September 25
501 - 1000
Guide data decisions for SageSure's online applications and databases.