March 18
• Building and maintaining data ingestion and data streaming infrastructure
• Evolving and maintaining web crawling and scraping infrastructure
• Building ETL pipelines to connect to and load data from partner and public APIs and datasets (see the sketch below)
• Collaborating with global engineering teams
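For illustration, here is a minimal sketch of the kind of ETL pipeline this role involves: pull records from a public JSON API and land them as newline-delimited JSON for downstream loading. The endpoint URL and field names are hypothetical placeholders, not part of this posting.

```python
# Minimal extract-transform-load sketch; API_URL and the field names are hypothetical.
import json

import requests

API_URL = "https://api.example.com/v1/records"  # hypothetical public endpoint


def extract(url: str) -> list[dict]:
    """Fetch a page of records from the API as parsed JSON."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def transform(records: list[dict]) -> list[dict]:
    """Keep only the fields downstream consumers need."""
    return [{"id": r.get("id"), "name": r.get("name")} for r in records]


def load(records: list[dict], path: str) -> None:
    """Write records as newline-delimited JSON, ready for bulk ingestion."""
    with open(path, "w", encoding="utf-8") as fh:
        for record in records:
            fh.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    load(transform(extract(API_URL)), "records.jsonl")
```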
• Experience with large-scale web scraping
• Solid Python programming experience
• Solid experience with Apache Spark (PySpark); experience with the Databricks Platform is a nice-to-have
• Familiarity with tools for crawling, extracting, and processing data (e.g. BrightData, Scrapy, BeautifulSoup); see the sketch below
• Familiarity with extracting data from paid partner and publicly available API endpoints
• Familiarity with ETL systems like Apache NiFi a plus
• Understanding of streaming and/or event-sourcing architectures
• Experience with Elasticsearch, SQL Server, MongoDB
• Experience with MS Azure or similar, and monitoring tools like Datadog
• Experience with agile development, version control, open-source practices, code review
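As a small, hedged example of two of the tools named above working together, the sketch below uses requests and BeautifulSoup to extract links from a page and then loads the results into a PySpark DataFrame for further processing. The target URL and CSS selector are hypothetical placeholders.

```python
# Scrape (title, href) pairs with BeautifulSoup, then hold them in a Spark DataFrame.
# PAGE_URL and the "h2 a" selector are hypothetical; adapt them to the real site.
import requests
from bs4 import BeautifulSoup
from pyspark.sql import SparkSession

PAGE_URL = "https://example.com/articles"  # hypothetical page to scrape


def scrape_links(url: str) -> list[tuple[str, str]]:
    """Return (title, href) pairs for every anchor inside an <h2> heading."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [(a.get_text(strip=True), a.get("href", ""))
            for a in soup.select("h2 a")]


if __name__ == "__main__":
    spark = SparkSession.builder.appName("scrape-demo").getOrCreate()
    df = spark.createDataFrame(scrape_links(PAGE_URL), schema=["title", "url"])
    df.show(truncate=False)
    spark.stop()
```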
• Continuous Learning: Develop a holistic skill set spanning technical, soft, and management skills
• Top Benefits: Best benefits package in Latin America, including personal time off, family leave, health benefits, and personal & work reimbursements
• Work Directly with Clients: Build relationships that last, with the opportunity for Clients to hire engineers after 2 years
Apply Now