July 27
• Design, construct, install, test, and maintain highly scalable data management systems
• Ensure systems meet business requirements and industry best practices
• Design, implement, and continuously expand data pipelines by performing extraction, transformation, and loading activities
• Develop, maintain, and optimize data pipeline architectures
• Collaborate with data scientists, analysts, and other stakeholders to ensure data accuracy and availability
• Monitor and troubleshoot data processing systems to ensure seamless data flow
• Integrate new data management technologies and software engineering tools into existing structures
• Employ data engineering and cloud computing best practices to manage and process large datasets efficiently
• Implement and manage data security policies and procedures
• Minimum 4 years of experience as a Data Engineer
• At least 3 years of experience with AWS
• Strong experience with the Python programming language
• Proficiency with AWS services, including but not limited to API Gateway, Lambda, SQS, SNS, EC2, CloudFront, CloudWatch, IAM, and S3
• Solid experience with relational and non-relational databases
• Strong understanding of data warehousing, ETL processes, and data architecture
• Experience with data pipeline and workflow management tools
• Strong problem-solving skills and attention to detail
• Excellent communication and collaboration abilities
• Experience with data modeling, data schema design, and data integration
• Familiarity with big data technologies such as Hadoop, Spark, Snowflake, or Kafka is a plus
• Experience with AI and ChatGPT models is a plus
• English C1 or above is required
• Contractor model
• 100% remote
• Salary in USD
• Paid vacations
• Day off for your birthday
• Benefits: courses and/or certifications
• Work on leading projects for our US customers, not on the bench
Apply Now