SYMVOS® is a leading provider of global people and process solutions, focused on enhancing business operations through digital transformation. The company emphasizes innovation, diversity, and inclusion in its services, which range from business process outsourcing and staffing to cyber security and digital marketing. With a commitment to fostering growth opportunities for clients, SYMVOS employs cutting-edge technologies to address challenges and enable meaningful relationships between brands and their customers.
IT Technology • Business Process Outsourcing • Digital Transformation • Diversity Talent Acquisition • Automation
December 9, 2024
AWS
Azure
Cloud
Cyber Security
Docker
ETL
Google Cloud Platform
Hadoop
Java
Kafka
Kubernetes
MySQL
Node.js
NoSQL
Postgres
Python
Scala
Spark
SQL
This is a remote position.

Job Role: Data Engineer

Overview: As a Data Engineer, you will play a crucial role in designing, building, and maintaining scalable data pipelines and infrastructure for our organization. You will work closely with data scientists, analysts, and other stakeholders to ensure optimal data flow and integration for analytics, machine learning, and business intelligence. This role requires a deep understanding of data architecture, ETL processes, and data modeling, along with proficiency in programming and scripting languages.

Key Responsibilities:
• Data Pipeline Development: Design, implement, and maintain scalable, efficient data pipelines to ingest, transform, and store large volumes of data from various sources (e.g., databases, APIs, logs). Optimize pipelines for performance, reliability, and scalability.
• Data Integration and ETL: Develop and implement ETL processes to cleanse, transform, and integrate data into data warehouses or data lakes. Ensure data quality and consistency across data sources.
• Data Modeling: Design and implement data models and schemas to support analytics and reporting requirements. Collaborate with data analysts and scientists to understand data requirements and translate them into technical solutions.
• Database Management: Manage and optimize databases (SQL and NoSQL) for performance and scalability. Implement schema changes, indexes, and optimizations as needed.
• Data Infrastructure: Design and deploy infrastructure for data storage and processing, balancing availability, reliability, and cost-effectiveness (e.g., cloud services such as AWS, Azure, or GCP).
• Collaboration and Communication: Work closely with cross-functional teams (data scientists, analysts, software engineers) to support their data infrastructure needs.
• Monitoring and Support: Monitor data pipelines and infrastructure to ensure data availability, integrity, and performance. Troubleshoot and resolve issues related to data processing and storage.
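To give a concrete sense of the ETL work described above, here is a minimal extract-transform-load sketch in Python. It is purely illustrative: the raw records, table name, and cleansing rules are hypothetical, and the standard-library sqlite3 module stands in for a warehouse database such as PostgreSQL.

```python
import sqlite3

# Hypothetical raw records, standing in for rows ingested from an API or log source.
RAW_EVENTS = [
    {"user_id": "42", "amount": "19.99", "country": "us"},
    {"user_id": "42", "amount": "5.00",  "country": "US"},
    {"user_id": "",   "amount": "3.50",  "country": "DE"},  # missing user_id: dropped
]

def transform(record):
    """Cleanse one raw record; return None if it fails validation."""
    if not record["user_id"]:
        return None
    return (int(record["user_id"]), float(record["amount"]), record["country"].upper())

def run_pipeline(conn, raw_records):
    """Load validated records into the (hypothetical) fact_sales table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_sales (user_id INTEGER, amount REAL, country TEXT)"
    )
    clean = [row for row in (transform(r) for r in raw_records) if row is not None]
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", clean)
    conn.commit()
    return len(clean)

conn = sqlite3.connect(":memory:")
loaded = run_pipeline(conn, RAW_EVENTS)
print(loaded)  # 2 rows pass validation and are loaded
```

In a production pipeline the same extract-validate-load shape would typically be orchestrated by a scheduler and write to a managed warehouse, but the separation of transformation logic from loading shown here is the core idea.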
Requirements:
• 5+ years of proven experience as a Data Engineer or in a similar role.
• Strong programming skills in languages such as Python, Java, or Scala for data manipulation and scripting.
• Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
• Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka).
• Familiarity with cloud platforms and services (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes).
• Understanding of data warehousing concepts and architectures.
• Knowledge of data modeling, ETL tools, and data integration techniques.
• Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
• Relevant certifications (e.g., AWS Certified Big Data - Specialty) are a plus.
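The data-modeling and warehousing knowledge listed above often comes down to dimensional (star-schema) design: narrow fact tables of measurable events joined to wider dimension tables of descriptive attributes. The sketch below illustrates that pattern with hypothetical table and column names, again using sqlite3 so the example is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes, one row per customer (hypothetical schema).
cur.execute("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name         TEXT,
    country      TEXT
)""")

# Fact table: measurable events, referencing the dimension by surrogate key.
cur.execute("""
CREATE TABLE fact_order (
    order_id     INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount       REAL
)""")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Ada", "UK"), (2, "Grace", "US")])
cur.executemany("INSERT INTO fact_order VALUES (?, ?, ?)",
                [(10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0)])

# Typical reporting query: aggregate the facts, labelled by dimension attributes.
rows = cur.execute("""
    SELECT c.country, SUM(f.amount)
    FROM fact_order AS f
    JOIN dim_customer AS c USING (customer_key)
    GROUP BY c.country
    ORDER BY c.country
""").fetchall()
print(rows)  # [('UK', 150.0), ('US', 75.0)]
```

The design choice worth noting is the surrogate key: facts never store descriptive attributes directly, so dimension rows can change (or be versioned) without rewriting the fact table.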