Senior Data Engineer

October 26


Cybernetic Controls Ltd

recruitment • remote hiring • offshore recruitment • remote jobs • work from home

51–200 employees

Description

Overview

At Cybernetic Controls Limited (CCL), we are committed to global leadership in providing innovative digital solutions that empower businesses to reach their full potential. As a remote-first company, we believe in empowering our employees to work in a way that best suits their individual needs, fostering a culture of flexibility and trust. Since our founding in 2020, we have successfully delivered high-quality resources to our clients in the FinTech sector across various business areas. Read more on the Cybernetic Controls website.

Our Client

We are a multi-award-winning RegTech company on a mission to transform the quality of regulatory reporting in the financial services industry. We have combined regulatory expertise with advanced technology to develop our market-leading quality assurance services. Unique in being able to fully assess data quality, our services are used by some of the world's largest investment banks, asset managers, hedge funds and brokers, helping them to reduce costs, improve quality and increase confidence in their regulatory reporting.

Job Summary

Our client is seeking a Senior Data Engineer to join their fast-growing team. The successful candidate will join the testing team to work on ETL and development tasks. This is an exciting and challenging opportunity to build out new pipelines, combining and processing large amounts of structured data from a variety of sources with the power of PySpark at your fingertips.

Requirements

• Excellent Python and PySpark programming (including Pandas/PySpark dataframes, web and database connections)
• Excellent understanding of ETL processes within Amazon Web Services (AWS): Apache Spark, AWS Glue, Athena, S3, Step Functions, Lake Formation
• Software development lifecycle best practices
• Test-driven development
• Serverless computing (AWS Lambda, API Gateway, SQS, SNS, EventBridge, S3, etc.)
• SQL and NoSQL database design and management (DynamoDB, MySQL)
• Strong SQL coding skills (Spark SQL, Presto SQL, MySQL, etc.)
• Infrastructure as code (CloudFormation)
• Experience in shell scripting (preferably Linux)
• Version control with Git/GitHub
• Agile principles, processes and tools
• Excellent written and verbal communication skills
• Designing, deploying and managing complex production data pipelines that interact with a range of data sources (file systems, web, databases, users)
• Strong experience with Amazon Web Services (AWS)
• 5 years' experience in the data engineering field
• At least 2 years' experience with PySpark and AWS data tools (particularly Glue)
• Data modeling, data pipeline architecture and Big Data implementation
• Financial knowledge would be an asset

Qualifications/Training:

• Bachelor's degree or equivalent in Computer Science or a related subject

Benefits

• Competitive salary package
• Private healthcare contribution
• Annual pay review
• Regular team socials
• Working within a culture of innovation and collaboration
• Opportunity to play a key role in a pioneering growth company
• Company laptop provided

Apply Now

