Spark • Impala • Kafka • Kudu • Scala
201 - 500 employees
Founded 2014
🤖 Artificial Intelligence
☁️ SaaS
🏢 Enterprise
💰 $2.5M Seed Round on 2018-03
Yesterday
Airflow
Amazon Redshift
AWS
Azure
Cassandra
Cloud
ElasticSearch
Google Cloud Platform
Hadoop
HDFS
Kafka
Matillion
NoSQL
Python
Spark
SQL
Terraform
Go
• Join phData, a leader in the modern data stack.
• Work on technology projects related to Snowflake, Cloud Platform (AWS/Azure), and services hosted in the cloud.
• Operate and manage modern data platforms, respond to pager incidents, and solve complex problems.
• Take clear ownership of tasks, and continually grow and learn, working a 24/7 rotational shift.
• Working knowledge of SQL and the ability to write, debug, and optimize SQL queries.
• Good understanding of writing and optimizing Python programs.
• Experience providing operational support across a large user base for a cloud-native data warehouse (Snowflake and/or Redshift).
• Experience with cloud-native data technologies in AWS or Azure.
• Proven experience learning new technology stacks.
• Strong troubleshooting and performance tuning skills.
• Client-facing written and verbal communication skills and experience.
• Production experience and certifications in core data platforms such as Snowflake, AWS, Azure, GCP, Hadoop, or Databricks.
• Production experience with cloud and distributed data storage technologies such as S3, ADLS, HDFS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
• Production experience with data integration technologies such as Spark, Kafka, event/streaming, StreamSets, Matillion, Fivetran, HVR, NiFi, AWS Database Migration Service, or Azure Data Factory.
• Production experience with workflow management and orchestration tools such as Airflow, AWS Managed Airflow, Luigi, or NiFi.
• Working experience with infrastructure as code using Terraform or CloudFormation.
• Expertise in a scripting language (preferably Python) to automate repetitive tasks.
• Well versed in continuous integration and deployment frameworks, with hands-on experience using CI/CD tools like Bitbucket, GitHub, Flyway, and Liquibase.
• Bachelor's degree in Computer Science or a related field.
• Medical Insurance for Self & Family
• Medical Insurance for Parents
• Term Life & Personal Accident
• Wellness Allowance
• Broadband Reimbursement
• Professional Development Allowance
• Reimbursement of Skill Upgrade Certifications
• Certification Reimbursement
2 days ago
As an AWS DevOps Engineer, you will automate deployments and design best practices on AWS, collaborating with clients to enhance cloud infrastructure and services.
2 days ago
Join Masabi as a Site Reliability Engineer, enhancing fare collection technology and improving platform reliability.
🇮🇳 India – Remote
💰 Venture Round on 2022-03
⏰ Full Time
🟡 Mid-level
🟠 Senior
⛑ DevOps & Site Reliability Engineer (SRE)
December 14
Join Velsera as a DevOps Engineer, automating deployments and managing cloud infrastructure to enhance operational processes.
December 14
Join a team creating breakthrough software products that drive growth for industry leaders.
🇮🇳 India – Remote
💰 Private Equity Round on 2021-10
⏰ Full Time
🟡 Mid-level
🟠 Senior
⛑ DevOps & Site Reliability Engineer (SRE)
December 13
Join Red Hat as a Site Reliability Engineer for OpenShift in India. Develop and manage scalable cloud solutions with a focus on automation.
🇮🇳 India – Remote
💰 Corporate Round on 1999-03
⏰ Full Time
🟡 Mid-level
🟠 Senior
⛑ DevOps & Site Reliability Engineer (SRE)