Job Description:
We are seeking a motivated and experienced Operations Engineer. Cloudera believes in using the same technology we deliver to customers to run an operationally efficient, data-driven business.
This person will be responsible for the ongoing maintenance, upgrade, and expansion of our in-house Enterprise Hadoop environment.
In that capacity, this person will work closely with engineering, helping refine and apply best practices for the operation of an enterprise-scale multi-tenant Cloudera Hadoop cluster.
The Operations Engineer will also contribute to the cost-effective use of public and/or private cloud technologies.
This person will also monitor and improve the flow of data that is critical for Cloudera’s operations, working with Data Engineers and Application Developers to improve the quality and performance of jobs.
The role also requires performance tuning of Hadoop clusters and of MapReduce and Spark routines, as well as management and support of Hadoop services including HDFS, Hive, Impala, and Spark.
The Operations Engineer will develop deployment, continuous integration, and other DevOps practices to ensure applications operate reliably throughout their lifecycle.
The Operations Engineer will also manage the infrastructure necessary for our DevOps practices.
At Cloudera, our goal is to make each individual feel valued for their contributions to the company's mission. We strive to hire data-driven individuals who can help us become a better company through an improved understanding of our business.
Responsibilities
Maintain, upgrade, and expand a business-critical Hadoop cluster
Work closely with engineering on cluster improvements and issues
Intelligently use cloud services to improve efficiency and reduce costs
Track and improve cluster and job performance
Monitor critical applications or data pipelines and improve their reliability
Create DevOps practices to standardize and simplify deployment and CI practices
On-call and weekend work is required
Requirements
Knowledge of Cloudera Manager and the Hadoop ecosystem
Experience debugging issues in unfamiliar systems
Exposure to cloud platforms such as AWS, Azure, GCE, or OpenShift
Knowledge of containerized systems like Docker and Kubernetes
Excellent collaboration and communication skills
Nice to Have
Programming experience, especially in Java or Python
Experience with configuration management, especially SaltStack
Experience with continuous integration tools like Jenkins