5+ years of experience working with Big Data technologies.
Experience designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Spark, Kafka, and Hive.
Experience with NoSQL datastores such as Cassandra, Elasticsearch, HBase, and MongoDB.
Proven skills in designing, tuning, and optimizing scalable, highly available distributed systems that handle high data volumes.
Strong understanding of software engineering principles and fundamentals including data structures and algorithms.
Hands-on proficiency in Java is a must.
Solid data modelling experience to address scale and read/write performance.
Excellent written and oral communication skills on both technical and non-technical topics.
Excellent analytical and problem-solving skills.
Experience with cloud computing platforms such as AWS or GCP is a plus.
Experience with the web technology stack, including REST web services, Spring Boot, Docker, and Kubernetes, is a plus.
Act as a leader within your area of expertise to motivate, guide, and inspire teammates.
Lead efforts to build scalable, distributed, and highly available systems and pipelines.
Self-directed, self-motivated, and detail-oriented, with the ability to produce sound design proposals and thorough analyses of production issues.
Work with cross-functional teams to drive requirements. Develop, coach, and train junior developers; work with the team manager and PM to estimate scope and team capacity;
respond to urgent requests from executives or the business; ensure world-class products and experiences.
Education & Experience