Big Data Engineer - Hadoop Ecosystem (4-6 yrs) Gurgaon/Gurugram (Systems/Product Software)
Vinclo System
Gurugram, India

Urgent opening for a Big Data Engineer at our Gurgaon location.

Skills Required :

1. 4+ years of experience with the Hadoop ecosystem, integrating and implementing solutions using technologies such as Hive, Pig, MapReduce, and HDFS.

2. Proficient understanding of the distributed computing paradigm and of real-time versus batch processing.

3. Proficiency working with the Hadoop ecosystem and other distributed computing frameworks.

4. Experience writing ETL processes and connecting to different data sources through a variety of mediums and technologies.

5. Proficiency in at least one of the following programming languages: Java, Python, or Scala.

6. Experience with AWS EMR is an added advantage.

Roles and Responsibilities :

1. Integrate with different data sources to collect, process, and transfer data in real time using various Big Data tools and frameworks.

2. Write ETL processes in Java, Python, or Scala using real-time distributed computing frameworks such as Spark, Storm, and Flink.

3. Integrate with databases in the Hadoop ecosystem, such as Hive, Impala, and HBase.
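The responsibilities above all follow the extract-transform-load pattern: pull raw records from a source, clean and reshape them, and write them to a sink. As a minimal, framework-free sketch of that shape in Python (the record fields, sample data, and SQLite sink are hypothetical illustrations, not the actual stack, which would use Spark/Storm/Flink connectors against Hive, HDFS, or Kafka):

```python
import sqlite3

# Hypothetical source records; a real pipeline would read these from
# Hive, HDFS, Kafka, etc. via Spark, Storm, or Flink connectors.
RAW_EVENTS = [
    {"user": "a1", "amount": "12.50", "status": "ok"},
    {"user": "b2", "amount": "bad",   "status": "ok"},    # malformed amount
    {"user": "c3", "amount": "7.25",  "status": "fail"},  # failed event
]

def extract(records):
    """Extract: yield raw records from the (simulated) source."""
    yield from records

def transform(records):
    """Transform: keep successful events and parse amounts to float."""
    for r in records:
        if r["status"] != "ok":
            continue  # drop failed events
        try:
            yield {"user": r["user"], "amount": float(r["amount"])}
        except ValueError:
            continue  # drop rows whose amount cannot be parsed

def load(records, conn):
    """Load: write cleaned rows into a SQLite sink table."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (:user, :amount)", list(records))
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_EVENTS)), conn)
rows = conn.execute("SELECT user, amount FROM events").fetchall()
print(rows)  # only the clean, successful event survives
```

In a Spark job the same three stages map onto reading a DataFrame from a source, applying filter/select transformations, and writing to a sink; the generator-based structure here mirrors that lazily evaluated pipeline.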
