Clairvoyant India - Big Data Engineer - Java/Hadoop
Clairvoyant India Pvt Ltd
Hyderabad
Source: hirist.com

Company background

At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services.

In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving the customer's business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.

Role Specifics

In this role, you'll get:

To work with an energetic team that strives to produce high-quality, scalable software and is highly motivated to up its game every quarter.

To work on building a near-real-time data solution for large-scale problems. To interact with the client on a daily basis, with an opportunity to explore the domain, understand the problem directly from the client, and participate in brainstorming sessions.

To work on various niche technologies in the Data Engineering space, such as Spark Streaming, Kafka, HBase, the Hadoop ecosystem, and Java/J2EE. To work on various services on cloud platforms (AWS/GCP).
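As a rough illustration of the kind of near-real-time stack named above, here is a minimal sketch in Java of a Spark Structured Streaming job that reads events from a Kafka topic and keeps a running count per value. The broker address, topic name, and console sink are illustrative assumptions, not details of any actual Clairvoyant solution.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import static org.apache.spark.sql.functions.col;

public class ClickStreamCounts {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("ClickStreamCounts")
                .getOrCreate();
        // Read a stream of records from Kafka; key and value arrive as binary columns.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
                .option("subscribe", "clickstream")               // assumed topic
                .load()
                .selectExpr("CAST(value AS STRING) AS page");
        // Maintain a running count of events per page value.
        Dataset<Row> counts = events.groupBy(col("page")).count();
        StreamingQuery query = counts.writeStream()
                .outputMode("complete")
                .format("console")
                .start();
        query.awaitTermination();
    }
}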

To gain experience in building hybrid (on-prem and cloud) data solutions.

We expect you to have:

Passion, comprehension, and analytical and logical capabilities. Ability to understand the vision and mission of the department and company.

Ability to understand the problem space we are dealing with. Ability to learn quickly and to drive conversations with the team and client to solve problems.

Ability to comprehend the problem, analyze, discuss, and brainstorm possible solutions with the team, and propose the optimal solution.

Ability to break down complex problems into smaller logical pieces, solve them, and stitch the solutions back together to solve the bigger problem.

Must Have Skills

Ability to code in Java; should be a regular coder. Should be able to write algorithms and convert the algorithms into code.
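As a small, hedged example of turning an algorithm into Java code, the snippet below implements binary search against the List interface; the data values are arbitrary.

import java.util.Arrays;
import java.util.List;

public class BinarySearchExample {
    // Returns the index of target in an ascending sorted list, or -1 if absent.
    static int binarySearch(List<Integer> sorted, int target) {
        int lo = 0, hi = sorted.size() - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            int value = sorted.get(mid);
            if (value == target) return mid;
            if (value < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(2, 5, 8, 13, 21, 34);
        System.out.println(binarySearch(data, 13)); // 3
        System.out.println(binarySearch(data, 7));  // -1
    }
}

Declaring the parameter as List rather than ArrayList is the "code to interfaces" habit mentioned below: any List implementation can be passed in without changing the method.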

Good hold on the Collections Framework and OOP concepts. Ability to code to interfaces.

Spark: Good understanding of the Spark (including streaming) architecture and the Spark job lifecycle. Ability to map Spark features to the problem at hand.
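For context on the job lifecycle, a minimal Spark batch job in Java is sketched below: transformations only build the execution plan, and the final action is what triggers stage planning and task execution. The input path and local master are assumptions for illustration.

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "WordCount");
        JavaRDD<String> lines = sc.textFile("hdfs:///data/input.txt"); // assumed path
        // Transformations below are lazy; they only build the DAG.
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);
        // The action triggers the actual job: stages are planned and tasks run on executors.
        counts.collect().forEach(pair -> System.out.println(pair._1() + "\t" + pair._2()));
        sc.close();
    }
}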

Ability to explain the use cases for shared variables, along with their pros and cons. Monitoring of jobs for correctness and performance issues, and ability to identify the bottlenecks in a job.
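Spark's two shared-variable types are illustrated in the hedged Java sketch below: a broadcast variable (a read-only lookup shipped to each executor once) and a LongAccumulator (a counter that tasks add to and the driver reads back). The lookup data and counter name are illustrative.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.util.LongAccumulator;

public class SharedVariablesExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "SharedVariablesExample");
        Map<String, String> countryNames = new HashMap<>();
        countryNames.put("IN", "India");
        countryNames.put("US", "United States");
        // Pro: shipped once per executor instead of with every task. Con: read-only and must fit in memory.
        Broadcast<Map<String, String>> lookup = sc.broadcast(countryNames);
        // Pro: cheap cross-task counter, handy for correctness monitoring.
        // Con: updates made inside transformations can be double-counted if tasks are retried.
        LongAccumulator unknownCodes = sc.sc().longAccumulator("unknownCodes");
        JavaRDD<String> codes = sc.parallelize(Arrays.asList("IN", "US", "BR"));
        JavaRDD<String> names = codes.map(code -> {
            String name = lookup.value().get(code);
            if (name == null) {
                unknownCodes.add(1);
                return "UNKNOWN";
            }
            return name;
        });
        names.collect().forEach(System.out::println);
        System.out.println("Unknown codes: " + unknownCodes.value());
        sc.close();
    }
}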

Performance tuning techniques.

Cloud Engineering: Cloud exposure to rehosting, re-platforming, and modernising legacy applications. A strong foundation in cloud capabilities and the ability to apply them to real-life data workloads.
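One common way such cloud capabilities meet a data workload is an event-driven step: the hedged Java sketch below is an AWS Lambda handler that logs each new object landing in an S3 bucket, the kind of trigger a data pipeline might use to kick off its next stage. The class name and the follow-up action are assumptions, not a prescribed design.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class NewObjectHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // Each record describes one object-created notification.
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            context.getLogger().log("New object: s3://" + bucket + "/" + key);
            // A real pipeline would start its next step here, e.g. a Step Functions execution or an EMR step.
        });
        return "processed " + event.getRecords().size() + " record(s)";
    }
}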

At least 2 years of experience in any of the public clouds (AWS/Azure/GCP). Cloud data pipeline experience with (Glue/DMS/S3), processing (EMR/Lambda/Step Functions), and visualisation (QuickSight) capabilities.

Hadoop Ecosystem: Understanding of HDFS architecture.

Should be able to explain the read and write process in HDFS, along with the internals (which component handles what task). Good understanding of Hive; ability to decide on what kind of tables are to be used.
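To make the HDFS read path concrete, here is a minimal sketch in Java using the HDFS client API: the client asks the NameNode for block locations and then streams the block data directly from DataNodes. The file path is an illustrative assumption.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(new Path("/data/events/part-00000"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}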

YARN and its role in Spark and MapReduce jobs.

Good To Have

Design Patterns and Clean Code principles. Spring Boot and REST API experience. NoSQL databases like HBase/MongoDB. Kafka or any equivalent technology. Cloud platform experience (GCP preferably). Python.
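For the Spring Boot and REST item, a minimal hedged sketch of a REST endpoint in Java is shown below; the endpoint path and payload are assumptions made only for illustration.

import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class JobStatusApi {
    public static void main(String[] args) {
        SpringApplication.run(JobStatusApi.class, args);
    }

    // GET /jobs/{id}/status -> {"id":"...","status":"RUNNING"}
    @GetMapping("/jobs/{id}/status")
    public Map<String, String> status(@PathVariable("id") String id) {
        return Map.of("id", id, "status", "RUNNING");
    }
}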
