Big Data Lead
Saama Technologies
Pune, India
6d ago

Description

Deep experience with and understanding of Apache Hadoop and its surrounding technologies required: Spark, Impala, Hive, Flume, Parquet, and MapReduce.

Strong understanding of development languages including Java, Python, Scala, and shell scripting. Expertise in the principles and usage of the Apache Spark 2.x framework.

Should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java.

Should have proven experience in performance tuning of Spark applications, from both an application-code and a configuration perspective.

Should be proficient in Kafka and its integration with Spark.
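The Kafka-to-Spark integration mentioned above can be sketched with Spark Structured Streaming. This is a minimal, hypothetical example: it assumes a local Kafka broker at `localhost:9092`, a topic named `events`, and the `spark-sql-kafka` connector on the Spark classpath (none of which are specified in the posting).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hypothetical local Spark session for illustration
spark = (SparkSession.builder
         .appName("kafka-stream-sketch")
         .getOrCreate())

# Subscribe to a (hypothetical) Kafka topic named "events"
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as binary; cast the value to a string for processing
parsed = events.select(col("value").cast("string").alias("payload"))

# Write the parsed stream to the console; a production job would target
# a durable sink such as Parquet files or another Kafka topic
query = (parsed.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()  # block until the stream is stopped
```

Because the source is an external broker, this sketch is not runnable standalone; it shows the shape of a typical `readStream`/`writeStream` pipeline rather than a specific deployment.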

Should be proficient in Spark SQL and data warehousing techniques using Hive.

Should be very proficient in Unix shell scripting and in operating on Linux.
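The shell-scripting requirement typically means the glue around jobs: log scanning, counting, and exit-code gating. A minimal sketch, with the log file created inline so the example is self-contained (a real script would point at an actual application log):

```shell
#!/bin/sh
# Scan a log for errors and summarize counts per level --
# typical plumbing around a batch job.

LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2023-01-01 10:00:01 INFO  job started
2023-01-01 10:00:05 WARN  slow stage detected
2023-01-01 10:00:09 ERROR task failed
2023-01-01 10:00:12 ERROR task failed
EOF

# Count occurrences of each log level (field 3)
awk '{ counts[$3]++ } END { for (l in counts) print l, counts[l] }' "$LOG" | sort

# Capture the error count, as a CI gate or alerting hook might
ERRORS=$(grep -c 'ERROR' "$LOG")
rm -f "$LOG"
echo "errors=$ERRORS"
```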

Should have knowledge of cloud-based infrastructure.

Good experience in tuning Spark applications for performance improvements.

  • Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
  • Experience with software development best practices: version control systems, automated builds, etc.

    Experienced in, and able to lead, the following phases of the Software Development Life Cycle on any project: feasibility planning, analysis, development, integration, test, and implementation. Capable of working within a team or as an individual. Able to create technical documentation.

    Skills:

    Hadoop, Spark, Apache Hive, Apache Flume, Java, Python, Scala, MySQL, Game Design and Technical Writing
