Software Engineer, Big Data
Mast Global
Bangalore, Karnataka, IN

Description

OVERVIEW

The Enterprise Data Warehouse Software Engineer in Big Data is a key role for BI development activities required to enable Business Intelligence solutions for L Brands.

Responsibilities will include providing technical solutions to complex business requirements through innovative data architecture design and data services solutions.

This role requires hands-on experience in data warehousing, business intelligence, data modeling, ETL, Hadoop / MapR technologies, and data services development using SQL, Spark with Java, and Hive techniques and architecture.

This role will interface and work in collaboration with a highly talented, energetic, and diverse BI team and other cross-functional teams (POS, finance, manufacturing, enterprise planning, HRMS, supply chain, and several other systems) to devise end-to-end solutions.

Responsibilities

  • Design & develop data flows, data models & data warehouses / data solutions
  • Collaborate with report developers to source relevant data and build solutions to support development of dashboards / reports
  • Provide user support, including incident management, data issues, maintenance of daily / weekly data refresh schedules, & on-call responsibilities to meet business SLAs

  • Deliver required documentation for build and support responsibilities, including architecture and data flow diagrams

Qualifications

Education
Required: Bachelor's Degree in Computer Science / Information Systems / Mathematics / Sciences
Preferred: Master's Degree in Computer Science / Information Systems

Required Skills / Qualifications
  • Must have extensive hands-on experience in designing, developing, and maintaining software solutions in a Hadoop cluster
  • Must have working experience in Spark with Java
  • Must demonstrate Hadoop best practices
  • Must have experience with strong UNIX shell scripting, Sqoop, and Eclipse (or any IDE)
  • Must have experience developing Pig scripts / Hive QL and UDFs for analyzing semi-structured, unstructured, and structured data flows
  • Must have experience with NoSQL databases such as HBase, MongoDB, or Cassandra
  • Demonstrates broad knowledge of technical solutions, design patterns, and code for medium / complex applications deployed in Hadoop production
  • Good to have knowledge of Spring Boot microservices
  • Good to have working experience developing MapReduce programs running on a Hadoop cluster using Java / Python
  • Good to have working experience in data warehousing and Business Intelligence systems
  • Participate in design reviews, code reviews, unit testing, and integration testing
  • Assume ownership and accountability for assigned deliverables through all phases of the development lifecycle
  • SDLC methodology (Agile / Scrum / iterative development)