About Standard Chartered
We are a leading international bank focused on helping people and companies prosper across Asia, Africa and the Middle East.
To us, good performance is about much more than turning a profit. It's about showing how you embody our valued behaviours - do the right thing, better together and never settle - as well as our brand promise, Here for good.
We're committed to promoting equality in the workplace and creating an inclusive and flexible culture - one where everyone can realise their full potential and make a positive contribution to our organisation.
This in turn helps us to provide better support to our broad client base.
As a Hadoop Developer, you will be tasked with the analysis, design, development and implementation of business-specific Hadoop solutions and/or frameworks.
In addition, you will work closely with project teams, partners and third-party vendors.
Key Roles and Responsibilities :
Perform hands-on development to build data extraction, movement and integration, leveraging state-of-the-art tools and practices, including both streaming and batch data ingestion techniques.
Create data integration pipelines to extract, transform, and integrate data from a variety of sources and formats for analysis and use across use cases.
Perform data profiling, discovery, and analysis to determine the suitability and coverage of data, and identify the data types, formats, and data quality issues that exist within a given data source.
Analyse, Design, Develop, Test, Document and Implement solutions.
Work with source system and business SMEs to develop an understanding of the data requirements and the options available within data sources to meet the data and business requirements.
Create reusable data extraction/ingestion pipelines and templates that demonstrate the logical flow and manipulation of data required to move data from source systems into the target data lake and/or sandbox.
Assist in creation of data requirements and data model design as necessary and appropriate.
Help maintain code quality, organisation, and automation.
Plan and track implementations.
Ensure bugs are tracked to satisfactory closure during all stages of testing.
Undertake continuous improvement of the current infrastructure.
Our Ideal Candidate :
At least 8 years of hands-on working experience with Big Data platforms.
Strong skills in Java/Scala and Unix shell scripting.
Familiarity with Integrated Development Environments (IDEs) such as Eclipse and IntelliJ.
Strong query language skills (SQL, HiveQL) and experience with ETL and big data tooling (Hadoop, Spark, Talend).
Strong data analysis skills using Hive, Spark, R, MicroStrategy and Tableau.
Experience with scheduling tools such as Control-M and Oozie.
Experience with data integration and data security in the Hadoop ecosystem.
Documentation skills - requirements, use cases, business rules, user stories, test cases, design, architecture, data dictionaries, etc.
Knowledge of DevOps technologies is an added advantage.
Proven problem-solving skills.
Self-starter with the ability to work independently or as part of a team.
Able to adapt quickly to change.
Excellent written communication and presentation skills.
Experience working in an Agile environment.