candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes.
Expected deliverables include developing Big Data ELT jobs with a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.
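For illustration only, a minimal sketch of the ELT pattern described above, assuming an HDFS-backed data lake with hypothetical raw and warehouse tables (raw.sales_ext, dwh.sales_clean) and placeholder paths: raw data is landed in the lake first, then transformed in place with Hive.

    # Hypothetical ELT sketch: land raw files first, then transform inside the warehouse.
    # All paths, database names, and table names below are placeholders.
    hdfs dfs -mkdir -p /datalake/raw/sales/2024-01-01
    hdfs dfs -put /staging/sales_*.csv /datalake/raw/sales/2024-01-01/
    # Transform step runs inside Hive over an external table mapped to the raw files.
    hive -e "INSERT OVERWRITE TABLE dwh.sales_clean
             SELECT store_id, CAST(amount AS DECIMAL(10,2)) AS amount, sale_date
             FROM raw.sales_ext
             WHERE amount IS NOT NULL;"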
Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools data from GRAND's different regions and stores across GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS / Azure / Google) and infrastructure

Skills Needed:
- Very strong SQL. Demonstrated experience with RDBMS and Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB)
- Experience with UNIX and comfort working in the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts
- Big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop and to expand existing environments. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for those users (see the sketch after this list)
- Cluster maintenance, including creation and removal of nodes, using tools such as Ganglia, Nagios, and Cloudera Manager Enterprise
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
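As a rough illustration of the user-onboarding task above, a minimal sketch assuming a cluster where administrative commands can run as the hdfs superuser; the account name jdoe, the group hadoopusers, and the example-jar path are placeholders, not part of the role description.

    # Hypothetical onboarding of a new cluster user; names, groups, and paths are placeholders.
    sudo groupadd hadoopusers
    sudo useradd -m -G hadoopusers jdoe                      # Linux account on the gateway node
    sudo -u hdfs hdfs dfs -mkdir -p /user/jdoe               # HDFS home directory
    sudo -u hdfs hdfs dfs -chown jdoe:hadoopusers /user/jdoe
    sudo -u jdoe hdfs dfs -ls /user/jdoe                     # verify HDFS access
    sudo -u jdoe hive -e "SHOW DATABASES;"                   # verify Hive access
    sudo -u jdoe pig -e "fs -ls /user/jdoe"                  # verify Pig (and HDFS) access
    # Verify MapReduce by submitting the bundled example job (jar path varies by distribution).
    sudo -u jdoe hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10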