Design and implement big data platforms for various customer engagements.
Build highly scalable data pipelines for multi-cloud environments (both private and public clouds).
Design and implement data lakes, data warehouses, and data marts according to business requirements.
Support software developers, database architects, data analysts, and data scientists on data initiatives, and ensure that an optimal data delivery architecture is consistent across ongoing projects.
Establish the necessary security controls for the data platform, safeguarding data both in motion and at rest.
Establish data governance for the data platforms.
Handle, transform, and manage big data using big data frameworks and NoSQL databases.
Build the complete infrastructure to ingest, transform, and store data for further analysis and business requirements.
4 to 8 years of experience in software design and implementation, of which a minimum of 5 years in implementing big data solutions using big data frameworks.
Strong expertise in setting up data pipelines in the Azure cloud.
Strong experience in hosting big data frameworks on cloud-native platforms (Kubernetes and Docker).
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Big data stack: Spark, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka.
Strong experience in the Hadoop ecosystem: HDFS, Hive, Spark, Presto, NiFi, Flume, Sqoop, etc.
Good experience with stream processing systems such as Storm, Spark Streaming, etc.
Good expertise in the DevOps stack for establishing an agile big data platform.
Good skills in Infrastructure as Code and configuration management tools such as Terraform and Ansible.
Good expertise in NoSQL databases such as MongoDB, InfluxDB, TimescaleDB, Neo4j, Cassandra, etc.
Good expertise in SQL databases and their administration.
Good automation skills using shell scripting, Python, and PowerShell.