1. 5 years of experience in software application development
2. At least 2 years' experience with Big Data / Hadoop architecture and related technologies
3. Hands-on experience with Spark RDDs, Datasets, DataFrames, and Spark SQL
4. Hands-on experience with streaming technologies such as Spark Streaming and Kafka
5. Hands-on experience using SQL, Spark SQL, and HiveQL
6. Hands-on experience with Java 8, Scala, or Python, and with the corresponding IDEs
7. Hands-on experience using technologies such as Hive, Pig, and Sqoop
8. Knowledge of SQL and NoSQL databases such as Oracle, DB2, Teradata, and Cassandra
9. Experience with Amazon Web Services (AWS) cloud technologies such as S3, Athena, Glue, EMR, Lambda, RDS, EC2, DynamoDB, Kinesis, and Firehose
10. Hands-on experience with CI/CD processes for application software integration and deployment using Maven, Git, Jenkins, and Jules
11. Knowledge of the SDLC and Agile software development practices
12. Experience scheduling big data jobs
13. Hands-on experience working in a Unix environment
14. Good written, verbal, presentation, and interpersonal communication skills; willingness to work in a challenging, cross-platform environment
15. Strong analytical and problem-solving skills; ability to quickly master new concepts and applications