Job Description Summary
As part of GE’s Finance Data Lake team, this individual will join a platform engineering team with a strong innovation and R&D focus.
This technical team is tasked with provisioning cloud platforms for application development for GE's mission-critical financial products used across GE.
This person must work independently and serve as a technical leader within the team, developing technical skills and solving problems collaboratively.
Essential Responsibilities :
Perform scalable, microservices-based application development using the latest available technology stack, wing to wing, across all tiers of the application architecture.
Demonstrate proficiency in implementing logical / physical data models that follow best practices, and write code that meets standards and delivers the desired functionality using the technology selected for the project.
Perform a variety of data loads and data transformations, parsing, formatting, and transforming data into units consistent with analytical needs.
Bring best-in-class DevOps practices to the team, including Agile / Scrum, CI / CD, and container technologies.
Lead multiple workstreams and engage with stakeholders.
Proactively share information across the team, to the right audience, with the appropriate level of detail and timeliness, and collaboratively resolve issues and technical challenges.
Qualifications / Requirements :
Bachelor's Degree in Computer Science, Information Technology, or equivalent, with a minimum of 5 years of experience as a data engineer.
A minimum of 2 years of experience using the Hadoop ecosystem, MapReduce, Spark, and NoSQL databases (HBase, MongoDB, Cassandra, etc.) is required.
A minimum of 2 years of experience using scripting languages (Pig, Python, Perl, etc.) is required.
Experience working with various big data technologies such as MapReduce, Hive, Spark, NoSQL databases (HBase, MongoDB, Cassandra, etc.), or other Hadoop-ecosystem tools.
Experience with Greenplum, HVR, Talend, etc.
Scripting / programming experience in Bash, Python, or Perl is highly desirable.
Previous experience with technologies such as Elasticsearch, MongoDB, MemSQL, Kafka, Memcached, and Redis is a major plus.
Understands logical and physical data models, big data storage architecture, data modelling methodologies, metadata management, master data management, data lineage, and data profiling.
A minimum of 2 years of experience with Core Java and Java web services development (SOAP, REST APIs).
A minimum of 1 year of experience working with databases and SQL is required.
Understands the technology landscape, stays up to date on current technology trends and new technologies, and brings new ideas to the team.
Experience with visualization tools such as Tableau, Spotfire, or OBIEE is a plus.
Knowledge of tools like Jenkins, Chef, Ansible, Terraform, Docker, and Kubernetes.
Technical understanding of cloud technologies such as AWS and Azure.
Desired Characteristics :
Deep passion for learning; positive, can-do attitude.
Asks follow-up questions when presented with new data / projects; sees the broader implications of an idea.
Presents new ideas and concepts; makes connections among previously unrelated ideas.
Expresses information clearly and concisely, and is able to support ideas with relevant data; good communication skills (written and spoken).
Demonstrated awareness of how to function in a team setting
Strong debugging and troubleshooting abilities.
Flexibility, team spirit, and a sense of ownership.
Able to drive the team and its goals through agile engineering methods.
Ability to travel as needed.
Additional Eligibility Qualifications :
Relocation Assistance Provided : Yes