Advisory, Data and Analytics - Data Engineer (Big Data, Python, Data Analytics), Staff
Ernst & Young
Bangalore, India
2d ago
source : Wizbii

Job Description:

EY's Financial Services Office (FSO) is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge.

The FSO practice provides integrated advisory services to financial institutions and other capital markets participants, including retail and commercial banks, investment banks, broker-dealers, asset managers (traditional and alternative), insurance and trading companies, and the Corporate Treasury functions of leading Fortune 500 companies.

Within EY’s FSO Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future.

We dive deep into big data to extract the greatest value and discover opportunities in key functions like Banking and Insurance.

We help you to detect fraud using forensic data analytics, learn what customers really want, help create new business models, manage risk and support complex transaction decisions.

We combine technical progression with our commercial know-how. But we don't start with technology. Instead, we start with a question: what are the key areas that require immediate assistance?

Then we seek answers by analyzing the data. This way we help create a compelling business case for embedding the right analytical practices at the heart of your decision-making.

We also recognize that when it comes to data analytics, technology is only half of the equation. We help you deploy analytics into business processes at the stage where management makes significant decisions.

Organizational processes, culture and the human element are as critical as technology for extracting the true value from data analytics.

Key Responsibilities

  • Demonstrate strong technical capabilities and knowledge of building and maintaining large-volume data solutions
  • Design solutions for data aggregation, improve foundational data procedures, integrate new data management technologies and software into the existing system, and build data collection pipelines
  • Apply experience in programming, database management, and data warehousing for big data applications
  • Work as a team player on large- and small-scale client engagements while consistently delivering quality client service
  • Understand business and technical requirements
  • Conduct data discovery activities, perform root cause analysis, and make recommendations for remediating data quality issues
  • Write effective, scalable Python code
  • Support the growth of the data migration, data warehouse, and data integration practice through internal initiatives and by identifying new business opportunities
  • Develop back-end components to improve responsiveness and overall performance
  • Integrate user-facing elements into applications
  • Test and debug programs
  • Improve the functionality of existing systems
  • Assess and prioritize feature requests
  • Coordinate with internal teams to understand user requirements and provide technical solutions
  • Build a quality culture
  • Foster teamwork and lead by example
  • Train and mentor project resources
  • Participate in organization-wide people initiatives
  • Work as a team member contributing to the various technical streams of Data and Analytics implementation projects
  • Provide product- and design-level technical best practices
  • Interact with the onsite coordinators
  • Complete assigned tasks on time and report status regularly to the lead
Mandatory Experience

    Bachelor's degree or higher with adequate industry experience

    At least 4 years of experience with Big Data technologies and Python programming

    Strong experience in Big Data analytics, along with sound knowledge of Java and Python

    Should have completed at least two full life cycles of a data analytics project

    Establish scalable, efficient, automated processes for large-scale data analysis and management

    Discover, design, and develop analytical methods to support novel approaches of data and information processing

    Prepare and analyse historical data and identify patterns

    Prior experience and expertise in the following domains:

    o Building or improving data transformation processes, data visualisations and applications

    o Exposure to Kafka, Hadoop and data processing engines such as Spark or Hadoop MapReduce

    o Big Data querying tools such as Pig or Hive or Impala

    o Solutioning covering data ingestion, data cleansing, ETL, data mart creation and exposing data

    Experience working on Hadoop clusters and Big Data technologies such as Python and Spark, along with hands-on experience writing Java programs for big data analytics

    Significant programming experience in high-level languages such as Python, Java, or Scala

    Excellent analytical (including problem solving), technical and communication skills

    Write effective, scalable code in Java for Big Data Analytics.

    Hands-on experience in writing effective code in Python / R / SAS

    Must have experience writing MapReduce programs in Java

    Create solution approaches for innovative scenarios

    Define hypotheses and identify the analysis trail for a given business problem

    Collaborate with the technology team and support the development of analytical models through the effective use of data and analytic techniques

    Validate model results and articulate the insights to the business team

    Drive business requirements gathering for analytics projects

    Intellectual curiosity - eagerness to learn new things

    Experience with unstructured data

    Ability to effectively visualize and communicate analysis results

    Strong problem-solving skills & proactive attitude

    Client-focused, with good presentation, communication, and relationship-building skills

    Up to date on the latest industry regulations, with a keen interest in future technology trends

    Should be able to identify profitable opportunities, sell on at client work, and contribute significantly to supporting a market proposition or capability and to the wider practice

    Should inspire change and make an impact within the team and in a client organisation

Desired Experience:

    Experience working in Banking and Capital Markets


    Familiarity with cloud platforms such as AWS and Azure

    Experience using Agile methodologies

    Experience using statistical techniques such as regression modeling, optimization, and structuring


    Good to have: Spark, pandas, scikit-learn

    Good to have broader programming experience (e.g., application development or scripting languages such as Perl, Visual Basic, VBScript, or Unix shell scripts)

    Experience working with at least one NoSQL data store: HBase, Cassandra, or MongoDB

    Hands-on programming experience with Kafka and either Apache Spark (using Spark SQL and Spark Streaming) or Apache Storm

    Willingness to travel to meet client needs

    Prior client-facing experience

