SSE/TC-Google Cloud Platform
Tredence
Bengaluru, India

We are looking for an analytical, big-picture thinker who is driven to advance the mission of Tredence by delivering technology to internal business and functional stakeholders.

You will serve as a leader who drives IT strategy to create value across the organization. This Data Engineer will be empowered to lead the engagement, focusing on implementing both low-level, innovative solutions and the day-to-day tactics that drive efficiency, effectiveness, and value.

You will play a critical role in creating and analysing deliverables that provide the content needed for fact-based decision making and successful collaboration with business stakeholders.

You will analyse, design, and develop best-practice business changes through technology solutions.

Technical Requirements

  • Have implemented and architected solutions on Google Cloud Platform using its components.
  • Experience with Apache Beam / Google Dataflow / Apache Spark in creating end-to-end data pipelines.
  • Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, Machine Learning.
  • Experience programming in Java, Python, etc.
  • Expertise in at least two of these technologies: relational databases, analytical databases, NoSQL databases.
  • A Google Professional Data Engineer / Solution Architect certification is a major advantage.
Experience

  • 4-7 years' experience in IT or professional services, in IT delivery or large-scale IT analytics projects.
  • Candidates must have expert knowledge of Google Cloud Platform; experience with other cloud platforms is a plus.
  • Expert knowledge in SQL development.
  • Expertise in building data integration and preparation tools using cloud technologies (e.g., SnapLogic, Google Dataflow, Cloud Dataprep, Python).
  • Identify downstream implications of data loads / migrations (e.g., data quality, regulatory impact).
  • Implement data pipelines to automate the ingestion, transformation, and augmentation of data sources, and provide best practices for pipeline operations.
  • Ability to work in a rapidly changing business environment and to enable simplified user access to massive data by building scalable data solutions.
  • Advanced SQL writing and experience in data mining (SQL, ETL, data warehousing, etc.) and in using databases in a business environment with complex datasets.
About You

  • You are self-motivated, collaborative, eager to learn, and hands-on.
  • You love trying out new apps and find yourself coming up with ideas to improve them.
  • You stay on top of the latest trends and technologies.
  • You are particular about following industry best practices and have high standards for quality.