Ralph Lauren
Bangalore, Karnataka, India

At Ralph Lauren, we unite and inspire the communities within our company, as well as those we serve, by amplifying voices and perspectives to create a culture of belonging that ensures inclusion and fairness for all.

We foster a culture of inclusion through: Talent, Education & Communication, Employee Groups, and Celebration.

Position Overview

Who we are: The Ralph Lauren Corporation is a global leader in luxury fashion and design. Our Global Development Center (GDC) is focused on building high-quality technology solutions that enhance the business and customer experience across channels and geographies.

Ralph Lauren is embarking on a multi-year transformational journey to Digitize the Value Chain (DVC), which will reinvent the way we work by combining our talent and creativity with the latest innovations, lean processes, data-driven decision-making, and modern technologies.

The program will consist of COE teams across four tracks: Product Transformation, Supplier Collaboration, Supply & Demand Alignment, and a Data team comprising a hybrid of talent with business, technology, and analytical experience.

The Lead Data Engineer, DVC is an emerging role on Ralph Lauren’s DVC Data team and will play a pivotal part in delivering insights for the company’s most critical data and analytics initiatives.

Purpose & Scope: Based in Bengaluru, India, this Lead Data Engineer will work with the DVC program and Global Analytics team to build, maintain, and optimize data pipelines for key data and analytics consumers, including business and data analysts and data scientists, covering our digital and physical channels and value chain.

Data engineers must also ensure compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable data pipelines.

This would enable faster data access, integrated data reuse and vastly improved time-to-solution for Ralph Lauren’s data and analytics initiatives.

The data engineer will be measured on their ability to integrate analytics and/or data science results with Ralph Lauren’s business processes.

This role will require both creative and collaborative work with IT and the wider business. It will involve evangelizing effective data management practices and promoting a better understanding of data and analytics.

The data engineer will also be tasked with working with key business stakeholders, IT experts and subject-matter experts to plan and deliver optimal enterprise data assets.

Essential Duties & Responsibilities

  • Build data pipelines: The primary responsibility of data engineers is to architect, build, and maintain data pipelines that provision high-quality data ready for analysis. This includes ingestion, exploration, modeling, and curation of high-value data.

  • Lead and mentor data engineers: The Lead Data Engineer will be responsible for leading and developing a team of data engineers, focused on growing the team’s skills and its ability to execute as a team using DevOps and DataOps principles.
  • Drive automation through effective metadata management: The data engineer will be responsible for using innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, in order to minimize manual, error-prone processes and improve productivity. This includes:
      • Learning and using modern data preparation, integration, and AI-enabled metadata management tools and techniques.
      • Tracking data consumption patterns.
      • Performing intelligent sampling and caching.
      • Monitoring schema changes.
      • Recommending, and sometimes automating, existing and future integration flows.
  • Collaborate across departments: The newly hired data engineer will need strong collaboration skills in order to work with varied stakeholders within the organization. In particular, the data engineer will work closely with data analysts and business analysts to refine their data requirements for various data and analytics initiatives and their data consumption requirements.

  • Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. This includes applying their data and/or domain understanding to new data requirements, and proposing appropriate (and innovative) data ingestion, preparation, integration, and operationalization techniques to optimally address them.

    The data engineer will be required to train counterparts such as data scientists, data analysts, LOB users, and other data consumers in these data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.

  • Participate in ensuring compliance and governance during data use: It will be the responsibility of the data engineer to ensure that data users and consumers use the data provisioned to them responsibly, through data governance and compliance initiatives. Data engineers should work with data governance teams (and the information stewards within those teams) and participate in vetting and promoting content created in the business and by data scientists to the curated data catalog for governed reuse.

  • Become a data and analytics evangelist: The data engineer will be considered a blend of data and analytics evangelist, data guru, and fixer. This role will promote the available data and analytics capabilities and expertise to business unit leaders and educate them in leveraging these capabilities to achieve their business goals.

Experience, Skills & Knowledge

Education and Experience

  • A bachelor’s or master’s degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field is required.
  • An advanced degree (MS, Ph.D., or postgraduate diploma) in computer science, statistics, applied mathematics, information science, data management, information systems, or a related quantitative field is preferred.
  • The ideal candidate will have a combination of IT skills, data governance skills, analytics skills and Retail industry knowledge with a technical or computer science degree.
  • At least 8 years of work experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
  • At least 4 years of experience working in cross-functional teams and collaborating with business stakeholders in Retail in support of departmental and/or multi-departmental data management and analytics initiatives.
  • Deep Retail Industry knowledge or previous experience working in the business would be a plus.
Technical Knowledge / Skills

  • Strong experience with advanced analytics tools for object-oriented/object function scripting using languages such as R, Python, Scala, or similar.
  • Strong ability to design, build and manage data pipelines in PySpark and related technologies for data structures encompassing data transformation, data models, schemas, metadata and workload management.
  • The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.

  • Strong experience with popular database programming in relational and nonrelational environments, including AWS Redshift, AWS Aurora, SQL Server, and similar platforms.
  • Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and emerging data ingestion and integration technologies such as stream data integration and data virtualization.

  • Strong experience working with and optimizing existing ETL processes, data integration flows, and data preparation flows, and helping to move them into production.
  • Strong experience working with both open-source and commercial message queuing technologies such as Kafka and Amazon Simple Queue Service (SQS); stream data integration technologies such as Apache NiFi, Kafka Streams, and Amazon Kinesis; and stream analytics technologies such as Apache Kafka’s KSQL.
  • Basic experience working with popular data discovery, analytics, and BI software tools such as MicroStrategy, Tableau, Power BI, and others for semantic-layer-based data discovery.
  • Strong experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms.
  • A basic understanding of popular open-source and commercial data science platforms and languages such as Python, R, KNIME, and Alteryx is a strong plus but not required.
  • Basic experience working with data governance, data quality, and data security teams, and specifically with privacy and security officers, to move data pipelines into production with appropriate data quality, governance, and security standards and certification.
  • Demonstrated ability to work across multiple deployment environments including cloud, on-premises and hybrid, multiple operating systems and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service and others.
  • Experienced in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization.
Interpersonal Skills and Characteristics

  • Strong experience supporting and working with cross-functional teams in a dynamic business environment.
  • Required to be highly creative and collaborative. An ideal candidate would be expected to collaborate with both the business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly.
  • The successful candidate will also be required to have regular discussions with data consumers on optimally refining the data pipelines developed in nonproduction environments and deploying them in production.

  • Required to be approachable and able to interface with, and gain the respect of, stakeholders at all levels and roles within the company.
  • Is a confident, energetic self-starter, with strong interpersonal skills.
  • Has good judgment, a sense of urgency and has demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.