Fractal Analytics helps global Fortune 100 companies power every human decision in the enterprise by bringing analytics and AI to the decision.
Purpose of Role
This is a key role accountable for the development and operation of the Data Platform, driving maximum value from data for business users.
You will work as part of a cross-functional agile delivery team, including analysts, architects, big data engineers, machine learning engineers, and testers.
You will have the opportunity to work on complex problems, implementing high-performance solutions that run on top of our cloud-based big data platform.
Responsibilities
Work as part of the Data Engineering team to uphold and evolve common standards and best practices, collaborating to ensure that our data solutions are complementary rather than duplicative.
Build and maintain a high-performance, fault-tolerant, secure and scalable data platform to support multiple data solution use cases.
Interface with other technology teams to design and implement robust products, services and capabilities for the data platform, making use of infrastructure as code and automation.
Build and support tooling that enables our data engineers and data scientists to develop on our cloud-based big data platform.
Create patterns, common ways of working, and standardised guidelines to ensure consistency across the organisation.
Required Skills and Experience
Strong experience in AWS architecture and administration in production environments.
Solid experience of network and security on cloud-based environments, specifically on AWS services such as VPCs, Security Groups, NACLs and IAM roles.
Deep understanding of CI/CD using tools such as Jenkins, Bamboo, AWS CodePipeline or AWS CodeCommit; configuration management using tools such as Ansible, Puppet or Chef; and code repositories based on Git.
Expertise in CloudFormation / Terraform for automated provisioning of infrastructure.
Experience with object oriented and functional design, coding, and testing patterns as well as experience in engineering software platforms and large-scale data infrastructures.
Experience writing production quality code in Python / Java / Scala.
Experience building and maintaining distributed platforms that handle high volumes of data.
Strong platform-level design, architecture, implementation and troubleshooting skills.
Good understanding of Enterprise patterns and best practices applied to data engineering and data science use cases at scale.
Good understanding of AWS cloud storage and computing platform (especially S3, Athena, Redshift, Glacier, EMR, EC2, RDS).
Good understanding of DevOps / DataOps in an Agile Environment, familiarity with Jira and Confluence.
Understanding of at least one BI tool, such as Tableau, Qlik or Looker.
Understanding of the insurance value chain and supply chain.
Experience of Docker / Kubernetes would be beneficial.
Knowledge of streaming data technologies such as Kafka, AWS Kinesis and AWS Lambda.
Great problem-solving skills, and the ability and confidence to hack your way out of tight corners.
Ability to prioritise and meet deadlines.
Conscientious, self-motivated, and goal-orientated.
Excellent attention to detail.
Willingness and enthusiasm to work within existing processes and methodologies.