Senior Data Engineer (Big Data -> Hadoop / Spark / Kinesis -> Strategy & Delivery)
£Competitive, plus bonus, 27 days holiday and pension
Bristol, United Kingdom
We are looking for an experienced and motivated Senior Data Engineer who is passionate about leveraging data and building scalable data systems. You will be working alongside a growing team of like-minded engineers in Dyson's rapidly expanding IoT platform engineering group. You will be responsible for building world-class technology, employing the latest continuous delivery and DevOps practices to deliver the next generation of business insights and data systems for Dyson's connected products and platform.
Market Overview
In May 2016 Dyson launched the second of its connected products, the Dyson Pure Cool Link, which joins our existing 360 Eye robot vacuum cleaner and its companion eco-system, Dyson Link. Dyson Link is our IoT solution for enabling Dyson products to work in a connected environment. It includes the key components required to create a connected product, from mobile apps and web/CRM integrations to cloud services (provisioning, asset management, message routing, product/app analytics, persistence and scaling).
Function Overview
Internet-connected products are a growing area for Dyson, where we aim to continue our reputation for being innovative and disruptive. We're building world-class cross-functional Agile teams and adopting the latest technology and techniques to ensure we can deliver our ambitious vision in the connected space. You'll be working to create a world-class user experience in one of the fastest-moving consumer technology domains, alongside other engineers, designers, commercial strategists and electronics engineers.
Accountabilities
A Senior Data Engineer will be expected to:
- Push code daily that will be relied upon by our ever-growing fleet of connected users and products
- Answer business needs with data analytics
- Develop new features and extend the existing data platform using Java, Python and a range of deployment automation and monitoring tools
- Support and coach software developers and BI data engineers through advice, guidance and pairing
- Improve team efficiency by analysing working practices and procedures
- Work effectively as a key member of an agile development team utilising Scrum-based methodologies and tool suites, e.g. Atlassian JIRA/Stash
- Steer the direction of development to assist platform growth and feature enrichment
- Review the code of others for accuracy and functionality, offering guidance for improvement where needed
- Monitor and assist with the deployment of code through test environments towards production, handling any issues that arise
- Experience in data cleansing, schema reconciliation, and related tools (e.g. Uber Paricon)
- API scalability, load balancing, and security (REST, MQTT, JSON, GraphQL, encryption, key management)
- Strong background in data and analytics systems, primarily the Apache big data toolset (e.g. HBase, Cassandra, Kafka and Spark).
- Experience in architecting and building connectivity data systems and integrations.
- Experience of integrating very large datasets with line-of-business data.
- Proven track record of delivering strategic solutions for end-users.
- Exposure to CI/CD development scenarios (Jenkins, Jira, Nexus, Git, canary deployment)
- Experience in scalability/failure testing (Simian Army/Chaos Monkey)
- Proven track record of developing robust requirements specifications.
- Flexible and dynamic approach to development.
- The ability to quickly adopt new concepts, languages and techniques and convey the benefits to others.
- Good understanding and experience of agile application development practices.
- Ability to communicate complex ideas simply.
- Proven ability to work in an interdisciplinary team.