Jefferson Frank are currently supporting a client looking for an experienced Data Engineer to join a growing engineering team and lead on Data Engineering work. This is a broad role: not only supporting blue-chip private and public sector clients on cutting-edge data engineering and data-driven marketing solutions, but also helping to create and scale products and to build the next generation of Data Engineering talent.
About the client:
They are a team of data specialists who design, build, and embed cutting-edge data solutions that create value. By combining in-house expertise with an extensive network of specialist and academic partnerships, they help clients become more efficient and more effective through data.
They have the ability to support across all parts of the transformation journey. From strategy and engineering through to analysis, machine learning and BI, they bring all the necessary skillsets together to produce genuine business impact and strong ROI for clients.
About the role:
The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data-intensive software systems and building them from the ground up, and who understands the value of good-quality software engineering in data. The Data Engineer will work with software developers, engineers and data scientists on data initiatives, ensuring the creation and maintenance of well-designed ELT & ETL processes, data architecture, semantic layers and APIs for both internal and external clients, primarily in AWS but also Azure & GCP.
You will have good potential for personal and career development, learning from and sharing knowledge with a talented, highly skilled and internationally diverse team of colleagues. All of this while embarking on an exciting journey with a pioneering, fast-growing company situated at the heart of London's Tech City.
* Working within a team of engineers, data scientists and other stakeholders to identify and implement the best solutions for the analytical needs of a project
* Utilising & improving software engineering best practices: version control, integration & unit testing, ticket management etc.
* Delivering high quality code to solve data problems
* Communicating complex concepts both within and outside the technical teams
* Understanding & researching new technologies to ensure we implement the best solution for the problem
* Communicating roadmaps, deliverables & ticket updates with clients
Person Specification: Knowledge, Experience and Skills
You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimising or even re-designing our company's data architecture to support our next generation of products and data initiatives. They will be enthused by teaching data engineering to others and be happy to challenge inefficient processes.
You must also be comfortable with new technologies and understand how to optimise your own learning so that you can deliver efficient solutions in technologies you may not have used before.
Knowledge & Experience
* Architected end-to-end solutions for clients in a fast-paced, agile environment
* Experience of working collaboratively across teams to support shared learning, platform development, aligned understanding, and consistent coding standards
* Strong background in handling relational, semi-structured and unstructured data
* Worked extensively in building scalable and high-performing code, fit for production
* Determined technology choices and infrastructure on a project-by-project, case-by-case basis to ensure solutions are delivered without compromise
* Experience of developing training or teaching materials is an advantage
Skills & Competencies
* Strong Python programmer
* Deep AWS understanding across core and extended services
* Understanding of IaC tooling (Terraform, Serverless)
* Strong SQL knowledge and experience, both DML & DDL across a variety of dialects
* Experience with at least one business intelligence tool - Tableau or Power BI would be ideal
* Knowledge of modern data platforms and frameworks would be beneficial (dbt, Snowflake, Databricks)
* Understanding of DevOps principles and paradigms, ideally with proven skills in tooling such as GitHub Actions & AWS CodePipeline
* Strong problem-solving skills, willingness to take ownership and risks, and enthusiasm in the face of technical challenges
* Strong project management and organisational skills, with the ability to prioritise and meet deadlines in a calm and effective way
* Excellent interpersonal skills and the ability to work in a team environment
* Strong communication skills, with the ability to support client meetings and communicate data engineering techniques to develop the skills of others
* Client focused, with the ability to interpret and understand client needs and deliver results
* The ability to understand the wider company operations, goals and objectives and where data engineering can add value
* A proactive approach and the ability to be self-managing and innovative