Ref: 140922SMJFI_1665046804

Lead Data Engineer

England, Tyne and Wear

Job description

£90,000 - £110,000

Newcastle/Remote

Description

Would you like to be part of a fast-growing technology and engineering organisation that nurtures an open, collaborative learning environment? Are you passionate about technology, people and developing your coding skills? Whatever your aspirations, they are aiming to create the best engineering consultancy in the UK and are looking for brilliant engineers to be part of the journey.

This company helps organisations solve their biggest, most exciting engineering problems. They have created banks from scratch on Kubernetes and AWS, built streaming analytics solutions that protect the country, delivered platforms that enable whole organisations to move to AWS and Azure, and everything in between. They do all this in a work environment where regular social events, inclusivity and an ego-free culture mean they've been officially voted a "Great Place to Work" for five years in a row.



Here's what you will do

* You will solve the problems that others cannot.
* You will also spend a day a week working on a combination of internal products and your own development.
* You'll create data platforms based around modern, cloud-native technologies:
  * Languages: Python or Scala
  * Cloud Platforms: AWS, Azure or GCP
  * Streaming Data: Kafka or Kinesis
  * Data Processing: Spark or Pandas
  * SQL Databases: SQL Server, PostgreSQL, MySQL or similar
  * Data Warehouses: Synapse, Redshift or BigQuery
  * Pipeline Orchestration: Airflow, Azure Data Factory or similar
  * Analytics Platforms: Dataiku or Databricks
  * And more…

Who you are

You are perceptive, personable, culturally sensitive and demonstrate a high degree of emotional intelligence. Being able to work collaboratively in a matrix organisation is essential, as is the ability to build data platforms using either cloud-native products or commercial data analytics and data warehouse software.

You will also display:

* Demonstrable experience of building data pipelines using Spark or Pandas
* Experience working with one or more of the main cloud providers (AWS, Azure or Google)
* Experience of big data platforms (EMR, Databricks or DataProc)
* Experience building data platforms such as Data Lakes, Data Warehouses or Data Meshes
* An understanding of Data Security and Data Governance principles
* A drive for self-improvement and learning, including learning new programming languages
* A pragmatic approach to solving problems
* Experience supporting and operating production systems

It would be great if you had any of the following desirable skills:

* Experience of building automated data quality checks and metrics
* Experience creating and/or maintaining production software delivery pipelines using common CI/CD tools (GitHub Actions, Azure DevOps, Jenkins, CircleCI etc.)
* Experience productionising machine learning algorithms
* Experience with Infrastructure as Code (Terraform, CloudFormation, ARM templates etc.)
* Experience with data reporting and visualisation tools (Power BI, Tableau, Qlik etc.)



If this is of interest, please apply below or reach out to Steven Mckay at Jefferson Frank: s.mckay@jeffersonfrank.com