The ideal candidate for this position will have:
* 3+ years' experience in a similar role.
* Advanced knowledge of SQL.
* High proficiency in Python.
* Significant experience of working with cloud-based data infrastructure, such as Snowflake, Redshift and Databricks.
* Experience of working with a range of ETL/ELT tools within a modern data stack, as well as Apache Airflow.
* Experience of extracting data through API calls.
* High level of pride in the quality of their code and documentation.
* Desire to take on and solve complex problems, and to decompose solutions into tasks that can be managed within an Agile development framework.
* Highly proactive nature, taking immediate steps to communicate issues and find solutions as they arise.
* Familiarity with data regulations, including the GDPR and the Data Protection Act 2018.
The successful candidate will have experience of taking the lead in designing, building and managing scalable solutions within a modern data stack. They will also relish working within a fast-paced, rapidly changing start-up environment, and love helping others to leverage data as they strive towards their goals. While the scope and focus of this role may change as we grow, your key responsibilities will include:
* Immerse yourself in our data and work collaboratively with our Data Analyst, Data Scientist and ProdEng Team as you grow your understanding of the data we capture and how we're currently using it.
* Spend time with key leaders around the business to understand their data requirements and develop best-in-class solutions.
* Take ownership of our existing data infrastructure and pipelines, including our PostgreSQL database and our Apache Airflow workflow management platform.
* Integrate new data from our Snappy and Hungrrr production DBs as new features are released, and from other systems and external sources as new requirements arise.
* Recommend scalable new tooling that will take the analysis prepared by the data team to the next level.
* Build and maintain data pipelines to process diverse data sets at scale and with low latency.
* Explore ways to measure and improve data quality, and to identify fraud patterns within our data.
* Create and maintain technical documentation of data flows and data structures within the analytics data stack.
* Ensure reliability and security, including implementing GDPR policies.
* Share your knowledge and expertise with others to help increase data literacy around the business.