Cloud Data Engineer x2, Hybrid, East London - Data Lakes, PySpark, SQL, Azure, Python, AWS, Databricks, Agile
We are looking for experienced data engineers to take responsibility for the design, development, and maintenance of applications. You will work alongside other engineers and developers on different layers of the infrastructure, so a commitment to collaborative problem-solving, sophisticated design, and the creation of quality products is essential.
Role & Responsibilities
* Collaborate with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines so that data is shared effectively across various business systems.
* Build ETL/ELT and ingestion pipelines and design optimal data storage and analytics solutions using cloud and on-premises technologies.
* Ensure the design, code, and procedural aspects of the solution are production-ready in terms of operational, security, and compliance standards.
* Participate in day-to-day project and product delivery status meetings, and provide technical support for faster resolution of issues.
Skills and Experience
* Demonstrable design and development experience with big data technologies such as Spark/Flink and Kafka.
* Proficient in Python, PySpark, or Java/Scala.
* Hands-on experience with some of the following technologies:
* Azure/AWS - Data Lake Projects
* Spring/Guice or any other DI framework.
* RESTful Web Services.
* Proficient in querying and manipulating data across various databases (relational and big data).
* Experience of writing effective and maintainable unit and integration tests for ingestion pipelines.
* Experience of using static analysis and code quality tools and building CI/CD pipelines.
If you are interested and meet the above requirements, please send your latest CV and call 0191 338 7568.