Ref: RedshiftOH_1630506078

Redshift Data Engineer

England, London

Job description


Good Afternoon,

I am recruiting for the contract role below. Please send me your CV if this sounds interesting.

Role - Redshift Data Engineer (Redshift Specialist)

Rate - Negotiable

Length - 3 Months + Extensions

Location - 100% Remote

Start - ASAP



The Role

I am currently working with an AWS Premier Consulting Partner who is looking for a Data Engineer experienced with Matillion to join their team.

Extensive knowledge across the AWS tech stack is expected, and any AWS certifications are a big bonus, but the client's stand-out requirement is proven experience using Matillion in a commercial environment.

The Project

The purpose of this role within the Customer Platform team is to provide an interface/capability for delivering external data requirements - sourcing data that is not available within the Customer Platform team itself. These requirements relate to marketing and support, and the data will be sourced from the Hive data platform.

You will report to the Customer Platform Technical Architect and work closely with data scientists, data analysts, DevOps and the team leads who own the Hive data platform, ensuring that the business receives the solutions it needs.

What you will be doing

* Designing data systems that will scale to large numbers of users.
* Interfacing with data scientists and porting machine learning algorithms to production systems.
* Peer reviewing other engineers' code to ensure quality.
* Driving best practices across teams and products for data-driven product development and delivery.
* Using GitHub where required to enable code re-use across the team.
* Working in an Agile environment.
* Working alongside other members of the Data Team on large projects to meet deadlines and requirements set by our stakeholders.

Key Skills Needed

* Excellent experience and knowledge of Redshift
* Good knowledge of Java, Scala or Python and concurrent programming
* Quick learner with eagerness to learn new things and experiment with new technologies
* Willing to learn Data Science algorithms and produce code to implement them at scale
* Experience working with the following technologies:
* Spark
* AWS
* Kafka
* Cassandra
* Kubernetes
* Redshift

* Familiarity with deployment & container technologies including Jenkins, Docker, Serverless
* Interest in real-time and distributed systems

Nice To Haves

* Excellent general problem-solving abilities
* Strong analytical mind
* Resilience and Tenacity
* Proactive, initiating attitude - able to take prompt action to accomplish objectives and goals beyond what is required.
* Self-awareness/Personal Development Orientation

AWS certifications are very useful.

I look forward to seeing your CVs!



Kind Regards

Ollie