I'm a specialist AWS recruiter working with an exciting, rapidly growing, cloud-native AWS Consulting Partner that is looking to double its headcount over the next 12 to 18 months.
The company is an all-in AWS cloud-native services provider focused on helping customers utilise the capabilities of the cloud to achieve Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimisation. They specialise in Cloud Migrations, Containers, Serverless, Data and Analytics, ML/AI, and High-Performance Compute.
They work closely with AWS Start-up, Mid-Market, and Enterprise customers. They will soon be a Premier Partner; AWS rates them highly and refers a significant amount of business their way.
Role: Data Architect
The successful candidate will build data solutions using state-of-the-art technologies to acquire, ingest, and transform big data sets.
* Partner with our users and other data product teams to understand their needs and build impactful data/analytics solutions.
* Design and build data pipelines to support applications and data science projects, following software engineering best practices.
* Design and develop data applications using big data technologies (Hadoop, AWS) to ingest, process, and analyse large disparate datasets.
* Build robust data pipelines in the cloud using Airflow, Spark/EMR, Kinesis, Kafka, Lambda, or other technologies.
* Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS 'big data' technologies.
Requirements:
* Experience delivering data migration projects on AWS (preferred); experience with other cloud platforms will also be considered.
* Strong SQL and ETL experience and understanding.
* Experience with ETL/ELT tools such as Informatica, Airflow, AWS Glue, etc.
* Must have experience writing core data transformations in Python.
If this sounds like it may be of interest, please let me know.