Data Engineer
Type: PERM
Location: Westlake Village, CA
Summary:
Retail corporation seeking a Data Engineer who will take a strong leadership role in developing the enterprise data architecture of its Digital and Retail businesses. The candidate will work across different company brands, providing technical guidance and architecture and enforcing technical standards. The role provides assessments and prototyping of new concepts and technologies, and often develops capabilities to be handed off to operational development teams. Finally, the ideal candidate will analyze complex problems and be an active member in enterprise technology design decisions.
Responsibilities:
* Accountability for modernization, migration/transformation to a cloud data platform
* Design and build reliable, scalable data infrastructure with leading privacy and security techniques to safeguard data
* Architect scalable, secure, low-latency, resilient, and cost-effective solutions for enabling predictive and prescriptive analytics across the organization
* Design/architect frameworks to operationalize ML models through serverless architecture and support unsupervised continuous-training models
* Take over and scale our data models (Tableau, DynamoDB, Kibana)
* Communicate data-backed findings to a diverse constituency of internal and external stakeholders
* Build frameworks for both real-time and batch data ingestion pipelines using best practices in data modeling and ETL/ELT processes, and hand off to data engineers
* Participate in technical decisions and collaborate with talented peers
* Review code and implementations and give meaningful feedback that helps others build better solutions
* Help drive technology direction and choice of technologies by making recommendations based on experience and research
Qualifications:
* Experience designing and implementing AWS big data and analytics solutions in large digital and retail environments
* Experience in distributed architectures such as Microservices, SOA, RESTful APIs, and data integration architectures.
* AWS (services: Redshift, Lambda, Apache Pinot, SageMaker, Kinesis/Kafka, S3, Glue, and Athena)
* Python programming
* Data Pipelines/ETL Pipelines
* SQL, Spark, Snowflake or Redshift
* AWS Certified Big Data - Specialty certification is a nice to have
APPLY TODAY!
*Please note: This opportunity is not open to C2C, all candidates MUST be based in the US, and it is open only to candidates who do not need sponsorship. Third-party applications will not be considered.*
