Sr. AWS Big Data Engineer - Los Angeles
This opportunity will give you the chance to make an impact with your technical expertise and work with a growing team of forward-thinking, innovative colleagues. This is a thriving company, and one positioned for exceptional growth in 2018 and beyond. The team works with very large data sets and is looking for a talented data engineer to drive forward their data and analytics strategy, and in turn the business as a whole.
They are looking for an innovative Senior Data Engineer to architect and maintain the data systems and pipelines that support their efforts. The Senior Data Engineer would work alongside their existing Senior Data Engineer and use a variety of leading database technologies (AWS Redshift, MongoDB) and tools (AWS EC2, AWS S3, Python) to process and store their data. The role calls for expertise in managing AWS resources and in maintaining and expanding their Python-based data ingestion pipelines. There is also the opportunity to architect new data stores for their ever-growing data needs, so a creative, problem-solving mindset is highly desirable.
Responsibilities:
* Help maintain and enhance their robust data warehouse, which houses data from partners, vendors, and other sources.
* Manage and improve the data ingestion pipelines that feed their data warehouse, currently built in Python.
* Deploy and use AWS resources such as Redshift and RDS clusters and EC2 instances, and manage access to those resources through AWS Security Groups and VPCs.
* Collaborate with their Data Science team to help build out an automated data analytics pipeline in Python.
* Make recommendations and provide strategic support on ways to make data and database operations more efficient and effective.
Requirements:
* Experience managing large AWS database resources (RDS, Redshift, or DynamoDB), including setting up VPCs and Security Groups to control access to them.
* Excellent SQL skills, with experience building and interpreting complex queries.
* Excellent Python programming skills, with a track record of well-designed, maintainable code.
* Experience in database design and structure, with an emphasis on scalability.
* A strong desire to develop new and innovative ways to improve data storage and processing.
* Exposure to Big Data tools (Hadoop, Kafka, Spark, etc.).
If you or someone you know is interested in this position, please send your resume directly to email@example.com or call 480-530-2039. Ask for Sean! My client is looking to start the interview process as soon as possible.
Jefferson Frank is the Amazon Web Services (AWS) recruiter of choice. We work with organizations worldwide to find and deliver the best AWS professionals on the planet. Backed by private equity firm TPG Growth, we have a proven track record servicing the AWS permanent and contract recruitment market and, to date, have worked with over 30,000 organizations globally from our offices in North America, Europe, and Asia-Pacific.
At Jefferson Frank, our mission is simple: we want happy customers. Whether you're an AWS professional walking into your dream AWS job, or an organization hiring an incredible contractor for your cloud migration project, our goal is to deliver an unrivalled customer experience. Work with us and you'll get the personalized experience you deserve - one you'll simply not find at any other recruitment agency. At Jefferson Frank, we find great people great jobs in AWS.
I understand the need for discretion and would welcome the opportunity to speak to any Big Data and cloud analytics candidates who are considering a new career or job, either now or in the future. Confidentiality is of the utmost importance. For more information on available AWS Big Data jobs, as well as the cloud market, I can be contacted at 480-530-2039. Please see www.jeffersonfrank.com for more information.