Data Engineers (Sr./Mid) @ AWS Technology SaaS Partner - Remote (US or Canada)
One of our best clients is looking for Data Engineers at senior and mid-level positions to join their growing team. These are fully remote, full-time, newly created positions with a Series-C funded, AI/ML-focused SaaS company that is growing rapidly and looking for the best and brightest.
Job Requirements, Experience Required:
* Expert-level experience with programming languages such as Java, Scala, or Kotlin.
* Minimum 4 years of experience building and optimizing 'big data' pipelines, architectures, and data sets.
* Experience with message queuing, stream processing, and highly scalable 'big data' data stores.
* Strong experience with big data tools such as Spark, Kafka, Elasticsearch, and Hadoop.
* Experience with stream-processing systems such as Flink, Spark Streaming, and Kafka Streams.
* Experience with cloud services such as AWS EC2, EMR, and EKS is a plus.
* Experience working with Docker and Kubernetes is a plus.
* Experience building, architecting, and optimizing 'big data' pipelines leveraging a mix of Java, Scala, Python, Glue, SageMaker, EMR, Lambda, Step Functions, and CloudFormation/Terraform.
* Well versed in Kafka streaming pipeline development and maintenance.
* Well versed in designing ELT/ETL frameworks focused on Apache Spark streaming and batch processing.
* Well versed in integrating data from diverse sources and formats into the data lake in structured formats such as Parquet and CSV.
* Experience manipulating, processing, and extracting value from large, disconnected datasets using ETL/ELT methodologies and technologies such as Delta Lake, Databricks, and Matillion, and integrating data from silos into a master data source.
* Well versed in writing Spark transformation jobs that handle complex nested JSON formats and join streaming data with Parquet files in the data lake.
* Good knowledge of data lake administration using AWS Lake Formation and roles.
* Support production workloads (on S3, Spark, Kinesis, Kafka, etc.) and build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
* Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
* Experience building data products incrementally and integrating and managing datasets from multiple sources
* 3-5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Company & Employee Benefits:
* Working with a well-established, highly valued company and a world-class team of senior developers and software professionals who act as mentors as you develop.
* Working with an array of interesting products to own and an entrepreneurial environment to thrive in.
* Exposure to a wide variety of modern web technologies.
* 401K Match & Annual Contribution
* Excellent Employee/Family Benefits (Medical, Dental, Health)
* Unlimited PTO
* Remote work capacity
* Flexible salary compensation depending on position and experience.
If you or someone you know is interested in this position, please send your resume directly to firstname.lastname@example.org or call 480-530-2031.
Jefferson Frank is the global leader for Amazon Web Services recruitment, advertising more AWS roles than any other agency. We deal with both AWS Partners & End Users throughout North America. By specializing solely in placing candidates in the AWS market we have built relationships with most of the key employers in North America and have a complete understanding of where the best opportunities and AWS jobs are.
Jefferson Frank is acting as an Employment Agency in relation to this vacancy.