Ref: LR04182022_1650292835

Data Engineer

USA, New York

  • 150,000 to 200,000 USD
  • Engineer Role
  • Skills: AWS, S3, Spark, Kinesis, Kafka, Java, Scala, Python, Glue, SageMaker, EMR, Lambda, Step Functions, CloudFormation / Terraform; well versed in designing ELT/ETL frameworks and Apache Spark
  • Level: Mid-level

Job description

This is a REMOTE full-time position. NO C2C OR CONTRACTS.

Data Engineer Position Overview

As a Data Engineer, you will develop and enhance real-time data flow pipelines and enable sophisticated analysis of data at rest in multiple data lakes, all while meeting strict performance and throughput requirements. You will also work closely with other data engineers, data scientists, and security experts to bring new ideas in data exploration, analytics, and machine learning to fruition as product features that enable new ways of catching malicious actors and help protect our customers from various forms of exploits and abuse.

Responsibilities

* Build and enhance an optimal real-time data pipeline architecture using technologies such as Spark Streaming, Kafka Streams, Kafka messaging, Elasticsearch, and other big data technologies (a short Spark Structured Streaming sketch follows this list).
* Identify, design, and implement improvements to the data pipelines to achieve ever-higher throughput and scalability.
* Work with data scientists and security experts to deliver greater functionality in our core products.
* Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
* Work within an Agile workflow (Jira) to organize tasks and collaborate with other team members.
* Work in a Test-Driven Development environment focused on producing reliable, well-documented production code.
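
To illustrate the kind of work in the first responsibility, here is a minimal Spark Structured Streaming sketch in Scala that consumes events from a Kafka topic and maintains a windowed count. The broker address, topic name ("events"), and event schema are illustrative assumptions, not this team's actual pipeline.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object EventCountStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("EventCountStream").getOrCreate()
    import spark.implicits._

    // Assumed event shape; a real pipeline would share this schema with producers.
    val schema = new StructType()
      .add("userId", StringType)
      .add("action", StringType)
      .add("ts", TimestampType)

    // Read the raw Kafka stream and parse the JSON payload.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker address
      .option("subscribe", "events")                       // assumed topic name
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Count actions in tumbling one-minute windows, tolerating 2 minutes of late data.
    val counts = events
      .withWatermark("ts", "2 minutes")
      .groupBy(window($"ts", "1 minute"), $"action")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}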

Requirements

* Bachelor's degree or equivalent experience in Computer Science or another relevant field.
* Expert-level experience with programming languages such as Java, Scala, or Kotlin.
* A minimum of 4 years of experience building and optimizing 'big data' pipelines, architectures, and data sets.
* Experience with message queuing, stream processing, and highly scalable 'big data' data stores.
* Experience with big data tools: Spark, Kafka, Elasticsearch, Hadoop, etc.
* Experience with stream-processing systems: Flink, Spark Streaming, Kafka Streams, etc. (a minimal Kafka Streams sketch follows this list).
* Experience with cloud services such as AWS EC2, EMR, and EKS is a plus.
* Experience working with Docker and Kubernetes is a plus.
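
For reference, a minimal Kafka Streams sketch in Scala (calling the Java API directly) of the kind of stream processing named above. The application id, broker address, and topic names ("raw-events", "alerts") are illustrative assumptions only.

import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}
import org.apache.kafka.streams.kstream.Produced

object AlertFilterApp {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "alert-filter")      // assumed app id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)

    val builder = new StreamsBuilder()
    // Route records whose value mentions "malicious" to a separate alerts topic.
    builder.stream[String, String]("raw-events")
      .filter((_, value) => value.contains("malicious"))
      .to("alerts", Produced.`with`(Serdes.String(), Serdes.String()))

    val streams = new KafkaStreams(builder.build(), props)
    streams.start()
    sys.addShutdownHook(streams.close())
  }
}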