
    Ref: Data Engineer_1653492652

    Data Engineer (AWS, Kafka/Spark Streaming, Kotlin)

    USA, California, Sunnyvale

    • USD 170,000 to 190,000
    • Data Science Role
    • Skills: AWS, Kafka Streaming, Spark Streaming, Kotlin, Java
    • Seniority: Senior

    Job Description:

    As a Data Engineer, you will develop and enhance real-time data flow pipelines and enable sophisticated analysis of data at rest in multiple data lakes, while maintaining strict performance and throughput requirements. You will also work closely with other Data Engineers, Data Scientists, and Security experts to bring new ideas in data exploration, analytics, and machine learning to fruition as product features that enable new ways of catching malicious actors and help protect our customers from various forms of exploits and abuse.
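
    To give a flavor of the kind of real-time pipeline described above, here is a minimal Kafka Streams topology sketched in Kotlin that filters a stream of raw events down to those flagged as suspicious. The topic names, the JSON flag, and the string-matching rule are hypothetical placeholders for illustration, not details taken from this posting.

    ```kotlin
    import org.apache.kafka.common.serialization.Serdes
    import org.apache.kafka.streams.KafkaStreams
    import org.apache.kafka.streams.StreamsBuilder
    import org.apache.kafka.streams.StreamsConfig
    import org.apache.kafka.streams.kstream.Consumed
    import org.apache.kafka.streams.kstream.Produced
    import java.util.Properties

    fun main() {
        // Read raw events, keep only those flagged as suspicious (placeholder
        // rule), and forward them to a downstream topic for further analysis.
        val builder = StreamsBuilder()
        builder.stream("raw-events", Consumed.with(Serdes.String(), Serdes.String()))
            .filter { _, value -> value.contains("\"suspicious\":true") }
            .to("suspicious-events", Produced.with(Serdes.String(), Serdes.String()))

        val props = Properties().apply {
            put(StreamsConfig.APPLICATION_ID_CONFIG, "abuse-detection-sketch")
            put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        }

        val streams = KafkaStreams(builder.build(), props)
        streams.start()
        Runtime.getRuntime().addShutdownHook(Thread(streams::close))
    }
    ```

    A production pipeline would replace the string-matching rule with real detection logic and use serdes matched to the actual event schema.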

    Role & Responsibilities:

    * Build and enhance an optimal real-time data pipeline architecture using technologies such as Spark Streaming, Kafka Streams, Kafka messaging, Elasticsearch, and other big data technologies.
    * Identify, design, and implement improvements in the data pipelines to achieve ever-higher throughput and scalability.
    * Work with data scientists and security experts to strive for greater functionality in our core products.
    * Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
    * Work within an Agile workflow (Jira) to organize tasks and collaborate with other team members.
    * Work in a Test-Driven Development environment focused on producing reliable, well-documented production code; a sketch of such a test follows this list.
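
    As a hedged sketch of the test-driven workflow mentioned above, the following Kotlin unit test exercises the hypothetical filtering topology from the earlier sketch using Kafka's TopologyTestDriver, which runs a topology in-process without a broker. The topology, topic names, and event format are the same illustrative placeholders as before.

    ```kotlin
    import org.apache.kafka.common.serialization.Serdes
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.kafka.common.serialization.StringSerializer
    import org.apache.kafka.streams.StreamsBuilder
    import org.apache.kafka.streams.StreamsConfig
    import org.apache.kafka.streams.Topology
    import org.apache.kafka.streams.TopologyTestDriver
    import org.apache.kafka.streams.kstream.Consumed
    import org.apache.kafka.streams.kstream.Produced
    import java.util.Properties
    import kotlin.test.Test
    import kotlin.test.assertEquals
    import kotlin.test.assertTrue

    // Hypothetical topology under test: the same filtering logic as the earlier sketch.
    fun buildTopology(): Topology {
        val builder = StreamsBuilder()
        builder.stream("raw-events", Consumed.with(Serdes.String(), Serdes.String()))
            .filter { _, value -> value.contains("\"suspicious\":true") }
            .to("suspicious-events", Produced.with(Serdes.String(), Serdes.String()))
        return builder.build()
    }

    class SuspiciousEventFilterTest {
        @Test
        fun forwardsOnlyFlaggedEvents() {
            val props = Properties().apply {
                put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test")
                put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234")
            }
            // TopologyTestDriver pipes records through the topology synchronously.
            TopologyTestDriver(buildTopology(), props).use { driver ->
                val input = driver.createInputTopic("raw-events", StringSerializer(), StringSerializer())
                val output = driver.createOutputTopic("suspicious-events", StringDeserializer(), StringDeserializer())

                input.pipeInput("k1", """{"suspicious":true}""")
                input.pipeInput("k2", """{"suspicious":false}""")

                // Only the flagged event should reach the output topic.
                assertEquals("""{"suspicious":true}""", output.readValue())
                assertTrue(output.isEmpty)
            }
        }
    }
    ```

    Writing the test against the topology rather than a live cluster keeps the feedback loop fast, which is what makes test-driven development practical for streaming code.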

    Skills & Qualifications:

    * Expert-level experience with programming languages such as Java, Scala, or Kotlin.
    * Minimum 4 years of experience building and optimizing big data pipelines, architectures, and data sets.
    * Experience with message queuing, stream processing, and highly scalable big data stores.
    * Experience with big data tools such as Spark, Kafka, Elasticsearch, and Hadoop.
    * Experience with stream-processing systems such as Flink, Spark Streaming, and Kafka Streams.
    * Experience with cloud services such as AWS EC2, EMR, and EKS is a plus.
    * Experience working with Docker and Kubernetes is a plus.