We are looking for a Data Engineer to advance our client's data pipeline, enabling business analytics, customer-facing analytics, and machine learning product features. The Data Engineer will join the Data Science team and integrate numerous data systems into a unified data warehouse capable of serving business intelligence and machine learning algorithms. You will apply your foundational business-domain and data-architecture knowledge to become quickly comfortable with our data, commit fully to data quality, develop scalable data pipelines, and design a data warehouse with efficient data models for querying that data. The Data Engineer will collaborate with our business operations team, data scientists, and software engineers to build and optimize an innovative product that our customers love to use every day.
This job is not only about how well you develop; it's about how you lend your positivity, presence, and skill set to an energized environment and a highly collaborative team. A strong sense of humor is required; sarcasm detection skills are a plus.
Responsibilities:
* Translate business requirements into technical requirements, and develop data pipelines from many industry and proprietary systems into a unified data warehouse.
* Collaborate with business operations, software engineers, data analysts, and data scientists.
* Build scalable, maintainable data pipelines that extract, transform, and load (ETL, e.g., DMS/Stitch) data into high-performance data warehouses (Redshift/Snowflake).
* Implement the monitoring, auditing, and alerting capabilities (CloudWatch) that ensure the ongoing maintenance and quality of a data product.
Requirements:
* 4+ years of experience with a deep understanding of data engineering concepts and database design.
* Advanced SQL knowledge for working with relational data (Postgres), plus programming experience (Java/Python) working with unstructured data and/or APIs.
* Experience with ETL and/or other integration tooling (DMS/Stitch).
* Experience with data analysis and visualization tools (Mode).
* Working knowledge of message queuing (SNS/SQS/RabbitMQ) and stream processing (Kinesis).
* Strong communication skills, a positive attitude, and empathy.
* Self-awareness and a desire to continually improve.
* Desire to write and maintain a clean and well-tested code base, avoiding tech debt.
* Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or relevant work experience.
Preferred but not required:
* Experience with business operations tools such as Salesforce, Zuora, Hubspot, etc.
* Experience with Apache Spark.
* General understanding of the data science and machine learning technology landscape.
* Experience with AWS services including Lambda, DynamoDB, etc.
* Experience with DevOps principles such as CI/CD and IaC (Terraform).
Benefits to you:
* Competitive salary
* Employee stock option plan
* Generous health and commuter benefits
* Dog-friendly office
* The chance to contribute to an upbeat, fully engaged culture