Position: Data Engineer/Big Data Developer
Location: Houston, TX
Duration: 6-12 Months Contract
1. Develop Spark ETL/ELT jobs that read data from and write back to Snowflake.
2. Develop scripts to support the Release Management process for Spark job source code, with VSTS as the backend.
3. Develop Azure Data Factory pipelines to create, schedule, and manage Spark ETL jobs.
4. Develop and instrument Databricks job code with Azure Application Insights APIs to perform monitoring and alerting.
5. Develop Helm charts to create, install, and deploy NiFi on Azure Kubernetes Service.
6. Document, demonstrate, and present completed work during the end-of-sprint review.
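As an illustrative sketch of the NiFi-on-AKS responsibility above: a Helm chart deployment is driven by a values file like the one below. The chart layout, image tag, port, and storage class are assumptions for illustration, not details taken from this posting.

```yaml
# Hypothetical values.yaml for a NiFi Helm chart targeting AKS.
# All values below are illustrative assumptions.
replicaCount: 3
image:
  repository: apache/nifi      # official Apache NiFi image on Docker Hub
  tag: "1.23.2"
service:
  type: LoadBalancer           # expose NiFi UI via an Azure load balancer
  port: 8443
persistence:
  enabled: true
  storageClass: managed-premium   # AKS built-in premium managed-disk class
  size: 10Gi
```

A chart using these values would typically be installed or upgraded with `helm upgrade --install nifi ./nifi -f values.yaml --namespace nifi --create-namespace`.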
To do so, the skills required are:
1. Good coding skills, preferably with SparkSQL and PySpark; Java, Scala, etc. are also acceptable.
2. Good knowledge of Azure cloud services, e.g. Azure Databricks, Azure Data Factory, and Azure Kubernetes Service.
3. Good knowledge of Spark development, including Spark Streaming, SparkSQL, GraphX, Spark ML, etc.
4. Experience with Docker, Kubernetes, IaC, and DevOps principles.
5. Experience with NiFi, Datameer pipelines, and VisualBuilder workbooks.
6. Experience with the Databricks CLI and VSTS Release Management APIs.
7. Great personality, an eagerness to learn, and a collaborative spirit.
8. Should have successfully delivered engagements with large enterprise customers in the USA; members of past Devon projects preferred.
9. Good intelligence and freshness of ideas.
10. Experience with Continuous Integration and related tools (e.g. Jenkins, Hudson, Maven, VSTS).
11. Experience with Code Quality Governance related tools (Sonar, Gerrit, PMD, FindBugs, Checkstyle, Emma, Cobertura, etc.).
12. Experience with Source Code Management tools (GitHub, VSTS).
13. Knowledge of standard tools for optimizing and testing code.
14. Ability to operate effectively and independently in a dynamic, fluid environment.