Are you a Big Data Engineer or Developer who can deliver consulting services, including planning, designing, and implementing new solutions using the latest Big Data technologies? Do you want to work remotely to implement and help develop cutting-edge Big Data solutions, creating data pipelines that migrate data from customers' on-premises systems into cloud-hosted Enterprise Data Platforms? Do you want to work on large-scale custom Big Data consulting projects? If you have a passion for big data and solving complex problems, then this could be the job for you!
Role & Responsibilities
* Working with our Data Architects to implement data pipelines.
* Working with our Big Data Principal Architects on the development of both proofs of concept and complete implementations.
* Working on complex and varied Big Data projects, including tasks such as collecting, parsing, managing, analyzing, and visualizing very large datasets.
* Translating complex functional and technical requirements into detailed designs.
* Writing high-performance, reliable and maintainable code.
* Performing data processing requirements analysis.
* Performance tuning for batch and real-time data processing.
* Securing components of clients' Big Data platforms.
* Diagnosing and troubleshooting operational issues.
* Performing health checks and configuration reviews.
* Developing data pipelines: ingestion, transformation, and cleansing (a minimal sketch follows this list).
* Integrating data flows with external systems.
* Integrating with data access tools and products.
* Assisting application developers and advising on efficient data access and manipulation.
* Defining and implementing efficient operational processes.
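To give a flavor of the pipeline work described above, here is a minimal PySpark sketch of an ingest-cleanse-load job. The bucket paths, column names, and filter rules are hypothetical placeholders for illustration, not details of any actual client engagement.

```python
# Minimal PySpark sketch: ingest raw CSV exports, cleanse them, and write
# partitioned Parquet. All paths and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Ingest: read raw on-prem exports staged as CSV.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform and cleanse: deduplicate, normalize types, drop bad rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet for the cloud data platform.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```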
Skills & Qualifications
While we realize you might not have everything on the list, the successful candidate for the Big Data Developer role will likely have at least 3 years' experience in similar roles. The position requires specialized knowledge and experience in the following:
* Experience building data pipelines in any public cloud (AWS Glue, GCP Dataflow, Azure Data Factory) or equivalent.
* Experience writing ETL jobs (with any popular tools).
* Experience in data modeling, data design, and persistence (e.g., warehousing, data marts, data lakes).
* Strong knowledge of Big Data architectures and distributed data processing frameworks: Hadoop, Spark, Kafka, Hive.
* Experience and working knowledge of development platforms, frameworks, and languages such as Java, Python, Scala, and SQL.
* Experience with Apache Airflow, Oozie, and NiFi would be great (a minimal DAG sketch follows this list).
* General knowledge of modern data center and cloud infrastructure, including server hardware, networking, and storage.
* Strong written and verbal English communication skills.
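As a taste of the orchestration side, here is a minimal Apache Airflow (2.x) sketch that chains an ingest step and a transform step into a daily DAG. The DAG id, schedule, and task bodies are hypothetical placeholders, not a real production workflow.

```python
# Minimal Airflow 2.x sketch: a daily DAG that runs an ingest step, then a
# transform step. Task bodies are placeholders for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pulling raw data from the source system")  # placeholder


def transform():
    print("cleansing and loading into the data platform")  # placeholder


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # run transform only after ingest succeeds
```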
Bonus Point Skills & Qualifications
* Experience with BI platforms, reporting tools, data visualization products, and ETL.
* Experience with data streaming frameworks.
* DevOps experience with a good understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc.).
* Experience with HBase.
* Experience in data management best practices, real-time and batch data integration, and data rationalization.
Benefits
* Remote Flexibility! Work from home twice a week!
* Amazing Team! Grow and learn by working with some of the industry's top talent!
* Competitive Pay! Go home with heaps of money for a job well done!
Minimum - $45/hour
Maximum - $65/hour
12-Month Contract Engagement (either Corp-to-Corp or W2)
Possibility of Contract to Hire
United States Citizen or Permanent Resident
If this is the kind of role you are looking to get into and you want to fast-track your application, introduce yourself and send your resume directly to J.Wilson@JeffersonFrank.com