Denver, Colorado - Full Time - Posted: Friday, 14 September 2018

Job Description

Designs, develops, and implements infrastructure to provide highly complex, reliable, and scalable databases that meet the client organization's objectives and requirements. Analyzes business requirements for database design and executes changes to the database as required. Requires 6-8 years of experience.

Position Overview:
As a Big Data (Hadoop) Architect, you will be responsible for Cloudera Hadoop development and high-speed querying; managing and deploying Flume, Hive, and Pig; testing prototypes and overseeing handover to operational teams; and proposing best practices and standards. Requires expertise in designing, building, installing, configuring, and developing on a Cloudera Hadoop ecosystem.

Principal Duties and Responsibilities:
Work with cross-functional consulting teams within the data science and analytics team to design, develop, and execute solutions that derive business insights and solve clients' operational and strategic problems.
Build the platform using cutting-edge capabilities and emerging technologies, including a data lake and the Cloudera data platform, which will be used by thousands of users. Work in a Scrum-based Agile team environment using Hadoop.
Install and configure the Hadoop and HDFS environment using the Cloudera data platform. Create ETL and data ingest jobs using MapReduce, Pig, or Hive (an illustrative sketch follows this list).
Work with and integrate multiple types of data, including unstructured, structured, and streaming.
Support the development of data science and analytics solutions and products that improve existing processes and decision making.
Build internal capabilities to better serve end-clients and demonstrate thought leadership in latest innovations in data science, big data, and advanced analytics.
Contribute to business and market development.
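
Illustrative example: the ETL and ingest duty above is concrete enough to sketch. The following is a minimal Scala sketch of such a job using Spark SQL with Hive support (Spark SQL is named later in the posting; the MapReduce, Pig, or Hive jobs mentioned here would follow the same read-transform-write shape). All paths, schemas, and table names are hypothetical.

    import org.apache.spark.sql.SparkSession

    object ClickstreamIngest {
      def main(args: Array[String]): Unit = {
        // Hive support lets Spark read and write tables managed by the Hive metastore.
        val spark = SparkSession.builder()
          .appName("clickstream-ingest")
          .enableHiveSupport()
          .getOrCreate()

        // Land raw JSON events from HDFS (hypothetical path and schema).
        val raw = spark.read.json("hdfs:///data/raw/clickstream/2018-09-14/")
        raw.createOrReplaceTempView("raw_events")

        // Light cleanup: keep valid events and normalize the timestamp.
        val cleaned = spark.sql(
          """SELECT user_id,
            |       to_timestamp(event_ts) AS event_time,
            |       page_url
            |FROM raw_events
            |WHERE user_id IS NOT NULL""".stripMargin)

        // Append into a Hive-managed table so downstream Hive or Pig jobs can read it.
        cleaned.write.mode("append").saveAsTable("analytics.clickstream_clean")

        spark.stop()
      }
    }

Submitted with spark-submit against the cluster's resource manager, this is one typical shape of a scheduled ingest job on a Cloudera cluster.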

Specific skills and abilities:
Strong computer science and programming background
Deep experience in data modeling, enterprise data warehouses (EDW), star, snowflake, and other schemas, and cubing (OLAP) technologies
Ability to design and build data models and a semantic layer for accessing data sets
Ability to own a complete functional area - from analysis to design to development and complete support
Ability to translate high-level business requirements into detailed design
Build integrations between data systems (RESTful APIs, micro-batch, streaming) using technologies such as SnapLogic (iPaaS), Spark SQL, HQL, Sqoop, Kafka, Pig, and Storm (a streaming sketch follows this list)
Hands-on experience working with the Cloudera Hadoop ecosystem and technologies
Strong desire to learn a variety of technologies and processes with a "can do" attitude
Experience guiding and mentoring 5-8 developers on various tasks
Aptitude to identify, create, and use best practices and reusable elements
Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists
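
Illustrative example: for the integration item above, a common streaming pattern on this stack is Kafka feeding Spark Structured Streaming. A minimal Scala sketch, assuming the spark-sql-kafka connector is on the classpath; broker addresses, topic, and paths are hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    object OrdersStream {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-stream")
          .getOrCreate()

        // Subscribe to a Kafka topic (hypothetical brokers and topic name).
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "orders")
          .load()

        // Kafka delivers key/value as binary; cast the payload to text for parsing downstream.
        val payload = stream.select(col("value").cast("string").as("json"))

        // Micro-batch sink: append each batch to HDFS, with a checkpoint for fault tolerance.
        val query = payload.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/streams/orders/")
          .option("checkpointLocation", "hdfs:///checkpoints/orders/")
          .start()

        query.awaitTermination()
      }
    }

The checkpoint location is what lets the job resume from where it left off after a restart, which matters for the micro-batch and streaming modes the posting lists.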

Qualifications & Skills:
Bachelor's degree; Master's degree required.
Expertise with HBase, NoSQL, HDFS, Java MapReduce for Solr indexing (an indexing sketch follows this list), data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD.
Hands-on experience with Scala and Python.
7+ years of experience in programming and data engineering, with a minimum of 2 years of experience with Cloudera Hadoop.
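
Illustrative example: the Solr indexing mentioned above ultimately comes down to SolrJ calls like the following (written in Scala to match the other sketches; in a MapReduce indexing job this logic would run once per record). The Solr URL, core, and field names are hypothetical.

    import org.apache.solr.client.solrj.impl.HttpSolrClient
    import org.apache.solr.common.SolrInputDocument

    object SolrIndexer {
      def main(args: Array[String]): Unit = {
        // Hypothetical Solr base URL and core name.
        val client = new HttpSolrClient.Builder("http://localhost:8983/solr/products").build()

        // Build a document field by field, then send it to the core.
        val doc = new SolrInputDocument()
        doc.addField("id", "sku-1001")
        doc.addField("name", "example product")
        doc.addField("price", 19.99)
        client.add(doc)

        client.commit() // make the new document visible to searches
        client.close()
      }
    }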

Notes:
This is a 1-year contract with the potential for extension or conversion to a full-time employee (FTE) position.

Company Description:
We help enterprises transform themselves digitally.

PriceSenz
Denver, Colorado, United States of America
Category: IT
Job Reference: JS91C11F2E/516442424
Posted: 9/14/2018 6:14:43 PM

We strongly recommend that you never provide your bank account details to an advertiser during the job application process. Should you receive a request of this nature, please contact support, giving the advertiser's name and job reference.
