Hadoop Developer

Description:

Are you an experienced Hadoop Developer, excited to work with an international team and motivated by cutting-edge assignments? Then we want you to join WebCreek as we innovate solutions for world-class clients like Shell, Hewlett-Packard, and Nike.


In addition to wielding your ninja-like tech expertise, you’ll be:

  • A high-energy, versatile contributor with both the development skills and the business acumen to work quickly and accurately.
  • A champion of innovative, agile development practices, experimenting with the latest technologies and trends to produce amazing solutions.
  • A self-starter and team player, capable of working with a team of Architects, Developers, Business/Data Analysts, QA, and client stakeholders.
  • Cross-functional, able to tackle any environment and willing to jump into other development roles as needed.
  • Leading projects and directing team activities related to special initiatives or operations.
  • Collaborating with programmers in diverse geographical locations to coordinate delivery.
  • Proficient in distributed computing principles.
  • A strong written and verbal communicator.


We know that you’ll already be…:

  • Experienced with Hadoop technologies (HDFS, Hive, Impala, MapReduce, Pig, YARN).
  • A master of the MapR Converged Data Platform.
  • Familiar with HBase.
  • An excellent analytical thinker with a strong interest in algorithms.
  • Skilled in databases, SQL, ETL modeling, and data analysis.
  • Bringing 3 to 5 years of experience.


...with a commanding knowledge of: 

  • Databases from a development/ETL perspective
  • Populating dimensional models for reporting and analytics
  • Core Hadoop concepts and technologies
  • When to use which tool (Spark, Hive, or Impala vs. ETL tools), with a working knowledge of each
  • Spark programming
  • Hive programming
  • Hadoop table design
  • Hadoop storage formats and techniques such as Avro, Parquet, ORC, bucketing, UDFs, partitioning strategies, and statistics refreshes
  • Troubleshooting Hadoop issues related to ETL jobs, Spark, HDFS, YARN, query optimization, memory, and CPU
  • OLTP and OLAP database processing environments



And we’d be even more stoked if you knew:

  • Power BI
  • Apache Kafka
  • Hive
  • Spark SQL
  • NoSQL Databases


Worth your weight in gold:


Salary is commensurate with experience. We value the knowledge you bring to the table and invest in our teams to help them grow!



Questions about us?


Our company was established back in 1996, with HQ in Houston, Texas. Today we are more than 100 people in one steadily growing team, with offices across Latin America (Mexico, Peru, Colombia, and Ecuador) and in Lviv, Ukraine.


Visit our website: http://webcreek.com


Please send your CV to jobs@webcreek.com


