Hadoop Systems Architect with over 3 years of hands-on architecture and development experience with Hadoop technologies such as Spark, Hive, and MapReduce, and NoSQL databases such as HBase.
· Experience designing and developing data ingestion and processing/transformation frameworks leveraging Hadoop Open Source tools/technologies
· At least 3 years of experience in the Big Data space on the Hortonworks distribution
· Experience with a variety of data ingestion tools, e.g. Apache NiFi, Sqoop, and Flume
· Experience with Big Data processing frameworks
· Must have hands-on experience with Spark Streaming, Spark SQL, and Kafka for real-time data processing
· Well-versed in the development challenges inherent with highly scalable, highly available, and highly resilient systems
· Expert-level understanding of Hadoop ecosystem components (Hive, Oozie, Spark, HBase, Tez, Kerberos), including their internal workings, interactions, and debugging techniques, is a must
· Experience designing security architectures involving LDAP, AD, Kerberos, Knox, and Ranger
· Performance tuning of various Hadoop components
· Implementation of Hadoop best practices
· Deep knowledge of Hadoop file formats (e.g. Avro, Parquet, ORC)
· Working experience in DevOps/Agile environments is highly desired
· Experience with Bitbucket
· Working knowledge of microservice, event-driven, and Lambda architectures
· Working knowledge of MPP (massively parallel processing) data processing design, SQL, BI tools, and data management
· Coding experience with Scala and Python
· Demonstrated success working with cross-functional teams
· Design data flows from Kafka event streams to HDFS, HBase, and Hive data stores
· Support Scrum teams day to day with code reviews and detailed design walkthroughs
· Perform hands-on POCs and tool evaluations
· Contribute to performance tuning efforts
· Analytics implementation experience is nice to have
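For illustration, the core real-time data flow named in the requirements above (Kafka → Spark Structured Streaming → HDFS, with Hive tables over the landed data) can be sketched in Scala roughly as follows. This is a minimal sketch, not a reference implementation: the broker address, topic name, and HDFS paths are hypothetical placeholders, and it assumes a Spark runtime with the Kafka connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToHdfsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-sketch")
      .getOrCreate()

    // Read a Kafka topic as a streaming DataFrame.
    // "broker1:9092" and "events" are placeholder values.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land the stream as Parquet files on HDFS; a Hive external
    // table can then be defined over the output directory.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/events")           // placeholder path
      .option("checkpointLocation", "hdfs:///chk/events") // placeholder path
      .start()

    query.awaitTermination()
  }
}
```

In practice the sink and format vary by use case (e.g. HBase via a connector instead of Parquet on HDFS), but the readStream/writeStream shape above is the common pattern for this kind of pipeline.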