Apply now to work for one of the largest banks in the world!
Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data & Fast Data applications:
- Building efficient storage for structured and unstructured data
- Transforming and aggregating data using data-processing technologies
- Developing and deploying distributed Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, and Kafka on the AWS Cloud
- Utilizing programming languages such as Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data-warehousing services such as Redshift
- Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
- Leveraging DevOps practices such as continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable rapid delivery of working code, using tools such as Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
- Writing unit tests and conducting reviews with other team members to ensure your code is rigorously designed, elegantly coded, and effectively tuned for performance
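For candidates less familiar with the MapReduce model referenced above, it can be sketched in plain Python with no Hadoop installation; the word-count input and the phase function names here are purely illustrative, not part of any framework API:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit an intermediate (word, 1) pair for every word in every record
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group intermediate values by key (done by the framework in Hadoop)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the grouped values; for word count, sum the ones
    return {word: sum(counts) for word, counts in groups.items()}

records = ["big data", "fast data"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
# counts == {"big": 1, "data": 2, "fast": 1}
```

In Hadoop itself, the map and reduce functions run in parallel across a cluster and the shuffle is handled by the framework; the data flow, however, is exactly this.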
Basic qualifications:
- At least 3 years of professional experience with big data platforms
- At least 1 year of experience with SQL, including relational databases and query authoring
- At least 1 year of experience with message queuing, stream processing, and big data stores
- At least 1 year of experience working with unstructured datasets
- 2+ years of experience with the Hadoop Stack
- 2+ years of experience with Cloud computing (AWS)
- 1+ years of experience with any big data visualization tools
- 2+ years of Python or Java development experience
- 1+ years of scripting experience
- 1+ years of experience with relational database systems and SQL (PostgreSQL or Redshift)
- 1+ years of UNIX/Linux experience