Imply builds a data platform used at some of the largest companies in the world to answer complex questions over trillions of events and data points in less than a second. To power our platform, we build and maintain Apache Druid, an open-source real-time analytics database used at thousands of companies. Our customers use Imply to answer questions ranging from “What do tweens in North Carolina who listen to Justin Bieber like to buy?” to “What updates to our infrastructure caused the CPU to spike when certain customers from Europe hit our servers?” to “Why are we seeing an increase in traffic going through our backbone from Japan instead of being routed internally?”
We are a collaborative and supportive team. We measure our individual success by how well our team does and by how well we push each other to grow professionally. We believe teams should have significant say in what they build and how, and should therefore be responsible for the eventual success of what they build, whether that means ensuring customers can use it, salespeople can sell it, or support engineers can support it. At Imply, you'll have ownership over how what you build fits not just with the rest of the product but with the rest of the organization. We're flexible on location and have a work-from-home-as-needed culture.
We’re looking for talented database engineers to develop a next-generation analytics platform focused on interactivity and streaming data. As a database engineer at Imply, you will be heavily involved in the development and technical direction of the open-source Druid project.
You might work on:
- Implementing cutting-edge compression algorithms, storage formats, and other database optimizations.
- Building distributed ingestion systems that can handle throughput rates in the tens of millions of records per second.
- Implementing new query capabilities within Druid.
- Expanding Druid’s SQL capabilities.
Experience and skills:
- Bachelor’s degree in computer science, engineering, or a related field (or equivalent professional work experience).
- Experience developing high-concurrency, performance-oriented Java systems and using standard tools to tune, profile, and debug JVMs.
- Experience working on the internals of data warehouses such as Teradata, Snowflake, Redshift, or BigQuery; big data systems such as Hadoop, Presto, Spark, Cassandra, or Elasticsearch; or relational databases such as MySQL or PostgreSQL is a strong plus.
- Strong communication skills. Explaining complex technical concepts to designers, support staff, and other engineers is no problem for you.
- A history of open-source contributions is a plus. Being a committer on a data-related project is a big plus.
Imply is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, gender identity or expression, marital status, national origin, disability, protected veteran status, race, religion, pregnancy, sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.