Hoodline is creating the nearby button of the internet, first by organizing and distributing the world’s content, then by analyzing its data.
We’re looking for data engineers who can help us dig into all sorts of public and private data sets that we’re getting access to. We’re figuring out how cities really work today, then making predictions about how they’ll look in the future.
We currently offer a local content platform that provides relevant nearby articles, photos and videos for any app or site, helping users make better decisions about everything from where to eat lunch to where to live.
We partner with 200+ publishers, including ABC television, McClatchy, Advance Digital, TripAdvisor and Vice, and do our own reporting to discover insights for every location. We serve this information to partners' apps and sites, including Uber, Eventbrite and Yelp.
To distribute all this content, we use machine learning to tag each story against a context taxonomy of more than 20 categories, so people in 22 cities receive relevant information about everything from where to eat to where to move.
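For a flavor of this kind of work, taxonomy tagging can be framed as multi-label text classification. The sketch below is purely illustrative, assuming a scikit-learn-style pipeline; the categories and example stories are made up and this is not our production taxonomy, data or stack:

    # Illustrative sketch only: taxonomy tagging as multi-label
    # text classification. Categories and stories are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    stories = [
        "New ramen shop opens on Divisadero this weekend",
        "Two-bedroom rents dip in the Mission for a third month",
        "Street fair closes Valencia between 16th and 21st on Saturday",
    ]
    tags = [["food"], ["real estate"], ["events", "transit"]]

    binarizer = MultiLabelBinarizer()
    y = binarizer.fit_transform(tags)  # one binary column per category

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(stories, y)

    new_story = ["Taco spot and apartment listings coming to 24th Street"]
    print(binarizer.inverse_transform(model.predict(new_story)))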
Come help us figure out how cities really work.
If you have the right skills and share our mission, you'll help solve local through data, addressing major city-level content and insight-discovery problems around the world, while having fun and building a great business, too.
Our team has years of success in consumer tech, online and local media. We're backed by a range of top tech investors including Rakuten, Greylock Partners, Social Capital, Graph Ventures, Charles River Ventures, Eric Schmidt's Innovation Endeavors, Pear Ventures, Matter Ventures, the John S. & James L. Knight Foundation, 500 Startups and SoftTech VC. Angel investors include Joi Ito, Director of the MIT Media Lab; Cyan and Scott Banister; Ben Silbermann of Pinterest; and Shane Smith of Vice.
We offer strong compensation, early equity, product ownership and a ton of responsibility. Essential duties and responsibilities may include, but are not limited to:
- Design, build and maintain efficient and reliable data pipelines
- Create and maintain frameworks to support data integrity for the pipelines
- Troubleshoot performance, system or data-related issues, and work to ensure data consistency and integrity
- Aggregate disparate data sources for efficient retrieval by a broad range of applications
What we're looking for:
- A deep passion for working with data and for developing software to address data-processing challenges
- A strong technical understanding of data modeling, design and architecture principles, and the techniques to take business requirements from concept to implementation
- Experience working with open-source technologies like Kafka, Hadoop, Hive, Presto and Spark
- Experience with data warehousing services like Google BigQuery and Amazon Redshift
- Experience with SQL, and with Python and/or Java/Scala
- A Bachelor's, Master's or PhD in Computer Science, or equivalent experience
Questions? Email us at firstname.lastname@example.org.