We’re the company behind Folio, which transforms the chaos of business email into an AI-powered deal organizer, project manager, and virtual assistant. We make email not suck for the first time. We’re on a mission to transform the way people work, allowing them to focus on the human parts of their job while letting the machine take care of the tedious administrative work they would otherwise do manually.
We’ve started by taking our product to the real estate industry, well known for its inefficiencies and the pain people experience while buying or selling a home. There are over 65,000 real estate agents on our platform, and they’re using Folio to manage over 40% of all residential real estate transactions in the United States. We are now rapidly taking our product to all real estate professionals around the globe, including lenders, title & escrow professionals, and attorneys, and expanding beyond real estate to every industry in which professionals manage complex, repetitive, and tedious processes. We’re growing quickly, working hard, and are excited to build a huge business.
We are a team of passionate product people and engineers who get excited about untangling complex processes and creating value for our users with AI and machine learning. We’re a team with a prior exit, and we’re passionate about improving people’s experience of the world around them. We’re backed by Accel Partners and other investors, including Jerry Yang.
We are looking for Data Engineers to join our team. As a Data Engineer at Amitree, you will:
- Build and maintain the data pipelines and ETL systems used to structure and store large amounts of unstructured text data
- Create data sources and warehousing that serve ongoing analysis, model building, testing, and deployment
- Design and build database and data store architecture capable of supporting rapid machine learning and experimentation
The Data Engineer will work closely with a broad range of teams: our Data Scientists, to facilitate and support data collection, structuring, and analysis; our Software Engineers, to maintain production-quality data processing pipelines supporting user-facing software products; and our Product and Leadership teams, to successfully execute the organization’s data strategy goals and data product roadmaps.
The ideal candidate will have experience with data modeling and with designing, developing, and managing complex data pipelines that handle unruly, dirty, and sometimes difficult data, and can quickly learn and apply new technologies. Key to this position is balancing the willingness to build sustainable processes with the ability to quickly turn around prototype schemas, processes, and pipelines.
What we’re looking for
- Degree in CS or a quantitative social or physical science
- 7+ years experience with SQL
- 5+ years experience with custom ETL tooling/process design, implementation and maintenance
- 5+ years experience with schema design / data modeling
- 1+ years experience with Python or other scripting language
- Demonstrated ability to think at various levels of scale for data processing
- Experience with NLP and unstructured text data
- Experience with PostgreSQL
- Experience with data warehousing
- Experience creating data sources for BI tools like Tableau, Looker, etc.
- Experience with ETL tools like Talend, Informatica, etc.
- Experience with and preference for agile methodologies
- Prior experience designing, developing, and managing APIs
What We Offer / Benefits and Perks
- Autonomy and responsibility, control and ownership over what you produce
- Health, dental, and vision insurance with fully paid premium for you and your dependents
- Flexible Spending Accounts
- 401(k) plan
- Reimbursement for the cost of a gym membership and a One Medical membership
- Flexible paid time off / vacation - we encourage everyone to take at least three weeks off each year
- Charitable giving matches