Data Engineer

Founded in 2017, Phantom Auto is the only company capable of remotely driving vehicles from hundreds of miles away. Our teleoperation safety solution is the missing piece in making autonomous vehicles a reality.

We are an energetic and passionate team on a mission to build the future of driving.

At Phantom, every day is a unique adventure as we work with the world's leading autonomous vehicle companies. Because we collaborate with companies throughout the world, we have the opportunity to develop on many unique vehicles.
As an early team member, you will be instrumental in defining the foundations of our product and culture.
If you are determined to take on the world's most complex challenges and help build an industry-defining company, this is the opportunity for you.

As a Data Engineer, you will own the process of collecting telemetry from our fleet of vehicles and stationary infrastructure, and make it useful for critical product, engineering, and operational decisions.

Once you join, you will be:
  • Architecting, implementing, and assuring the reliability and quality of the distributed data pipelines that fuel our analytics.
  • Nurturing a culture of making decisions based on observed evidence.
  • Collaborating closely with Engineering, Operations and Research teams to provide data for actionable insights.

About you

  • You flourish in a fast-paced environment where infrastructure and processes are yet to be established.
  • You take full ownership of your tasks and have a track record of executing large projects end-to-end.
  • You have a strong grasp of engineering principles and strong coding skills.
  • You can find creative solutions to hard problems, ruthlessly prioritize, and make conscious trade-offs.
  • You can move fast with research and prototyping, but also know how to make systems robust for production.
  • You are not afraid of working in constrained environments, such as embedded on-vehicle systems, where things may not launch with one line of code from a tutorial.
  • You can lay out the infrastructure for yourself (Linux, Docker, CI) and will learn new technologies as necessary to fill in the gaps.

Preferred Skills & Qualifications
  • Fluency in Python & SQL
  • Familiarity with distributed systems for data processing such as Spark, Kafka, ELK
  • Solid understanding of batch data pipeline orchestration tools (e.g. Airflow) and processes
  • Ability to apply Math & Statistics to reason about data
  • Experience working with large time-series & geospatial datasets in a distributed environment
  • Knowledge of real-time observability tools (e.g. Grafana)
