Associate Data Engineer

Help build technology that positively affects hundreds of thousands of injured workers every day!

CLARA analytics drives change in the commercial insurance markets with easy-to-use artificial intelligence (AI)-based solutions that dramatically reduce claims costs by anticipating the needs of claimants and helping align the best resources to meet those needs. Leading examples of our solutions include CLARA providers, an award-winning provider scoring engine that helps rapidly connect injured workers to the right providers, and CLARA claims, an early warning system that helps frontline claims teams efficiently manage claims, reduce escalations, and understand the drivers of complexity. CLARA’s customers span a broad spectrum — from the top 25 insurance carriers to small, self-insured organizations. For more information, visit

Job Description 

If you are a Data Engineer with a craving for making sense out of structured and unstructured data with the goal of affecting people’s lives in a positive manner, please read on! 

We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be on working with the Data Team to design technologies that wrangle, standardize, and enhance our master data repositories, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company. 

A unique skill expected for this job is the ability to translate Python code into clean, high-quality Spark/Scala libraries that can be reused within our platform. You should also be able to create orchestration workflows that ingest structured and unstructured data, enhance it, and make it available for use throughout the platform. 

●      Minimum of 2-3 years' experience implementing large-scale production systems 
●      Experience with Java or Scala build systems: Maven, Ant, sbt 
●      OO design and implementation 
●      Understanding of database design (SQL/noSQL) 
●      Experience with multiple applications in the Apache Hadoop/Spark ecosystem, such as Spark, Hadoop, Hive, and Zeppelin 
●      Experience with Python 
●      Experience building and operating at scale 
●      Excellent analytical and problem-solving skills 
●      BS/MS in Math, Computer Science, or equivalent experience 


●      Experience with object-oriented/functional scripting languages: Scala, Java, Python, etc.  
●      Experience with relational SQL and NoSQL databases, including MySQL and Cassandra. 
●      Big data tools: Hadoop, Spark, Kafka, etc. 
●      Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. 
●      AWS: EC2, S3, EMR, RDS 
●      JIRA & Confluence 
●      Git 
