Senior Data Engineer

About us:

CLARA Analytics ("CLARA") drives change in the commercial insurance markets with easy-to-use artificial intelligence (AI) based products that dramatically reduce claim costs by anticipating the needs of claimants and helping align the best resources to meet those needs. Leading examples of our products include CLARA Providers, an award-winning service with a provider scoring engine that rapidly connects injured workers with top-performing doctors and gets them on a path to speedy recovery, and CLARA Claims, an early warning system that helps frontline claims teams efficiently manage claims, reduce escalations, and understand the drivers of complexity. CLARA’s customers span a broad spectrum, from the top 25 insurance carriers to small, self-insured organizations.
This is a chance to get in early with a rapidly growing Silicon Valley company in the InsureTech space, built on AI tools, and to participate in developing the next generation of truly game-changing products. Job title and compensation will be adjusted as appropriate to match the experience level of the right candidate.

About the role:

We are looking for a Senior Data Engineer to help build our data infrastructure using Apache open-source products in the Hadoop ecosystem. The candidate should be very comfortable working in a cloud environment such as AWS or GCP and able to process both structured and unstructured data at scale. This role involves active development on one or more CLARA products, enriching standardized data and supporting our Data Science team with feature engineering and ML workflows. The ideal person is comfortable juggling multiple priorities and working with application engineers, data scientists, product managers, and the product delivery team.

 

Responsibilities:

·      Contribute to development projects for key CLARA products using customer data on insurance Claims, Bills, Medical Providers, Attorneys, etc.
·      Actively use technologies like Apache Spark, Apache Hive, and other tools in the Hadoop ecosystem to process and enrich data, catalog it, and ingest it into a data lake
·      Use both SQL and NoSQL databases to store and index data. Create APIs for accessing this data for both internal teams and external customers
·      Contribute to developing frameworks that allow ingesting data at massive scale
·      Build ETL pipelines using Apache Airflow, integrating multiple components, data sources, and sinks (a minimal sketch of such a pipeline follows this list)
·      Work with the Data Science team to facilitate feature engineering, feeding enriched data into the AI modules that generate scores and predictions
·      Develop expertise in at least one CLARA product area and become the go-to person for it
·      Collaborate effectively within and across teams, including Application Engineering, Data Science, Product Management, and Product Delivery
·      Deliver high-quality, well-tested code in a timely manner that meets CLARA’s performance guidelines
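
For illustration only, the sketch below shows the general shape of an Airflow ETL pipeline like the ones described above; the DAG name, schedule, and Spark job script path are hypothetical placeholders, not CLARA internals.

from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="claims_enrichment",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a Spark job that cleans and enriches raw claims data,
    # then writes the result to the data lake for downstream use.
    enrich_claims = SparkSubmitOperator(
        task_id="enrich_claims",
        application="/opt/jobs/enrich_claims.py",  # hypothetical PySpark script
        conn_id="spark_default",
    )

In practice a production DAG would chain additional tasks for validation and cataloging; this only illustrates the workflow structure.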

Requirements

·      Bachelor’s or Master’s degree in Computer Science or a related field with 3-6 years of relevant experience

·      Experience with the following software/tools is highly desired:
  • Apache Spark, Hive, Kafka, etc.
  • SQL and NoSQL databases like MySQL, Postgres, DynamoDB, Elasticsearch
  • Workflow management tools like Airflow
  • AWS cloud services: RDS, Lambda, Glue, Athena, EMR
  • Familiarity with Spark programming paradigms (batch and stream processing); a short batch example appears after these requirements
  • RESTful API services
·      Strong programming skills in at least one of the following languages: Java, Scala, C++. Familiarity with a scripting language such as Python, as well as Unix/Linux shells
·      Excellent command of SQL; experience with analytic query engines like Presto or AWS Athena is a plus
·      Basic knowledge of Machine Learning and AI
·      Experience working with cross-functional teams in a fast-paced environment
·      Ability to author technical feature specifications and implement high-quality code in collaboration with other members of the Data Engineering team
·      Strong familiarity with the services of at least one major cloud vendor (AWS, GCP)
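
As a purely illustrative example of the batch-processing paradigm mentioned above, here is a minimal PySpark enrichment sketch; the bucket paths, table layout, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_enrichment_example").getOrCreate()

# Read raw claims and provider reference data from the data lake (batch read).
claims = spark.read.parquet("s3://example-bucket/raw/claims/")        # hypothetical path
providers = spark.read.parquet("s3://example-bucket/ref/providers/")  # hypothetical path

# Enrich claims with provider attributes and a couple of simple derived features.
enriched = (
    claims.join(providers, on="provider_id", how="left")
          .withColumn("claim_age_days",
                      F.datediff(F.current_date(), F.col("claim_open_date")))
          .withColumn("claim_year", F.year("claim_open_date"))
)

# Write partitioned, enriched data back for downstream feature engineering.
enriched.write.mode("overwrite").partitionBy("claim_year").parquet(
    "s3://example-bucket/enriched/claims/"
)

A streaming variant of the same job would use spark.readStream and writeStream against a source such as Kafka instead of a one-shot batch read.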

 
Benefits
●      401(k)
●      Monthly wellness/gym reimbursement
●      Public transportation cost assistance
●      Skill-building reimbursement
 
