Software Engineer (Backend - Data Integration & Analytics) - Bengaluru, India

About Us
Teikametrics, based in Boston, Massachusetts, is a leading maker of software for online sellers. We provide broad-ranging tools for sellers that cover advertising, sales, inventory management, marketing, and competitive intelligence. The company raised a Series A in 2018, has revenues north of $20MM annually, and serves several large brands including Lego, Nalgene, Rockport, ViewSonic, and Razer Electronics. Our founding investors include luminaries such as Jerry Hausman, Head of Econometrics at MIT and a Distinguished Fellow of the American Economic Association.

Software Engineer (Data Integration & Analytics)
Teikametrics is looking for a software engineer with strong computer science fundamentals and a background in data engineering, API integration, or data warehousing and analytics. This role involves building and scaling large data pipelines and services that crawl, process, and ingest massive amounts of data from multiple sources. We analyze this data with data science to deliver insights that accelerate our customers' growth on various advertising platforms.

Our current data integration stack at Teikametrics is a functional-first Scala stack with cats and fs2, alongside Python, Kafka, PostgreSQL, and S3.
Data warehousing is powered by Snowflake at the core, with data surfaced in Mode and Sigma reports and transforms managed by dbt (Python, SQL). Extract and load operations run in our Scala backend for bulk data and in Stitch for smaller sources.
The front-end uses TypeScript with React and Redux.
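
To give a flavor of the kind of work this stack involves, below is a minimal, illustrative sketch of a functional-first ingestion pipeline using cats-effect and fs2. It is not Teikametrics code: the AdRecord shape, the simulated source, and the loadBatch sink are hypothetical stand-ins for a real Kafka consumer and a bulk warehouse load.

import cats.effect.{IO, IOApp}
import scala.concurrent.duration._
import fs2.Stream

// Hypothetical shape of an ingested advertising record.
final case class AdRecord(campaignId: String, clicks: Long, spend: BigDecimal)

object IngestPipeline extends IOApp.Simple {

  // Stand-in for a Kafka or API source; in practice this would be an fs2-kafka consumer stream.
  def source: Stream[IO, AdRecord] =
    Stream
      .iterate(0L)(_ + 1)
      .map(i => AdRecord(s"campaign-$i", clicks = i % 100, spend = BigDecimal(i) * 0.05))
      .covary[IO]
      .metered(10.millis)

  // Stand-in for a bulk load into the warehouse (e.g. staging to S3, then a COPY into Snowflake).
  def loadBatch(batch: fs2.Chunk[AdRecord]): IO[Unit] =
    IO.println(s"loading batch of ${batch.size} records")

  val run: IO[Unit] =
    source
      .groupWithin(500, 5.seconds) // micro-batch records by size or time window
      .evalMap(loadBatch)
      .take(3)                     // bounded here only so the sketch terminates
      .compile
      .drain
}

The micro-batching via groupWithin reflects a common pattern for bulk loads into a warehouse such as Snowflake, where fewer, larger writes are preferable to per-record inserts.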

Qualified candidates should have
  • 1-7 years of experience as a professional software developer; the position is open to engineers from junior through senior levels.
  • Experience with Python, Scala, Haskell, or related languages
  • Knowledge of databases and experience writing code that interfaces with the database layer, across SQL/RDBMS, NoSQL, and analytical stores (one of Snowflake, Cassandra, Elasticsearch, Redshift, etc.)
  • Experience with stream-based data-processing at scale (Spark, Flink, Dataflow, EMR, etc.)
  • Hands-on experience with queueing systems such as Kafka, RabbitMQ, etc.
  • Experience writing well-designed, testable code, including effective unit and integration tests.
  • Passion for working with a small team of world-class developers, solving challenging problems.
  • A desire to work in a collaborative environment focused on continuous learning, with participation in mentoring, tech talks, code review, and some pair programming.

Benefits
You will be joining us at the perfect stage: we are neither a struggling startup nor a slow-moving established company. You not only get to see all aspects of the product, but also learn how a company is built and scaled from the ground up.
You will also get great pay, a respectable work-life balance, flexible office hours, and vacation time.
