The world’s leading brands trust Symphony Commerce as their partner in opening and growing online sales channels. The platform we build provides brands with a single tool that orchestrates the entire flow of commerce, from the online store through delivery, including inventory and fulfillment. Unlike most commerce offerings, our focus is building a platform where it’s easy to create a customized, scalable commerce experience.
What makes our platform compelling to users is also a major technical challenge for our engineering team: we cover the scope traditionally provided by three or more separate technologies. This range gives us the opportunity to work on a wide variety of problems as an engineering team, and enables us to do things that are nearly impossible on other platforms (like real-time availability changes on the website).
Statistical models, reporting, and analytics are only as good as the quality of the data available at the source. At Symphony we are building a real-time big data pipeline in the cloud that collects, connects, and centralizes all data into a single source of truth, making data available, reliable, and accurate enough to support all our data needs. You will play a key role in the organization and with our customers in enabling effective data-driven decisions. You will have a hand in the conception of ideas, design, and execution across all the data functions. You will power the data that enables commerce for customers like Pepsi, J Brand, Cheetos, and Peter Millar, to name a few.
- Build a scalable and reliable near-real-time data pipeline in the cloud
- Build a single source of truth for all data
- Own data quality, reliability, availability, and security
- Reconcile data across all data sources
- Enable self-service access to data for our organization and our customers
- Evaluate new technologies and build prototypes for continuous improvements in Data Engineering
- Contribute to Open Source community
- Bachelor’s degree in Computer Science, Computer Engineering, Statistics or a related field
- Experience with AWS Technologies including Lambda, Athena, S3, and RDS
- Experience with Presto
- 10 years of experience as a Data Engineer, Software Developer or similar
- Database Modeling and Querying
- Relational databases such as Redshift and MySQL
- NoSQL databases such as Cassandra, MongoDB, or Elasticsearch
- Analytics and BI tools such as Google Analytics, Domo, Tableau, Jaspersoft BI, or Segment
- Scripting language such as Python
- OOP language such as Java
- Data serialization formats such as Protobuf and Avro
- Data storage formats such as Parquet
- Data pipelines and ETL processes
- Place trust in your co-workers, support ideas with data, and adjust your approach as needed.
- Ask for advice, don’t assume you have all the answers, seek to learn something new every week.
- Learn to see the world through your customer’s eyes. Ensure your work has had the right impact, and be a constant advocate for fixing bugs and making improvements, however small.
- Speak up when you see a risk, are getting behind, or need help and encourage others to do the same.
- Full medical, dental, and vision insurance
- Retirement (401k)
- Free catered meals and office snacks
- Central location in SoMa
- Work alongside a passionate team that thrives on fun and hard work