Data Engineer

goPuff is seeking a Data Engineer to support the data ingestion and integration needs of the organization. The Data Engineer will provide technical expertise and leadership on the architecture, design, and integration of multiple data sets into goPuff’s big data environment. The focus of this position is developing and deploying robust data processing pipelines in both on-premises and cloud environments.

Responsibilities
  • Design, deploy, and maintain analytics systems that process data at scale across goPuff’s environment
  • Contribute to the design, configuration, deployment, and documentation of components that manage data ingestion, real-time streaming, batch processing, and the extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure (see the ETL sketch after this list)
  • Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
  • Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
  • Perform development, QA, and DevOps roles as needed to take end-to-end ownership of solutions
  • Mentor junior and senior engineers, provide code reviews and feedback, and enable professional growth
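The ingestion and ETL work described above typically decomposes into small, composable steps. Below is a minimal, illustrative sketch (not a goPuff system) of one such step in Python: it pulls records from a hypothetical REST endpoint, stamps each record with an ingestion timestamp, and lands the batch as gzipped newline-delimited JSON in an S3 raw zone. The endpoint URL, bucket name, and payload shape (a "results" key) are placeholder assumptions.

```python
import gzip
import json
from datetime import datetime, timezone

import boto3
import requests

# Placeholder source endpoint and destination bucket -- assumptions for
# illustration only, not actual goPuff systems.
SOURCE_URL = "https://example.com/api/orders"
RAW_BUCKET = "example-data-lake-raw"


def extract(url: str) -> list[dict]:
    """Pull a page of records from a JSON REST API."""
    resp = requests.get(url, params={"limit": 1000}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]  # assumed payload shape


def transform(records: list[dict]) -> list[dict]:
    """Light enrichment: stamp each record with an ingestion timestamp."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    return [{**r, "loaded_at": loaded_at} for r in records]


def load(records: list[dict], bucket: str) -> None:
    """Land the batch as gzipped newline-delimited JSON in an S3 raw zone."""
    key = f"orders/dt={datetime.now(timezone.utc):%Y-%m-%d}/part-0000.json.gz"
    body = gzip.compress("\n".join(json.dumps(r) for r in records).encode())
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)


if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)), RAW_BUCKET)
```

A production pipeline would add retries, schema validation, and incremental watermarks; this sketch only shows the shape of the extract-transform-load hand-off.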

Requirements
  • Experience building, maintaining, and improving data processing pipelines and data routing in large-scale environments
  • Fluency in common query languages, API development, data transformation, and integration of data streams
  • Strong experience with large-scale data platforms (e.g., Amazon EMR, Amazon Redshift, Elasticsearch, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB)
  • Fluency in multiple languages and tools appropriate for large-scale data processing, such as Python, shell scripting, regular expressions, SQL, and Java
  • Experience acquiring data from varied sources, such as APIs, data queues, flat files, and remote databases
  • Basic Linux administration skills and multi-OS familiarity (e.g., Microsoft Windows, macOS)
  • Data pipeline and data processing experience using common platforms and environments
  • Understanding of traditional data warehouse components (e.g., ETL, business intelligence tools)
  • Creativity to go beyond current tools to deliver the best solution to the problem
  • Experience producing to and consuming from topics in Apache Kafka, AWS Kinesis, or Azure Event Hubs (see the sketch after this list)
  • 4+ years of experience working in data processing environments
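For the streaming requirement above, the sketch below shows what producing to and consuming from a topic looks like with the confluent-kafka Python client. The broker address, topic name, and consumer group id are placeholders, and error handling is reduced to the minimum needed to illustrate the polling loop.

```python
from confluent_kafka import Consumer, Producer

# Placeholder broker, topic, and group id -- adjust for the actual cluster.
BROKER = "localhost:9092"
TOPIC = "example-events"


def produce_one(payload: bytes) -> None:
    """Publish a single message and block until the broker acknowledges it."""
    producer = Producer({"bootstrap.servers": BROKER})
    producer.produce(TOPIC, value=payload)
    producer.flush()


def consume_some(max_messages: int = 10) -> list[bytes]:
    """Read up to max_messages from the topic with a simple polling loop."""
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "example-consumer",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    out = []
    try:
        while len(out) < max_messages:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                break  # nothing new within the poll window
            if msg.error():
                continue  # skip broker/partition errors in this sketch
            out.append(msg.value())
    finally:
        consumer.close()
    return out


if __name__ == "__main__":
    produce_one(b'{"event": "hello"}')
    print(consume_some())
```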

What is goPuff?

For the people who have better things to do than go out of their way to stop at the store (again), goPuff is the largest digital convenience retailer delivering thousands of products ranging from snacks, drinks, and ice cream to alcohol, home essentials, and personal care items directly from centrally located warehouses to our customers’ doors.

We’re currently in 80+ markets and growing fast, so we're looking for the most motivated and passionate talent to be a part of our team, grow with us, and join in our mission of delivering the moments that matter most. Note: must love snacks to work at goPuff.

The goPuff Fam is committed to an inclusive workplace that does not discriminate on the basis of race, nationality, religion, age, marital status, physical or mental disability, sexual orientation, gender, or gender identity. We believe in diversity and encourage all qualified individuals to apply. We are an EEO employer.

