Principal Data Engineer

  • $185,000.00/year
  • The Principal Data Engineer will be a critical player in Vela, a division of Seniorlink's Product & Technologies Organization (P&T). As the technical leader within the Data team, you will take ownership of the 'Data Hub' architecture for processing and analyzing data across Seniorlink products. You will participate in and influence technical design discussions, collaborating with the application engineering teams to ensure robust and scalable integration across Seniorlink products. You will also serve as a mentor to junior developers, ensuring good engineering discipline and best practices are followed across the data team. The ideal candidate has experience with Apache Spark for large-scale data processing, a passion for the data domain, and a strong background in data engineering, having designed and implemented data pipelines and models that handle complex, large-scale data processing and analytics use cases.
    What You Will Do:

    • Take ownership of the recently built Data Hub architecture and evolve it to handle new use cases as Seniorlink brings new products to market 
    • Establish sound design patterns and principles ensuring they are put into practice by the team 
    • Drive proof of concepts and lead the engineering of components core to the data infrastructure 
    • Use Agile/Scrum methodology to ensure sprint commitments are met regularly, making any necessary adjustments along the way to drive predictability 
    • Participate and represent the data team in critical design discussions with technical leads across various products 
    • Collaborate with application developers, product managers and business analysts to understand the requirements and translate them into design specifications and code 
    • Mentor junior engineers providing guidance on process and complex technical topics 
    • Perform pull request reviews for other engineers to quality check work 
    • Troubleshoot processes in production, especially those that lead to architectural adjustments as the platform grows and matures 
    What You Will Bring:

    • Background in Computer Science or related field 
    • 10+ years of data engineering experience, including at least 3 years working with newer data lake architectures 
    • Solid understanding of algorithms and data structures 
    • Strong experience working with Apache Spark for large-scale data processing 
    • Experience with AWS services such as EMR, Redshift, and S3 is nice to have but not required 
    • Proficiency in at least one of the following programming languages: Python, Java, Scala 
