
Job description
Technical Lead – Data Engineering
Sysco LABS is the captive innovation arm of Sysco Corporation (NYSE: SYY), the world’s largest foodservice company. Sysco is a Fortune 500 company and the global leader in selling, marketing, and distributing food products to restaurants, healthcare and educational facilities, lodging establishments, and other customers who prepare meals away from home. Its family of products also includes equipment and supplies for the foodservice and hospitality industries. With more than 72,000 colleagues, the company operates 334 distribution facilities worldwide and serves approximately 725,000 customer locations. For fiscal year 2023, which ended July 1, 2023, the company generated sales of more than $76 billion.
Operating with the agility and tenacity of a tech startup, powered by the expertise of the industry leader, Sysco LABS is perfectly poised to transform one of the world’s largest industries.
Sysco LABS’ engineering teams, based in Colombo, Sri Lanka, and in Austin and Houston, TX, innovate across the entire foodservice journey: from the enterprise-grade technology that enables Sysco’s business, to the technology that revolutionizes the way Sysco connects with restaurants, to the technology that shapes the way those restaurants connect with their customers.
Sysco LABS technology is present in the sourcing of food products, merchandising, storage and warehouse operations, order placement and pricing algorithms, the delivery of food and supplies across Sysco’s global network, the in-restaurant dining experience of the end customer, and much more.
We are currently on the lookout for a Technical Lead – Data Engineering to join our team.
What you will be responsible for:
• Design and develop large-scale data processing solutions for one of the world’s largest corporations in the marketing and distribution of food products.
• Work collaboratively with agile, cross-functional development teams and provide guidance on database design, query optimization, and database tuning while adhering to DevOps principles.
• Design and develop capacity and scalability plans for fast-growing data infrastructure.
• Practice continuous integration and continuous delivery (CI/CD), and ensure high code quality by following software engineering best practices.
• Be involved in projects throughout the full software lifecycle, from development, QA, and deployment to post-production support.
What you need to have to apply for this position:
• A bachelor’s degree in computer science or equivalent, and 5-6+ years of experience developing production enterprise applications and data integration solutions, including experience managing teams.
• Excellent communication and leadership skills.
• Hands-on experience working with large volumes of data and distributed processing frameworks (preferably Apache Spark and Kafka).
• Strong Python programming skills for data processing and analysis.
• Proficiency in batch processing techniques and data pipeline development.
• Hands-on experience in the design and development of ETL pipelines that process large volumes of data, including experience with Informatica.
• Expertise in data quality management and implementation of data quality frameworks.
• Familiarity with data lakehouse architectures and related technologies (OLAP/OLTP database design techniques).
• Strong skills in query optimization and performance tuning, particularly for large-scale data warehouses and distributed systems.
• Experience with query plan analysis and execution plan optimization in various database systems, especially Amazon Redshift.
• Knowledge of indexing strategies, partitioning schemes, and other performance-enhancing techniques.
• Extensive experience with AWS services, particularly:
  • Amazon S3 for data storage and management
  • Amazon Redshift for data warehousing and query optimization
  • AWS Lambda for serverless computing and data processing
  • Amazon ECS (Elastic Container Service) for container orchestration
• Proficiency in designing and implementing cloud-native data architectures on AWS
• Experience with AWS data integration and ETL services (e.g., AWS Glue, AWS Data Pipeline)
• DevOps practices for data platforms:
  • Extensive experience implementing DevOps practices for data platforms and workflows
  • Proficiency in automating data pipeline deployments, including CI/CD for ETL processes and database schema changes
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation) for provisioning and managing data infrastructure
  • Familiarity with monitoring and observability tools for data platforms (e.g., CloudWatch, DataDog)
• Experience working in a Scrum/Agile delivery environment with DevOps practices, as well as prior experience with cloud IaaS or PaaS providers such as AWS, will be an added advantage.
Benefits
• US dollar-linked compensation
• Performance-based annual bonus
• Performance rewards and recognition
• Agile Benefits: special allowances for Health, Wellness & Academic purposes
• Paid birthday leave
• Team engagement allowance
• Comprehensive Health & Life Insurance Cover – extendable to parents and in-laws
• Overseas travel opportunities and exposure to client environments
• Hybrid work arrangement
Sysco LABS is an Equal Opportunity Employer.