Data / ML, Engineering

Enabling Offline Inferences at Uber Scale

June 15, 2022 / Global
Figure 1: Overview of the PyML architecture. First, a model’s artifacts and dependencies get uploaded to and versioned by Michelangelo’s (MA) backend. Afterwards, a Docker image gets built, which can be deployed as a Docker container for online predictions or leveraged to run large-scale offline predictions.

Figure 2: uWorc has a drag-and-drop workflow editor.

Figure 3: Architecture of “Spark-as-a-Service” at Uber.

Figure 4: Sample code framework for executing offline batch inferences.
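Since the sample code in Figure 4 is only available as an image, the shape of such a framework can be sketched in plain Python. Everything here is an illustrative assumption — the model class, feature names, and batch size are hypothetical stand-ins, not Uber's actual PyML or Michelangelo API:

```python
# Hedged sketch of an offline batch-inference loop. DummyModel, the
# "feature" column name, and the batch size are illustrative assumptions,
# not part of Uber's actual PyML/Michelangelo interfaces.

from typing import Iterable, Iterator, List


class DummyModel:
    """Stand-in for a loaded model artifact (hypothetical)."""

    def predict(self, rows: List[dict]) -> List[float]:
        # Toy scoring rule: score is proportional to a single feature.
        return [0.5 * row["feature"] for row in rows]


def batched(rows: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Yield fixed-size batches so memory use stays bounded."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch


def run_offline_inference(rows, model, batch_size=1000):
    """Score every row offline, batch by batch, and collect the results."""
    scores = []
    for batch in batched(rows, batch_size):
        scores.extend(model.predict(batch))
    return scores


rows = [{"feature": float(i)} for i in range(5)]
print(run_offline_inference(rows, DummyModel(), batch_size=2))
# → [0.0, 0.5, 1.0, 1.5, 2.0]
```

In a Spark-as-a-Service setting (Figure 3), the same per-batch scoring function would typically be distributed across executors (for example, via a pandas UDF) rather than run in a single process, but the batching pattern is the same.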
Neeraj Dhake

Neeraj Dhake is a Software Engineer II on the Customer Obsession team at Uber and has worked on building efficient automation experiences and improving the quality of support for Uber Eats.

Aravind Ranganathan

Aravind is currently an Engineering Manager on the Customer Obsession team at Uber, focusing on building efficient automation experiences and improving the quality of support. He has also led teams across various domains at Uber, such as Risk and Communications Platform. He has a PhD in Computer Science and is passionate about research, teaching, and building scalable tech solutions for real-world problems.

Posted by Neeraj Dhake, Aravind Ranganathan