Engineering Restaurant Manager, our UberEATS Analytics Dashboard
September 26, 2017 / Global
At Uber, we use data analytics to architect more magical user experiences across our products. Whenever possible, we harness these data engineering capabilities to empower our partners to better serve their customers. For instance, in late 2016, the UberEATS engineering team built a comprehensive analytics dashboard that provides restaurant partners with additional insights about the health of their business.
This solution had to be capable of gleaning insights from a data pipeline that was real-time, granular, highly available, and reliable. (Reporting a net payout that is off even by a cent was unacceptable, not to mention a support nightmare.) We needed a pipeline that could provide data insights on multiple verticals for tens of thousands of restaurants, with the potential of scaling well into the hundreds of thousands—and beyond.
Launched in March 2017, Restaurant Manager is a comprehensive analytics dashboard and pipeline for our restaurant partners. In this article, we discuss how we architected this analytics platform and its robust data pipeline.
The recipe behind Restaurant Manager
One of the most prominent requests from our restaurant partners, surfaced through regular surveys and visits, was for more insight into their businesses. Our solution? Restaurant Manager, an analytics dashboard that lets them harness UberEATS data to improve both their business on UberEATS and their in-house operations.
After conducting thorough user research with our partners, we identified an initial set of core metrics: net payout, daily and weekly items sold, order acceptance rate, order preparation speed, and item ratings (either a ‘thumbs up’ or a ‘thumbs down,’ indicating customer satisfaction). Once we synthesized our findings, we grouped these metrics into three major categories of data: customer satisfaction, sales, and service quality.
Designing a scrumptious user experience
With these core metrics and three categories in mind, we built Restaurant Manager to be user-friendly:
Customer Satisfaction
On our Customer Satisfaction portal, we give restaurant partners insight into how satisfied their customers are. To measure this, we aggregate meal ratings and categorical feedback, among other metrics. By taking the pulse of customer satisfaction, restaurants can target areas of improvement to better serve their customers.
Sales
On our Sales portal, we provide a window into how restaurants are performing financially on the UberEATS app. Metrics like real-time net payout, items sold, and top-selling dishes are useful to restaurant owners when planning menus and shaping marketing strategy.
Service Quality
And finally, with our Service Quality portal, we show restaurants how order acceptance rate, food preparation time, and menu availability might affect customer satisfaction. By providing restaurants with this data, we can help them maximize their revenue and optimize eater experiences.
Next, we take a look at a Restaurant Manager use case to better depict how restaurant partners can benefit from the tool.
Restaurant Manager case study: potential earnings
With Restaurant Manager, restaurant partners can gain insight into their potential earnings. The premise is simple: when a restaurant partner declines to accept an order during business hours, we capture the price of the order and surface it on the Restaurant Manager dashboard, as depicted below.
The implementation of this calculation, however, is a bit more complex. To assemble the full potential earnings picture, we join and process multiple streams of data from separate services into a final result, grouped by restaurant location.
In our next section, we dissect the Restaurant Manager architecture, a powerful and scalable system that leverages many of Uber’s existing data processing services.
Restaurant Manager architecture
The platform’s architecture relies on Pinot, an open source Online Analytical Processing (OLAP) datastore, to populate virtually all data for our analytics dashboard. Pinot supports near real-time table ingestion, provides SQL-like query support against relatively large datasets, and is well-suited to low-latency, high-queries-per-second (QPS), scalable analytic workloads. Pinot also supports data loading from both online (e.g., Kafka streams) and offline (e.g., Hadoop jobs) sources, depicted below:
Online data ingestion pipeline
This real-time workflow can be divided into four phases:
- UberEATS services, including our restaurant account management, workflow management, and ratings services, capture user interactions with the UberEATS system and publish the resulting events to Kafka streams.
- A streaming job joins the events from multiple Kafka streams, transforms and aggregates them, and publishes the decorated events to downstream Kafka streams.
- Pinot consumes data from these Kafka streams, stores it, and builds indexes for efficient querying.
- Restaurant Manager queries Pinot for fresh data at high QPS, and Pinot delivers results with low latency.
The streaming job joins state changes of an UberEATS order and UberEATS order dimension events, then publishes unaccepted orders (and the potential earnings restaurant partners could have received from accepting these orders) into a Kafka stream. We implement the streaming job with the following SQL using AthenaX, Uber’s streaming analytic platform:
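The original AthenaX query is not reproduced in this post; a minimal sketch of such a job might look like the following, assuming hypothetical input streams named order_state_changes and order_dimensions and an UNACCEPTED order state (all names are illustrative, not Uber's actual schema):

-- Hypothetical AthenaX streaming query: join order state change events with
-- order dimension events and emit declined orders together with the payout
-- the restaurant would have earned. AthenaX publishes the result rows to an
-- output Kafka stream that Pinot then consumes.
SELECT
  d.restaurantUUID,
  s.jobUUID,
  d.skuName,
  d.price,      -- the potential earnings from the declined order
  d.currency,
  s.dateStr
FROM order_state_changes s
JOIN order_dimensions d
  ON s.jobUUID = d.jobUUID
WHERE s.orderState = 'UNACCEPTED'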
Pinot consumes the stream and builds the table with the following schema: {restaurantUUID string, jobUUID string, skuName string, price double, currency string, dateStr string}.
Using this model, a front-end service will send a Pinot query like the one below when a restaurant partner wants to know their potential earnings during a certain time period:
SELECT sum(price)
FROM potential_earnings
WHERE restaurantUUID = '?' AND dateStr BETWEEN '?' AND '?'
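The same table can also back a time-series view. For a daily breakdown rather than a single total (for example, to draw a trend line), a query along these lines could group the same filter by date; this is an illustrative sketch, not necessarily the exact query the dashboard issues:

SELECT sum(price)
FROM potential_earnings
WHERE restaurantUUID = '?' AND dateStr BETWEEN '?' AND '?'
GROUP BY dateStr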
Offline data ingestion pipeline
After being processed in real time, all UberEATS business events are moved to HDFS through an extract, transform, load (ETL) pipeline and become available in Hive for querying. The freshness of a Hive table depends on the speed of our ETL pipeline, which typically takes a few hours to complete. To address possible data collection issues, we schedule offline workflows with Oozie to backfill data on Pinot offline servers. The offline pipeline involves Hive, Spark, Oozie, and Pinot.
Data in Hive is treated as the source of truth, but only up to 24 hours after an event occurs. To meet this tight timeline, we schedule Oozie jobs to append to and update Pinot offline tables on a daily basis. There are three steps to move data from Hive to Pinot:
- Run a Spark job to query Hive, then dump the results to Avro files on HDFS (a sketch of such a query follows this list).
- Run a Spark job to generate Pinot indexed segments (the unit for Pinot data storage) from the Avro files created in step one.
- Push these segments to a Pinot cluster.
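As an example of step one, the Hive query run by the Spark job for the potential earnings table might look roughly like the sketch below, where hive_eats_orders, orderState, and the ${date} parameter are hypothetical stand-ins rather than Uber's actual table or workflow parameters:

-- Hypothetical daily backfill query executed by the Spark job in step one.
-- The Oozie workflow supplies the ${date} partition to rebuild.
SELECT restaurantUUID, jobUUID, skuName, price, currency, dateStr
FROM hive_eats_orders
WHERE orderState = 'UNACCEPTED'
  AND dateStr = '${date}'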
Query serving
Once delivered to Pinot, our UberEATS events are ready to be stored and served. As shown in Figure 2, above, a Pinot REST proxy, a Pinot broker, and Pinot real-time/offline servers are the main components for query serving.
A Pinot query is first sent to the Pinot REST proxy, which is responsible for query routing and load balancing. The REST proxy routes the query to the corresponding Pinot broker based on the data source. The broker then time-partitions the query, scatters it to the corresponding servers, and gathers the results. Pinot servers store and index the data and execute the concrete queries.
Architecture performance
In this section, we detail the main metrics for measuring our Pinot-based architecture’s performance, including throughput, latency, scalability, reliability, and engineering onboarding time. For our performance benchmark, we set up three online servers with 24 cores and 128GB of RAM.
Throughput & latency
After conducting thorough testing, we determined that, during the data ingestion phase, Pinot can consume messages from Kafka at a rate of 20,000 messages per second per thread.
To ensure that we can serve all users, we have to guarantee that Pinot can still respond within our SLA as QPS climbs. To verify horizontal scalability, we also tested query latency at different request rates. At 1,000 QPS, three Pinot servers could still answer queries with a 99th percentile latency under 100 milliseconds (ms).
Scalability & reliability
Since Pinot scales horizontally, we can add more hardware to the tenant to grow the cluster. Currently, Pinot is serving Uber’s user-facing analytics platforms with 99.99 percent availability.
Engineering onboarding time
Using our end-to-end streaming analytics architecture, we have shortened the average engineering time for onboarding one job or table from weeks to hours across our platforms.
Data rendering
To surface the data captured by our systems for our restaurant partners in a performant and easily-navigable way, we leveraged React as the basis of our view layer and Redux as the foundation of our data layer. These choices were easy to make, as much of Uber’s web engineering ecosystem is built with React, and multiple teams contribute reusable components for other teams to use.
After exploring a number of different chart rendering options, we chose Recharts, a simple yet powerful declarative component library built on D3.js, to generate charts for our dashboard. The data rendering premise is simple: the client makes independent asynchronous calls to the Node server for each analytics section, which then proxies the requests to Pinot; once the data is returned to the client, it is adapted to a format consumable by Recharts and rendered on the page. The result is a clean and interactive UI, making our analytics easy to understand and apply.
Architecting the future of Uber analytics
Through Restaurant Manager and other tools, Uber Engineering’s Streaming Analytics and UberEATS teams are building scalable, real-time analytics solutions to glean actionable business insights for our restaurant partners. If you are interested in building Uber’s next-gen analytics platforms, check out our data platform/infrastructure engineering job openings. We also have open roles on the UberEATS engineering team building out our current platform.
Learn more about Restaurant Manager by watching the video below:
Jing Fan and Xiang Fu are software engineers on Uber’s Streaming Analytics team. Faiz Abbasi is a software engineer on the UberEATS Restaurant Experience team. Jonathan Chao, an engineering manager on the UberEATS Restaurant Experience team, also contributed to this article.