Uber’s platform consists of many microservices, self-contained sub-applications that support all the features that make up our global services.
To help our microservices work in harmony, we built Cadence, our orchestration engine. This tool routes requests, directs data, and mediates communications between various microservices so they can all cooperate seamlessly. Built to support Uber’s global business and deliver our standard of service, it’s scalable and fault-tolerant. Like a computerized conductor, it coordinates a variety of tasks without missing a beat.
In 2017, we made Cadence an open source project, letting other engineers in need of a highly scalable orchestration engine use and contribute their own modifications to it. To support the Cadence community, we recently held an event in our Seattle office where Uber engineers gave presentations about its uses. Below, we highlight four videos covering presentations from the event:
Introduction to Cadence
Seattle-based Uber engineer Maxim Fateev presents the basics of Cadence. He describes how, given Uber’s asynchronous microservice architecture, “a simple code becomes a quagmire of callbacks,” every engineer’s nightmare. Fortunately, he explains, Cadence allows Uber engineers (and open source users) to bring their intuitive, Go-based code to life by providing its own persistence, queue, and timers. Services can “talk” to each other through this multi-tenant middleware, which organizes external workflows so they function properly. Cadence makes sure each microservice gets all the data it needs (nothing more, nothing less), keeps a record of each action, and catches errors before workflows go awry. This makes it an excellent choice for dynamic, complex workflows: generally speaking, any use case beyond a simple request, from sign-up flows to order fulfillment to machine learning pipelines and more.
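The contrast Fateev draws can be sketched in a few lines of plain Go. This is only an illustration of the programming model, not the Cadence API: the sign-up flow and every function name below are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// Callback style: every asynchronous step takes a continuation, so even a
// two-step flow nests, and error handling fragments across closures.
func signUpWithCallbacks(email string, done func(string, error)) {
	createAccount(email, func(id string, err error) {
		if err != nil {
			done("", err)
			return
		}
		sendWelcomeEmail(id, func(err error) {
			if err != nil {
				done("", err)
				return
			}
			done(id, nil)
		})
	})
}

// Workflow style: the same flow reads as straightforward sequential code;
// an engine like Cadence supplies the persistence, queues, and timers
// behind each step so the author never writes the callbacks above.
func signUpWorkflow(email string) (string, error) {
	id, err := createAccountStep(email)
	if err != nil {
		return "", err
	}
	if err := sendWelcomeEmailStep(id); err != nil {
		return "", err
	}
	return id, nil
}

// Stub "activities" so the sketch runs on its own.
func createAccount(email string, done func(string, error)) { done(email+"-id", nil) }
func sendWelcomeEmail(id string, done func(error))         { done(nil) }
func createAccountStep(email string) (string, error) {
	if email == "" {
		return "", errors.New("empty email")
	}
	return email + "-id", nil
}
func sendWelcomeEmailStep(id string) error { return nil }

func main() {
	id, _ := signUpWorkflow("rider@example.com")
	fmt.Println(id)
}
```

Both versions do the same work; the second is the style Cadence lets engineers keep as flows grow to many steps.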
Cadence Architecture
In this presentation, Samar Abbas, an engineer on the Cadence team, walks the audience through Cadence’s structure. Fitting for an engine designed to interface with and between microservices, Cadence itself runs on several layers: its front end, history and matching services, and a Cassandra back end. Samar describes how these various Cadence components relate to and support each other. In particular, he details how the front end passes calls through to the next layer, how the history service handles the tricky task of determining shard ownership, how the matching service dispatches tasks, and how Uber’s engineers configured queues within Cassandra. Listen to learn how Cadence navigates tombstones, range IDs, and more.
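The shard-ownership idea Samar describes can be sketched simply: partition workflows into a fixed number of shards by hashing their identifiers, so each history host owns a disjoint subset. This is an illustration of the concept only; the shard count, hash function, and names below are assumptions, not Cadence’s actual implementation details.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Hypothetical shard count; in practice this is a deployment-time
// configuration value.
const numHistoryShards = 16

// shardFor maps a workflow to a shard by hashing its identifiers, so the
// same workflow always lands on the same shard and exactly one history
// host is responsible for its state at a time.
func shardFor(domainID, workflowID string) int {
	h := fnv.New32a()
	h.Write([]byte(domainID + workflowID))
	return int(h.Sum32() % numHistoryShards)
}

func main() {
	fmt.Println(shardFor("eats-domain", "order-42"))
}
```

Because the mapping is deterministic, any front-end host can compute which history shard, and therefore which history host, should receive a call for a given workflow.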
Writing a Cadence Workflow
Yimin Chen teaches audiences how to compose workflows in Cadence. Essentially, he explains, engineers implement activities, which combine to form workflows that Cadence can unit test. At this point, programmers are ready to call in a worker, have it perform the workflow, and conduct basic error handling. Yimin goes through each step of this relatively simple process in detail, describing the design philosophies behind Cadence’s authoring practices. He also goes over the various safeguards Cadence includes; its error returns at virtually every stage of composition help engineers understand exactly how their code is working. When it’s time to run a Cadence workflow, engineers can kill and restart a worker without causing any issues. In these ways, Cadence simplifies and improves not only the workflows it manages, but also the workflows of the engineers who code with it.
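The reason a worker can be killed and restarted safely is worth a sketch: the engine records each completed activity’s result, and on restart the worker replays the workflow against that history, substituting recorded results instead of re-running activities. The code below is a minimal stdlib illustration of that replay idea; the types and the order flow are hypothetical, not the Cadence API.

```go
package main

import "fmt"

// history records each completed activity's result, keyed by activity name,
// standing in for the durable event history a real engine persists.
type history struct {
	results map[string]string
}

// execute runs an activity unless its result is already recorded, in which
// case the recorded result is replayed and no side effect is repeated.
func (h *history) execute(name string, activity func() string) string {
	if res, ok := h.results[name]; ok {
		return res // replay path taken after a worker restart
	}
	res := activity()
	h.results[name] = res
	return res
}

// orderWorkflow is a hypothetical two-activity workflow.
func orderWorkflow(h *history) string {
	charged := h.execute("charge-card", func() string { return "charged" })
	placed := h.execute("place-order", func() string { return "placed" })
	return charged + "," + placed
}

func main() {
	h := &history{results: map[string]string{}}
	first := orderWorkflow(h)

	// Simulate a worker crash and restart: a fresh execution replays
	// against the same history and reaches the same state without
	// re-executing the completed activities.
	second := orderWorkflow(h)
	fmt.Println(first == second) // prints "true"
}
```

The workflow code itself stays oblivious to crashes; determinism plus recorded history is what makes the restart invisible.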
Exploring an Uber Eats Use Case
Engineering Manager Mihnea Olteanu provides a quick demonstration of how a Cadence workflow operates using Uber Eats. He shows how Cadence synchronizes the eight standard steps in an Uber Eats workflow, displaying the orchestration engine’s code alongside the various UIs it helps operate.
To learn more about Cadence, please check out our other articles on this topic (and watch for future updates!).
Interested in building innovative software like Cadence? Apply for a role on our team!