2225 results for "earn" across all locations

UberMENTOR: Ride and Learn from Jacksonville’s Best and Brightest

New Facility Insights Report presents key learnings to shippers
Our first Facility Insights Report is a crucial step towards marketwide transparency.

Model Excellence Scores: A Framework for Enhancing the Quality of Machine Learning Systems at Scale
With the introduction of Model Excellence Scores at Uber, we're setting a new standard for measuring, monitoring, and maintaining ML model quality. Read how this approach aims to enhance ML governance and provide clearer insights.
IntentNet: Learning to Predict Intention from Raw Sensor Data
S. Casas, W. Luo, R. Urtasun
In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete, high-level behaviors and continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment. […] [PDF]
Conference on Robot Learning (CoRL), 2018

Meta-Graph: Few-Shot Link Prediction Using Meta-Learning
Uber AI introduces Meta-Graph, a new few-shot link prediction framework that facilitates the more accurate training of ML models that quickly adapt to new graph data.

Five Key Learnings from Uber’s Leadership Panel on the Power of Visibility
To celebrate International Women’s Day, we hosted a panel of leaders from across Uber showcasing the power of visibility. Ana Loibner, Global Mobility Chief of Staff and Women at Uber Global Board Member, shares what she learned.

They helped move the world. Learn how they can help your team.
Danielle Monaghan, Vice President and Global Head of Talent Acquisition, shares our Talent Directory and explains why companies should reach out to those profiled.
End-to-end Interpretable Neural Motion Planner
W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, R. Urtasun
In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road users. Towards this goal, we design a holistic model that takes as input raw LiDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Washington, DC: Exploring Hidden Gems on Capitol Hill
