The past few months have been a whirlwind for Jeff Clune, Senior Research Manager at Uber and a founding member of Uber AI Labs. In June 2019, research by him and his collaborators on POET, an algorithm that generates its own challenges and learns to solve them, took home a best paper award at GECCO; in July, a paper he co-wrote earlier in his career was named Outstanding Publication of the Decade by the International Society for Artificial Life; and, perhaps most notably, Jeff was awarded the Presidential Early Career Award for Science and Engineering (PECASE) by the White House. According to the White House’s statement, “The PECASE is the highest honor bestowed by the United States Government to outstanding scientists and engineers who are beginning their independent research careers and who show exceptional promise for leadership in science and technology.”
At AI Labs, Jeff’s team looks for ways to advance artificial intelligence by improving how deep neural networks are trained. Their focus is primarily on improving deep reinforcement learning, in which algorithms learn to solve problems via trial and error.
“The type of machine learning that currently works really well, which is called supervised learning, requires a tremendous amount of feedback, such as the correct action to take in every situation, for example, every millisecond,” he says. “The real world does not provide such constant supervision. This entire field of reinforcement learning is trying to create algorithms that learn in a more natural, less demanding way.”
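To make that contrast concrete, here is a toy Python sketch (the bandit-style task, the action count, and both learners are invented for illustration): a supervised learner is simply handed the correct action as a label, while a reinforcement learner must discover it from sparse trial-and-error rewards.

```python
import random

random.seed(0)

N_ACTIONS = 10
CORRECT_ACTION = 7  # the "right answer", hidden from the RL agent

def supervised_learner():
    # Supervised learning: the correct action arrives as a label with
    # every example, so there is nothing to search for.
    return CORRECT_ACTION

def rl_learner(trials=200):
    # Reinforcement learning: the agent only observes a sparse reward
    # (1 when it happens to pick the right action, 0 otherwise) and must
    # discover good behavior by trial and error.
    value = [0.0] * N_ACTIONS  # running estimate of each action's reward
    counts = [0] * N_ACTIONS
    for _ in range(trials):
        action = random.randrange(N_ACTIONS)               # explore at random
        reward = 1.0 if action == CORRECT_ACTION else 0.0  # sparse feedback
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
    return max(range(N_ACTIONS), key=lambda a: value[a])   # exploit the estimate
```

Both learners end up choosing the same action, but the reinforcement learner needed hundreds of trials to extract information the supervised learner was handed directly; closing that gap is exactly what better reinforcement learning algorithms aim to do.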
Over time, these machine learning algorithms and many others can be leveraged by industry to tackle complex tasks. At Uber, for instance, we’ve used machine learning to optimize delivery times on Uber Eats, streamline data workflow management, and improve the customer support ticket response experience.
Much like his reinforcement learning algorithms, it took a lot of trial and error for Jeff to discover the career he finds most rewarding: artificial intelligence research. A voracious learner, Jeff majored in philosophy during undergrad, taking courses in statistics, economics, psychology, biological anthropology, literature, and political science, but he most enjoyed classes that covered evolutionary theory.
After graduation, Jeff came across an article in The New York Times about research out of Cornell University, where scientist Hod Lipson was using evolutionary algorithms to automatically create robots that were then 3D printed and could walk in the real world.
“I remember it was like an explosion went off inside my head,” he says. “I thought it was so cool that you could combine the ideas behind evolution and use them to automatically design complex things that can then impact the real world. I knew I wanted to do that. I just had to figure out how.”
Eight years and two degrees later, Jeff was a postdoctoral researcher in Lipson’s Creative Machines Lab at Cornell. Now, he brings this same spirit of curiosity and passion for discovery to his artificial intelligence research at Uber.
We sat down with Jeff to discuss his path to artificial intelligence, the recent accomplishments by him and his collaborators, and what excites him most about the future of the field:
How did you first become interested in the sciences and engineering?
Throughout my life, I’ve been fascinated by twin questions. The first is: how did the explosion of complex life on Earth happen? How do we get jaguars and hawks and dolphins and whales, and what sort of a process could design such an endless parade of amazing engineering marvels? We know the broad strokes of the answer thanks to Darwin, but there is much yet to be understood.
I’ve also been very interested in the human mind and, more broadly, animal minds. How does thinking happen? Can we create a thinking machine inside our computers? It took me a long time to find the best place to do that sort of scientific work. I started out in philosophy because I thought that philosophers had the market cornered on thinking. I quickly became frustrated, however, because while philosophy is very interesting, ultimately, you don’t get to test your ideas, see if you’re right, and then iterate to improve them.
Eventually, I switched to machine learning and computer science because that is a place where you can learn by building. There is a wonderful quote from Richard Feynman that says, “What I cannot create, I do not understand.” I strongly believe in that paradigm of learning by building. By trying to create intelligence in robots and software, we learn a lot about thinking and intelligence. Moreover, if we try to create processes that themselves produce thinking machines, as I argue we should, then we also learn about the necessary, sufficient, and catalyzing ingredients to create algorithms that generate endless complexity, including producing thinking machines. Artificial intelligence research thus sheds light on both of the twin questions that I have been on a quest to answer my whole life. For that reason, I could not be happier with what I get to do every day as a scientist researching how to advance artificial intelligence.
When you were pursuing your PhD, was AI as hot as it is now?
Science has trends in terms of what’s accepted and what’s derided. There are things that are currently in fashion and things that are definitely looked down upon. When I started researching these subjects, neural networks, artificial intelligence, and especially evolutionary algorithms were out of fashion. But I didn’t choose the subjects that I wanted to focus on because of their popularity.
I focused on what I thought was really interesting to pursue and would be most promising over the long haul in terms of moving the needle on creating artificial intelligence. Since then, everything has changed. Now artificial intelligence and neural networks and even very recently evolutionary algorithms are skyrocketing in terms of interest and the number of people that are trying to ramp up on them and use them. They’ve also taken off in terms of their capabilities. Many of us who were out in the desert for years wandering alone with our curiosity and our passion like a band of nomads are now very surprised—pleasantly so—to have people interested in what we’re doing.
What was your path to becoming a machine learning scientist?
It’s a bit of a story, but I was in Silicon Valley during the dot-com boom in the early 2000s and I read an article in The New York Times about a scientist at Cornell University named Hod Lipson whose lab had evolved robots in a simulator. Then, when the designs were sufficiently refined by the evolutionary algorithm (automatically, with no humans in the loop), they were sent to a 3D printer that printed the bodies. The scientists then plugged in some motors and these robots could walk in the real world. I thought that was just unbelievably cool.
I then travelled the world for a year and a half and when I came back, I decided that I wanted to do that: build robots and study artificial intelligence. Specifically, I wanted to study how we could harness evolution—the same force that created all of the complex life on Earth—to design robots and AI. I contacted Hod Lipson and said, “Can I join your lab?” He said, “I’d love to have you, but you need to get into the Cornell PhD program.” I only had an undergraduate degree in philosophy, not computer science, so I did not meet the minimum criteria to apply. I contacted dozens and dozens of universities trying to see if anyone would let me into a PhD program in computer science with only a philosophy background, and I just got “no”s across the board. Eventually, I found an opportunity at Michigan State University because there was a philosopher there who worked with people using evolutionary algorithms and studying the evolution of complexity in biological systems.
So I went to Michigan State, did a master’s in philosophy, and the whole time I was taking CS and machine learning courses. At the end of my master’s, they let me enroll in their computer science PhD program. So I got a PhD in computer science and then I called up Hod Lipson at Cornell and I said, “Now I have a PhD in computer science. Can I join your lab?” and he said, “Sure, come on in!”
Roughly eight years after reading that article in The New York Times, I was in his lab. It was amazing. It was like Willy Wonka’s robot factory. Then, two years later, I started my own lab as a professor at the University of Wyoming, and now I get emails almost weekly, much like the one I once sent, from people who want to get into this field and don’t know how.
What is the focus of your lab at the University of Wyoming?
It’s actually quite similar to what we focus on here at Uber AI Labs. We’re trying to advance the cutting edge of machine learning, artificial intelligence, and robotics, and we do that by trying to identify where AI is (1) currently weakest and (2) where we think we can make progress with sustained effort. For example, we have worked on improving reinforcement learning to enhance the ability of AI agents to continuously learn a variety of skills so they’re not just one-trick ponies. We also study how to use deep learning to help biologists better understand and protect animals in natural ecosystems. Additionally, my colleagues and I at Wyoming have done a lot of work on what we call “AI neuroscience”, where we study how much neural networks understand about the world. That work was predominantly with Anh Nguyen and Jason Yosinski (now a researcher with Uber AI Labs), with multiple contributions from Alexey Dosovitskiy and Yoshua Bengio. My lab further studies how to create open-ended processes that are endlessly curious, creative, and innovative. Other topics include open biological questions that also enable us to create better AI, such as why and when natural and artificial networks become regular, modular, and hierarchical.
What brought you to Uber?
Uber acquired Geometric Intelligence, an AI startup I was working at, to create Uber AI Labs. The draw for me to Geometric Intelligence was the people and the projects. Some of the researchers that were already at Geometric Intelligence were friends and colleagues from across academia with whom I wanted to collaborate. Others were world leaders in fields I knew less about. I loved how heterogeneous our team was and is. The ability to learn a variety of different AI techniques from some of the best people in the world and become friends with them was wonderful. Additionally, the actual technology that we were working on at Geometric Intelligence is a subject that’s very close to my heart, so I was excited to work on it, especially with such a world-class team.
What drew me to Uber was the same as what brought me to Geometric Intelligence: the people and projects. Uber is a fantastic, interesting company with tons of different very challenging machine learning problems. It’s very dynamic and it’s innovative and it’s disruptive and it’s exciting, so what’s not to like?
I’ve also been extremely impressed with the people I’ve met throughout Uber. I am blown away by how intelligent and hardworking and passionate and nice everyone is. There are just a huge number of people here working on really technically challenging, fascinating problems that have the opportunity to make the world a better place: to take pollution out of the air, take cars off the streets, give people time back in their lives by eliminating traffic, or get people where they need to be more safely and reliably. In short, I love the problems we get to work on, and I love the people that I get to work with.
What are some of the most interesting problems that your team is tackling right now?
My team focuses on improving deep reinforcement learning. I also collaborate daily with a team led by Ken Stanley, so when I talk about my Uber work I am referring to work with people from either or both teams. Joel Lehman is also a wonderful collaborator on all our work.
One major area we focus on is more intelligent exploration, meaning agents that are better at exploring their world to discover the optimal way to solve a problem. A major breakthrough was our recent Go-Explore algorithm, which made substantial headway on a longstanding challenge in reinforcement learning that many of the top industrial and academic labs had been working on for years. We’re now improving that algorithm in a variety of ways to show it can work in a broad range of conditions and that it can solve really hard, previously unsolvable real-world problems.
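As a rough illustration of the idea behind Go-Explore, here is a toy sketch. The environment interface and cell mapping below are invented for this post (the actual algorithm restores simulator state and uses carefully designed cell representations); the core loop is the same: archive interesting states, return to one, and explore from there.

```python
import random

random.seed(1)

def go_explore(reset, step, cell_of, iterations=500, explore_len=10):
    """Toy sketch of the Go-Explore loop: archive, return, explore."""
    state = reset()
    archive = {cell_of(state): state}  # cell -> a state that reaches it
    for _ in range(iterations):
        # "Go": return to a previously archived state. (Here we pick one
        # at random; the real algorithm prioritizes promising cells.)
        state = random.choice(list(archive.values()))
        # "Explore": act randomly from that state for a few steps,
        # archiving every newly discovered cell.
        for _ in range(explore_len):
            state = step(state, random.choice([-1, +1]))
            archive.setdefault(cell_of(state), state)
    return archive

# A toy "corridor" environment: states are positions 0, 1, 2, ... and an
# action moves one step left or right (clamped at 0).
archive = go_explore(
    reset=lambda: 0,
    step=lambda s, a: max(0, s + a),
    cell_of=lambda s: s,
)
```

Even with no reward signal at all, the archive steadily fills with positions ever farther from the start; that kind of systematic, memory-backed exploration is what lets this family of methods make progress on sparse-reward problems that defeat naive random exploration.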
Another area of focus is open-ended algorithms like POET that continue to learn and innovate forever, including inventing new challenges to solve along the way. That’s a different paradigm for machine learning. Additionally, we have done a lot of work studying neuroevolution, wherein deep neural networks are trained with evolutionary algorithms, including finding that they are competitive with popular deep reinforcement learning algorithms, but can run faster. Work led by Thomas Miconi has also introduced gradient-based algorithms that are different from the dominant stochastic gradient descent (aka backpropagation) method, and that are more biologically inspired. We are also working on making deep learning more efficient at learning.
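For a flavor of what training a neural network with an evolutionary algorithm means, here is a minimal, hypothetical sketch: a simple elitist mutation loop evolves the weights of a tiny hand-rolled network to fit XOR. The network size, mutation scale, and loop structure are illustrative choices, nowhere near the scale of the actual research.

```python
import math
import random

random.seed(0)

# The classic XOR task: not linearly separable, so the network must use
# its hidden units to solve it.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A tiny 2-2-1 network: two tanh hidden units, one sigmoid output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    out = w[6] * h1 + w[7] * h2 + w[8]
    return 1.0 / (1.0 + math.exp(-out))

def fitness(w):
    # Negative squared error over the XOR table (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=300, pop=20, sigma=0.5):
    # Elitist (1 + pop) evolution: mutate the champion's 9 weights with
    # Gaussian noise each generation and keep the best individual found.
    best = [random.gauss(0, 1) for _ in range(9)]
    for _ in range(generations):
        mutants = [[g + random.gauss(0, sigma) for g in best] for _ in range(pop)]
        best = max(mutants + [best], key=fitness)
    return best
```

Notice there is no gradient anywhere: selection on whole-network fitness does all the work, which is what makes the approach so easy to parallelize across a population.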
In what ways is Uber AI working on projects that are directly applicable to Uber?
There’s a tremendous amount of effort within Uber AI to help solve many of Uber’s very challenging machine learning problems, either by applying known machine learning techniques or inventing new ones.
Machine learning is at the heart of almost every aspect of Uber. The number of problems at Uber is so diverse and the complexity of those problems is so high that we need a powerful and diverse set of AI techniques in the toolbox to tackle these problems. It helps that we have a heterogeneous team of experts in a variety of different areas of machine learning. That allows us to make good use of existing tools, invent new ones, and combine these tools in novel ways.
This has been quite a banner year for you and your research. How does it feel to be recognized for your work, particularly after a long span where the community of people interested in your research was much smaller than it is now?
I should say first and foremost that all of these awards, even the ones for me as an individual, are team awards. Behind every award is a team of collaborators. There is also a long parade of teachers, advisors, and supportive friends and family members. For that reason, it has been wonderful to receive these awards because they recognize this large team of people who have put in so much time over the years. For example, both the PECASE and the Outstanding Publication of the Decade award are very much joint with my wonderful collaborator Jean-Baptiste Mouret and my postdoc advisor Hod Lipson.
To answer your question directly: It feels really rewarding, honestly. As a scientist, you don’t do the work because you want awards. You just do it because you’re curious and you’re passionate, and you can’t not do it because it’s so interesting to you. The life of a researcher is often very solitary. You work with a small team of people, often late into the night and on weekends. While other people are relaxing, or partying, or watching movies on a plane, you’re frantically working because you’re passionate about what you’re doing. Then every once in a while, a paper gets accepted to a prestigious venue or one of these awards comes along and you stop and take a moment and say, “Oh. Right. Other people find this interesting and valuable and useful too. That’s really nice.” But then you go back to the research because you find it so thoroughly fascinating.
It’s especially nice to receive these awards that span longer timescales, such as the Outstanding Publication of the Decade award or the PECASE award, which is given in recognition of work done throughout your career. As scientists, we frequently have to make very hard decisions about where to invest our time and what’s worthwhile and what is not. Oftentimes, especially a decade ago when people were not interested in neural networks, neuroevolution, or open-ended algorithms, it was very risky to pursue these things. Receiving these awards is a nice validation that you were actually pursuing something that is intrinsically scientifically worthwhile, and gives you confidence that your instincts can be trusted about when to go against the grain and work on unpopular ideas that you think are really important, interesting, and have a good chance of catalyzing further scientific progress.
What excites you most about the future of artificial intelligence?
I think what excites me most is that we’re starting to have the techniques, tools, and computation to pursue extremely advanced artificial intelligence. The longstanding dreams of the field, going back to its founding figure, Alan Turing, are to create intelligent machines that can think as well as or better than humans. That seemed confined to the realm of science fiction for decades. Only recently are people seriously talking about human-level AI as a possibility that might be achievable in the coming decades.
I recently put out a position paper that outlined an alternate path to producing artificial intelligence. I’m truly excited about the prospects for that paradigm. There’s a longstanding trend in machine learning, which is that hand-designed systems ultimately give way to learned systems once you have sufficient compute. I think that we’re increasingly going to see that lesson applied to the creation of artificial intelligence itself.
Right now, the majority of the field is engaged in what I call the manual path to AI. In the first phase, which we are in now, everyone is manually creating different building blocks of intelligence. The assumption is that at some point in the future our community will finish discovering all the necessary building blocks and then will take on the Herculean task of putting all of these building blocks together into an extremely complex thinking machine. That might work, and some part of our community should pursue that path. However, I think a faster path that is more likely to be successful is to rely on learning and computation: the idea is to create an algorithm that itself designs all the building blocks and figures out how to put them together, which I call an AI-generating algorithm. Such an algorithm starts out not containing much intelligence at all and bootstraps itself up in complexity to ultimately produce extremely powerful general AI. That’s what happened on Earth. The simple Darwinian algorithm coupled with a planet-sized computer ultimately produced the human brain. I think that it’s really interesting and exciting to think about how we can create algorithms that mimic what happened on Earth in that way. Of course, we also have to figure out how to make them work so they do not require a planet-sized computer.
Outside of your research work, what are you interested in? What drives you?
I love thinking of creative ways to almost kill myself. That includes almost all adventure sports, especially surfing, kitesurfing, rock climbing, skiing, kayaking, climbing mountains, ice climbing, and mountain biking. Almost every adventure sport is either one I already love or one on my bucket list to learn. I enjoy hockey and ultimate frisbee too.
I also love literature, including Borges, Kundera, Calvino, Dostoyevsky, DeLillo, Marquez, Card, Tolkien, Tolstoy, Carroll, and Stephenson. I am additionally passionate about traveling, and have spent over two years backpacking around the world, visiting over 55 countries on six continents. Nowadays I have two young children and it is really fun and rewarding to teach them about the world. I love reading to them, teaching them how to program robots, assembling LEGOs, and doing all manner of science experiments with them. We also go rock climbing and hiking, and I look forward to getting them hooked on the rest of the adventure sports I love.
If you are excited about the research we do, consider applying for a role with Uber AI.