How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over

They’re funding a new organization, OpenAI, to pursue the most advanced forms of artificial intelligence — and give the results to the public.
Photo illustration by Backchannel

As if the field of AI weren’t competitive enough — with giants like Google, Apple, Facebook, Microsoft, and even car companies like Toyota scrambling to hire researchers — there’s now a new entry, with a twist. It’s a non-profit venture called OpenAI, announced today, that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence won’t become the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman, Peter Thiel, and Jessica Livingston, as well as Amazon Web Services. They have collectively pledged more than a billion dollars, to be paid out over a long period. The co-chairs are Musk and Sam Altman, the CEO of Y Combinator, whose research group is also a funder. (As is Altman himself.)

Musk, a well-known critic of AI, isn’t a surprise. But Y Combinator? Yep. That’s the tech accelerator that started 10 years ago as a summer project funding six startup companies, paying founders “ramen wages” and giving them gourmet advice so they could quickly ramp up their businesses. Since then, YC has helped launch almost 1,000 companies, including Dropbox, Airbnb, and Stripe, and has recently started a research division. For the past two years it has been led by Altman, whose own company, Loopt, was in YC’s initial class of 2005 and sold in 2012 for $43.4 million. Though YC and Altman are funders, and Altman is co-chair, OpenAI is a separate, independent venture.

Essentially, OpenAI is a research lab meant to counteract large corporations that may gain too much power by owning superintelligent systems devoted to profit, as well as governments that may use AI to gain power and even oppress their citizenry. It may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman (who will be OpenAI’s CTO) and world-class researcher Ilya Sutskever, formerly of Google and one of the famed group of young scientists who studied under neural net pioneer Geoff Hinton in Toronto. He’ll be OpenAI’s research director. The rest of the lineup includes top young talent whose résumés include stints at major academic groups, Facebook AI, and DeepMind, the AI company Google snapped up in 2014. There is also a stellar board of advisors that includes Alan Kay, a pioneering computer scientist.

OpenAI’s leaders spoke to me about the project and its aspirations. The interviews were conducted in two parts: first with Altman alone, then a second session with Altman, Musk, and Brockman. I combined the interviews and edited them for space and clarity.

How did this come about?

Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time, and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we are creating OpenAI. The organization is trying to develop a human-positive AI. And because it’s a non-profit, it will be freely owned by the world.

Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to ensure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501(c)(3), a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.

And then philosophically there’s an important element here: we want AI to be widespread. There are two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.

Human will?

Musk: As in an AI extension of yourself, such that each person is essentially symbiotic with AI, as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email, you’ve got social media, and you’ve got the apps on your phone — they effectively make you superhuman, and you don’t think of them as being other; you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.

Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but on what we believe is the actual best thing for the future of humanity.

Doesn’t Google share its developments with the public, like it just did with machine learning?

Altman: They certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question how much Google will share.

Couldn’t your stuff in OpenAI surpass human intelligence?

Altman: I expect that it will, but it will just be open source and usable by everyone, instead of usable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it, you don’t have to share that. But any of the work that we do will be available to everyone.

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: There are a few different thoughts about this. Just as humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

Will you have oversight over what comes out of OpenAI?

Altman: We do want to build out oversight for it over time. It’ll start just with Elon and me. We’re still a long, long way from actually developing real AI. But I think we’ll have plenty of time to build out an oversight function.

Musk: I do intend to spend time with the team, basically spending an afternoon in the office every week or two just getting updates, providing any feedback that I have and just getting a much deeper understanding of where things are in AI and whether we are close to something dangerous or not. I’m going to be super conscious personally of safety. This is something that I am quite concerned about. And if we do see something that we think is potentially a safety risk, we will want to make that public.

What’s an example of bad AI?

Altman: Well, there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term. One thing that I do think is going to be a challenge — although not what I consider bad AI — is just the massive automation and job elimination that’s going to happen. Another example of bad AI that people talk about is AI-like programs that are far better than any human at hacking into computers. That’s already happening today.

Are you starting with a system that’s built already?

Altman: No. This is going to start like any research lab, and it’s going to look like a research lab for a long time. No one knows how to build this yet. We have eight researchers starting on day one, and a few more will be joining over the next few months. For now they are going to use the YC office space, and as they grow they’ll move out on their own. They will be playing with ideas and writing software to see if they can advance the current state of the art of AI.

Will outsiders contribute?

Altman: Absolutely. One of the advantages of doing this as a totally open program is that the lab can collaborate with anyone, because it can share information freely. It’s very hard to collaborate with employees at Google, because they have a bunch of confidentiality provisions.

Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work? [UPDATE: Altman now tells me the office will be based in San Francisco.]

Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and SpaceX can share.

What would be an example of the kind of data that might be shared?

Altman: So many things. All of the Reddit data would be a very useful training set, for example. You can imagine all of the Tesla self-driving car video information being very valuable. Huge volumes of data are really important. If you think about how humans get smarter: you read a book, you get smarter; I read a book, I get smarter. But we don’t both get smarter from the book the other person read. Using Teslas as an example, though, if one single Tesla learns something about a new condition, every Tesla instantly gets the benefit of that intelligence.

Musk: In general we don’t have a ton of specific plans, because this is really just the incipient stage of the company; it’s kind of the embryonic stage. But certainly Tesla will have an enormous amount of real-world data, because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real-world data than any other company in the world.
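To make that fleet-learning contrast concrete, here is a minimal, purely illustrative Python sketch of the idea Altman and Musk describe: one car’s experience updates a model that every other car shares, so the whole fleet benefits instantly. The class and method names are hypothetical; this is a sketch of the concept, not any real Tesla or OpenAI API.

```python
class SharedDrivingModel:
    """A single model shared by the whole fleet (hypothetical)."""

    def __init__(self):
        self.known_conditions = {}  # condition -> learned response

    def learn(self, condition, response):
        # One car learns how to handle a new condition...
        self.known_conditions[condition] = response

    def respond_to(self, condition):
        # ...and every car can draw on that knowledge right away.
        return self.known_conditions.get(condition, "fall back to driver")


class Car:
    def __init__(self, model):
        # Every car points at the same shared model, unlike two humans
        # who each have to read the book themselves.
        self.model = model

    def encounter(self, condition, learned_response=None):
        if learned_response is not None:
            self.model.learn(condition, learned_response)
        return self.model.respond_to(condition)


fleet_model = SharedDrivingModel()
car_a, car_b = Car(fleet_model), Car(fleet_model)

# Car A learns something about a new condition...
car_a.encounter("ice on bridge", learned_response="slow down early")

# ...and Car B instantly gets the benefit of that intelligence.
print(car_b.encounter("ice on bridge"))  # -> "slow down early"
```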

Tesla Motors CEO and Product Architect Elon Musk and Y Combinator President Sam Altman onstage during the Vanity Fair New Establishment Summit in San Francisco, California.

Photo by Mike Windle/Getty Images for Vanity Fair

AI needs a lot of computation. What will be your infrastructure?

Altman: We are partnering with Amazon Web Services. They are donating a huge amount of infrastructure to the effort.

And there is a billion dollars committed to this?

Musk: I think it’s fair to say that the commitment actually is some number in excess of a billion. We don’t want to give an exact breakdown, but there are significant contributions from all the people mentioned in the blog piece.

Over what period of time?

Altman: However long it takes to build. We’ll be as frugal as we can, but this is probably a multi-decade project that requires a lot of people and a lot of hardware.

And you don’t have to make money?

Musk: Correct. This is not a for-profit investment. It is possible that it could generate revenue in the future, in the same way that the Stanford Research Institute is a 501(c)(3) that generates revenue. So there could be revenue in the future, but there wouldn’t be profits that would just enrich shareholders; there wouldn’t be a share price or anything. We think that’s probably good.

Elon, you earlier invested in the AI company DeepMind, for what seems to me to be the same reasons — to make sure AI has oversight. Then Google bought the company. Is this a second try at that?

Musk: I should say that I’m not really an investor in any normal sense of the word. I don’t seek to make investments for financial return. I put money into the companies that I help create, and I might invest to help a friend, or because there’s some cause that I believe in or something I’m concerned about. I am really not diversified beyond my own company in any material sense of the word. But yeah, my sort of “investment,” in quotes, in DeepMind was just to get a better understanding of AI and to keep an eye on it, if you will.

You will now be competing for the best scientists, who might otherwise go to DeepMind or Facebook or Microsoft?

Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they’re working on, which you don’t have to the same degree at any of the industrial labs. We were able to attract such a high-quality initial team that other people now want to join just to work with that team. And then finally, I think our mission and our vision and our structure really appeal to people.

How many researchers will you eventually hire? Hundreds?

Altman: Maybe.

I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers?

Altman: I wish I could count the hours that I have spent with Elon debating this topic, and with others as well, and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world, and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?

Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

Elon, you are the CEO of two companies and chair of a third. One wouldn’t think you have a lot of spare time to devote to a new project.

Musk: Yeah, that’s true. But AI safety has been preying on my mind for quite some time, so I think I’ll take the trade-off in peace of mind.