full transcript

From The TED Interview podcast: The race to build AI that benefits humanity, with Sam Altman


Hello there, this is Chris Anderson, and I am hugely, hugely, tremendously excited to welcome you to a new series of the TED Interview. Now, then, this season we're trying something new. We're organizing the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with extraordinarily ugly things in the last few years: political division, a racial reckoning, technology run amok, not to mention a global pandemic and impending climate catastrophe. What on earth are we thinking in this context? Optimism just seems so naive and unwanted, almost annoying. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe, I truly believe, there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well light the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season. So let's see if they can persuade us.

Now, then, the place I want to start is with AI, artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today it's painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator. And in 2015, he and a team launched a company called OpenAI, dedicated to one noble purpose: to develop AI so that it benefits humanity as a whole. You may have heard, by the way, a lot of recent buzz around an AI technology called GPT-3 that was developed by OpenAI — proof of the quality of the amazing team of researchers and developers they have working there. You'll be hearing a lot about GPT-3 in the conversation ahead. But sticking to this lofty mission of developing AI for humanity, and finding the resources to realize it, hasn't been simple. OpenAI is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this.

So, Sam Altman, welcome.

Thank you for having me.

So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future?

I think that the combination of scientific and technological progress and better societal decision making, better societal governance, is going to solve, in the next couple of decades, all of our current most pressing problems. There will be new ones. But I think we are going to get very safe, very inexpensive, carbon-free nuclear energy to work. And I think we're going to talk about the time the climate disaster looked so bad and how lucky we are that we got saved by science and technology. We've already now seen this with the rapidity with which we were able to get vaccines deployed. We are going to find that we are able to cure, or at least treat, a significant percentage of human disease, including, I think, actually making progress in helping people live decades longer, with longer health spans.
And I think in the next couple of decades, that will look pretty clear. I think we will build systems, with AI and otherwise, that make access to an incredibly high quality education more possible than ever before. I think if we look forward one hundred years, fifty years even, the quality of life available to anyone then will be much better than the quality of life available in the very best case to anyone today, to any single person today. So, yeah, I'm super optimistic. I think it's always easy to doom-scroll and think about how bad the bad things are, but the good things are really good and getting much better.

Is it your sincere belief that artificial intelligence can actually make that future better?

Certainly. Look, with any technology, I don't think it will all be better. I think there are always positive and negative use cases of anything new, and it's our job to maximize the positive ones and minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now. Now that we have the first general-purpose AI built out in the world and available via things like our API, I think we are seeing evidence of just the breadth of services that we will be able to offer as this sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels now to us.

Hmm, yeah, you mentioned your API. I guess that stands for what, application programming interface? It's the technology that allows complex technology to be accessible to others. So give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting.

So I think that the things that we're seeing now are very much a glimpse of the future. We released GPT-3, which is a general-purpose natural-language text model, in the summer of twenty twenty. You know, there's hundreds of applications that are now using it in production, and that's ramping up all of the time. There are things where people use GPT-3 to really understand the intent behind a search query and deliver results, and sort of understand not only intent, but all of the data, and deliver the thing that you want. So you can sort of describe a fuzzy thing and it'll understand documents. It can understand, you know, short documents, not full books yet, but bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create sort of games or interactive stories, or letting people develop characters or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a tailored application for each individual company. There's the beginning of tutors that can sort of teach people about different concepts and take on different personas. And we could go on for a long time. But I think anything that you can imagine that you do today via computer, that you would like to really understand and get to know you, and not only that, but understand all of the data and knowledge in the world and help you have the best experience — that is possible, and that will happen.
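To make that concrete, here is a minimal sketch of what calling a general-purpose text-completion API of the kind Sam describes might look like. The endpoint URL, model name, and response fields below are illustrative assumptions, not OpenAI's documented interface:

```python
import os
import requests

# Hypothetical text-completion endpoint; the URL, model name, and
# JSON fields are illustrative assumptions, not a documented API.
API_URL = "https://api.example.com/v1/completions"
API_KEY = os.environ["EXAMPLE_API_KEY"]

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a natural-language prompt and return the model's continuation."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "general-purpose-lm", "prompt": prompt,
              "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

# One prompt interface, many services: search-intent understanding,
# tutoring, drafting a tailored job application, and so on.
print(complete("Rewrite this search query to capture its intent: cheap flights nyc dec"))
```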
So what gets opened up? What new adjacent possible state is there as a result of these powers? Take this question from the point of view of someone who's starting out on a career, for example. They're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up?

In a world where you can talk to a computer and get, back immediately and for almost no money, the output that would normally require hiring the world's experts — I would say, think about what's possible there. So that could be, as you said: what can normally only the best programmer in the world, or a really great programmer, do for me, and can I now instead just ask in English and have that program written? So all these people that want to develop an app, and they have an idea but they don't know how to program — now they can have it. You know, what does the service look like where anyone on earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because this has the total medical knowledge and reasoning ability that the sum of humanity has ever produced? When you want to learn something, you have sort of a tutor that understands your exact style, how you best learn, everything you know, and custom-teaches you whatever concept you want to learn. Someday, you can imagine that you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and for any meeting maximally, perfectly prepares you and has all of the information that you need, in all the context of your entire career, right there for you. We could go on for a long time, but I think these will just be powerful systems.

So it's really fun playing around with GPT-3. One compelling example, for someone who's more text-based, is to try Googling the Guardian essay that was written entirely by different GPT-3 queries and stitched together. It's an essay on why artificial intelligence isn't a threat to humanity. And it's impressive. It's very compelling. I actually tried inputting, on one of the GPT-3 online interfaces — I asked the question: what is interesting about Sam Altman? Oh no. Here's what it came back with. It was rather philosophical, actually. It came back with: I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as interestingness except in the mind of a human or other sentient being, and that, to my knowledge, this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found.

Well — so would you agree that somewhere between profound and gibberish is about where the state of play is? I mean, is that where we are today?

I think somewhere between profound and gibberish is the right way to think about the current capabilities of GPT-3. There was definitely a bubble of hype about GPT-3 last summer. But the thing about bubbles is, the reason that smart people fall for them is that there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got, and still are, overexcited about GPT-3 today, but they still probably underestimate the potential of where these models will go in the future.
And so maybe there's this short-term overhype and long-term underhype for the entire field, for text models, for whatever you'd like, that's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were well-formed sentences, and there were a couple of ideas in there where I was like, oh, actually, maybe that's right. And I think if artificial intelligence, even in its current very larval state, can make us confront new things and sort of inspire new ideas, that's already pretty impressive.

Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words and it seems like someone is trying to mean something. Obviously, I don't think you believe that whatever you've built there is a sort of thinking, sentient thing that's going, oh, I must answer this question. So how would you describe what's going on? You've got something that has read the entire Internet, essentially, all of Wikipedia, et cetera.

We've built something that's read a small fraction, a random sampling, of the Internet. We will eventually train something that has read as much of the Internet, or more of it, than we have right now. But we have a very long way to go. I mean, we're still, I think, relative to what we will have, operating at quite small scale, with quite small AIs. But what is happening is: there is a model that is ingesting lots of text, and it is trying to predict the next word. We use transformers, which are a particular architecture of AI model. They take in a context of a lot of words, let's say a thousand or something like that, and they try to predict the word that comes next in the sequence. There's a lot of other things that happen, but fundamentally that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions.
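For readers who want the mechanics spelled out: below is a toy sketch of that next-word-prediction objective, assuming PyTorch. The model sizes, data, and names are illustrative assumptions; real systems differ enormously in scale and detail, and this is not OpenAI's training code.

```python
import torch
import torch.nn as nn

# Toy next-word prediction: a transformer reads a context of tokens and is
# trained so that each position predicts the token that comes next.
# Vocabulary size, dimensions, and the random "corpus" are stand-ins.
VOCAB, CTX, DIM = 1000, 128, 64

embed = nn.Embedding(VOCAB, DIM)
block = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
head = nn.Linear(DIM, VOCAB)  # scores over every possible next token
params = list(embed.parameters()) + list(block.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (8, CTX + 1))   # stand-in for real text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

# Causal mask: position t may only attend to positions <= t.
mask = nn.Transformer.generate_square_subsequent_mask(CTX)

logits = head(block(embed(inputs), src_mask=mask))
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()
opt.step()
```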
What's confusing about this is that there are so many words on the Internet which are foolish, as well as the words that are wise. How do you build a model that can distinguish between those two? And this is prompted, actually, by another example that I typed in. I asked, you know, what is a powerful idea — I'm very interested in ideas, so that was my question. And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish. But then there was one that it came back with: the idea that the human race has, quote, evolved, unquote, is false; evolution or adaptation within a species was abandoned by biology and genetics long ago. Wait a sec — that's news to me. What have you been reading? I presume this has been pulled out of some recesses of the Internet. But how is it possible, even in theory, to imagine how a model can gravitate towards truth and wisdom, as opposed to just majority views? How do you avoid something taking us further into the sort of maze of errors and bad thinking that has already been a worrying feature of the last few years?

It's a fantastic question, and I think it is the most interesting area of research that we need to pursue now. I think at this point, the question of whether we can build really powerful general-purpose AI systems — I won't say it's in the rearview mirror; we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are: what should we build, and how, and why, and what data should we train on, and how do we build systems not just that can do these phenomenally impressive things, but that we can ensure do the things that we want, and that understand the concepts of truth and falsehood and, you know, alignment with human values and misalignment with human values. One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. We showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some of it bad, and then, with a really quite small amount of feedback from human judgment — hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior — we can feed that information from the human judges back into the model and teach the model: behave more like this, and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too. I think curating data sets, so there's just less sort of bad data to train on, will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're unsure, when they don't understand. But I think that as a result of simply scaling these models up, building a better — I hate to use the word cognition because it sounds so anthropomorphic, but let's say a better ability to reason — into the models, to think, to challenge, to try to understand, and combining that with this idea of aligning to human values via this technique we developed, that's going to go a very long way. Now, there's another question, which you sort of just kicked the ball down the field to, which is: how do we as a society decide to which set of human values we align these powerful systems?

Yeah, indeed. So if I understand rightly what you're saying, it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, in some way a human can say: no, that was off, don't do that — whatever algorithm or process led you to that, undo it. And the system is then incredibly powerful at avoiding that same kind of mistake in future, because it sort of internalizes the instruction. Correct?

Yeah. And eventually, and not much longer from now, I believe we'll be able to not only say that was good or that was bad, but say: that was bad for this reason, and also: tell me how you got to that answer, so I can make sure I understand.
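A toy sketch of that feedback idea, again assuming PyTorch: humans compare pairs of model outputs, and a small reward model is trained to score the preferred behavior higher. Full reinforcement learning from human feedback then uses such a scorer to optimize the language model itself; everything named below is an illustrative assumption, not OpenAI's published code.

```python
import torch
import torch.nn as nn

# Illustrative reward model for learning from human feedback: humans label
# pairs of model outputs ("this one is better"), and we train a scorer to
# agree with those judgments. Sizes and data here are stand-ins.
DIM = 64

reward_model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of two candidate completions per prompt,
# where humans judged the first of each pair to be the better behavior.
preferred = torch.randn(32, DIM)
rejected = torch.randn(32, DIM)

for _ in range(100):
    # Pairwise preference loss: push the preferred output's score above
    # the rejected one's, so the scorer internalizes the human judgments.
    loss = -torch.log(torch.sigmoid(
        reward_model(preferred) - reward_model(rejected))).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained scorer can then steer generation — "behave more like this,
# less like that" — by ranking candidates or serving as an RL reward signal.
```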
But at the end of the day, someone needs to decide which are the wise humans who are looking at the results. And it makes a big difference: someone who grew up with an intelligent-design worldview could look at a given output and go, that's a brilliant outcome, well done, gold star — and someone else would say something has gone awfully wrong here. So how do you avoid — and this is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now, in terms of the pushback they're getting on the output of social media and so forth — how do you assemble that pool of experts who stand for the human values that we actually want?

I mean, we talk about this all the time. I don't think this is solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input into that, and how we build these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. And there will be people who have very different value systems. Some of them are just fundamentally incompatible. No one gets to use AI to, like, exploit other people, for example — hopefully we can all agree on that. But do you want the AI to, you know, support you in your belief in intelligent design? Do I think OpenAI should say it can't, even though I disagree with that as, like, a scientific conclusion? No, I wouldn't take that stance. I think the thing to remember about all of this is that this technology is still quite extraordinarily weak. It still has such big problems, and it's still so unreliable that for most use cases it's still unsuitable. But when we think about a system that is, like, a thousand times more powerful and, let's say, a million times more reliable — it just doesn't say gibberish very often, it doesn't totally lose the plot and get distracted — a system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people saying: you can never use it for this thing that most of the world wants to use it for, because it doesn't match our personal beliefs.

Talk a bit more about some of the other uses of it, because one of the things that's most surprising is that it's not just about sort of text responses. It can take generalized human instructions and build things. For example, you can say to it: write a Python program that is designed to put a flashing cursor in one corner of the screen and the Google logo in the other corner. And it can go away and do something like that, shockingly, quite well?

Effectively, yeah, it can.

That's amazing. I mean, this is amazing to me. That opens the door to an entirely new way to think about programmers for the future — that you could have people who can program just in human natural language, potentially, and gain rapid efficiency, and the AI does the engineering.

We're not that far away from that world. We're not that far away from the world where you will write a spec in English, and for a simple enough program, the AI will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code.
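As a small illustration of that English-spec-to-code pattern, here is what a prompt might look like, reusing the hypothetical complete() helper from the API sketch earlier; the prompt and its continuation are illustrative, not a recorded GPT-3 session.

```python
# Illustrative English-to-code prompt, reusing the hypothetical complete()
# helper sketched earlier. The model continues the code from the spec;
# generation quality varies a lot in practice.
spec = (
    "# Python 3\n"
    "# Write a function that returns the n-th Fibonacci number.\n"
    "def fib(n):"
)

generated = complete(spec, max_tokens=80)
print(spec + generated)
```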
I think this is important to remember: we trained it on language from the Internet, and, you know, very rarely, language on the Internet also includes some code snippets. And that was enough. So if we really try to train a model on code itself, and that's where we decide to put the horsepower of the model, just imagine what will be possible. It will be quite impressive. But I think what you're pointing to there is that models like GPT-3, to some degree or other — and it's very hard to know exactly how much — understand the underlying concepts of what's going on. They're not just regurgitating things they found on a website; they can really apply them and say, oh yeah, I kind of know about this word and this idea and code, and this is probably what you're trying to do. And it won't get it right always, but sometimes it will just generate a brand-new program for something that no one has ever asked before, and it will work. That's pretty cool. And data is data, so it can do that from English to code; it can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French — it learned them, even though we never said, this is what English is and this is what French is and this is what it means to translate. It can still do it.

Wow. I mean, for creative people, is there a world coming where sort of the palette of possibility that they can be exposed to just explodes? I mean, if you're a musician, is there a near future where you can say to your AI: OK, I'm going to bed now, but in the morning I'd love you to present me with a thousand tuba jingles, with words attached, that have a sort of meme factor to them — and you come down in the morning and the computer shows you the stuff, and one of them, you go, wow, that is it, that is a top-10 hit, and you build a song from it? Or will the AI actually be the value add?

We released something last year called Jukebox, which is very near what you described, where you can say, I want music generated for me in this style or this kind of stuff, and it can come up with the words as well. And it's pretty cool. I really enjoy listening to music that it creates. And it can do full songs, two bars of a jingle, whatever you'd like. One of my very favorite artists reached out to OpenAI after we released this and said that he wanted to talk. And I was like, well, total fanboy here, I'd love to join that call. And I was so nervous that he was going to say: this is terrible, this is a really sad thing for human creativity, you know, why are you doing this? And he was so excited. He was like: this has been so inspiring, I want to do a new album with this. It's giving me all these new ideas. It's making me much better at my job. I'm going to make better music because of this tool. And that was awesome. And I hope that's how it all continues to go. And I think it is going to lead to this. We see a similar thing now with DALL-E, where graphic designers sometimes tell us that they see this new set of possibilities, because there's new creative inspiration, and their cycle time — the amount of time it takes to just come up with an idea and be able to look at it, and then decide whether to go down that path or head in a different direction — goes down so much. And so I think it's going to just be this incredible creative explosion for humans.

And how far away are we, then, before an AI comes up with a genuinely powerful new idea — an idea that solves a problem that humans have been wrestling with?
It doesn't have to be quite on the scale of: OK, we've got a virus coming, please describe to us what a rational national response should look like — but some kind of genuinely innovative idea or solution. One internal question we've asked ourselves is: when will the first genuinely interesting, purely AI-written TED Talk show up?

I think that's a great milestone. I will say, it's always hard to guess timelines — I'm sure I'll be wrong on this — but I would guess the first genuinely interesting TED Talk thought of, written, and delivered by an AI is within kind of the seven-ish-year time frame. Maybe a little bit less.

And it feels like — I mean, just reading that Guardian essay, which was a composite of several different GPT-3 responses to questions about, you know, the threats of robotics or whatever — if you throw a human editor into the mix, you could probably imagine something much sooner.

Indeed. Like, tomorrow.

Yeah.

So the hybrid version, where it's basically a tool-assisted TED Talk, but one that is better than any TED Talk a human could generate in one hundred hours or whatever — if you can sort of combine human discretion with AI horsepower — I suspect that's like our next-year, two-years-from-now kind of thing, where it's just really quite good.

That's really interesting. How do you view the impact of AI on jobs? Obviously the familiar story is that every white-collar job is now up for destruction. What's your view there?

You know, I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was: every blue-collar job is up for destruction. Maybe last year it was: every creative job is up for destruction, because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it — I think it's kind of gross — when people working on AI pretend like there's not going to be, or sort of say, oh, don't worry about it, it'll all obviously get better. It doesn't always obviously get better. I think what is true is: every technological revolution produces a change in jobs. We always find new ones, at least so far. It's difficult to predict, from where we're sitting now, what the new ones will be. And this technological revolution is likely to be — again, it's always tempting to say this time it's different, and maybe I'll be totally wrong, but from what I see now — more dramatic, more of a staccato note, than most. And I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own ideas about how to do that. I wouldn't say that I have any reason to believe they're the right ones, but doing nothing, and not really engaging with the magnitude of what's about to happen, I think is not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most. I think previous predictions have mostly been wrong. But I'd like to see us all, as a society, and certainly as a field, engage with what the shifts we want to make to the social contract are, to kind of get through that in a way that is maximally beneficial to everybody.

I mean, in every past revolution, there's always been a space for humans to move to — that is, if you like, moving up the food chain. We've retreated to the things that humans could uniquely do: think better, be more creative and so forth.
I guess the worry about AI is that, in principle — and I believe this — there is no human cognitive feat that won't ultimately be doable, probably better, by artificial general intelligence, simply because of the extra firepower that it can ultimately have, the vast knowledge it brings to the table and so forth. Is that basically right — that there is ultimately no safe sort of space where we can say, oh, but an AI would never be able to do that?

On a very long time horizon, I agree with you. But that's such a long time horizon. I think that, you know, maybe we've merged by that point. Maybe we're all plugged in, and then we're this sort of symbiotic thing. I think there's an interesting example, as we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower but no steering wheel — you know, incredible capabilities but no judgment. And there are these obvious ways in which, today, even a human plus GPT-3 is far better than either on their own.

Many people speak about a world where AI is this sort of external threat. You speak about us, at some point, actually merging with AIs in some way. What do you mean by that?

There are a lot of different versions of what I think is possible there. You know, in some sense, I'd argue the merge has already begun — the human-technology merge. We have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers, and that can go much, much further. Maybe it goes all the way to, like, the Elon Musk vision of Neuralink, and having our brains plugged into computers — sort of literally having a computer on the back of our head — or it goes the other direction and we get uploaded into one. Or maybe it's just that we all have a chatbot that kind of constantly steers us and helps us make better decisions than we could on our own. But in any case, I think the fundamental thing is, it's not the humans versus the AIs competing to be the smartest sentient thing on earth or beyond — it's this idea of being on the same team.

Hmm. I certainly get very excited by the sort of medium-term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of AI. I mean, the one thing that the history of technology has shown again and again is that something this powerful, and with this much benefit, is unstoppable, and you will get rewarded for embracing it the most and the earliest. So talk about what can go wrong with that. Let's move away from just the sort of economic-displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from AI. What would you put as the most worrying of those risks, and how is OpenAI working to minimize them?

I still think all of the really horrifying risks exist. I am more confident, much more confident, than I was five years ago when we started, that there are technical things we can do about
how we build these systems, and the research and the alignment work, that make us much more likely to end up in the kind of really wonderful camp. But, you know, maybe OpenAI falls behind, and maybe somebody else builds AGI who thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or would strike a different trade-off on how fast we should go with this, and would sort of just say, you know, let's push on for the economic benefits. But I think all of what has traditionally been in the realm of sci-fi risks is real, and we should not ignore it. And I still lose sleep over those risks.

And just to update people: what is artificial general intelligence? Right now, we have incredible examples of powerful AI operating in specific areas. AGI is the ability of a computer mind to connect the dots and to make decisions at the same level of breadth that humans have. What's your sort of elevator pitch on AGI — how to identify it and how to think of it?

Yeah. The way that I would say it is that for a while we were in this world of very narrow AI — you know, things that could classify images of cats, or whatever; more advanced stuff than that, but that kind of thing. We are now in the era of general-purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. One thing, like GPT-3, can write essays and translate between languages and write computer code and do very complicated search. It's a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm — some people call it AGI, some people call it other things — but I think it implies that the systems are, to some degree, self-directed, and have some intentionality of their own.

Is it a simple summary to say that the fundamental risk is the potential, with general artificial intelligence, of a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with — so that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power?

Yeah, that is certainly in the risk space — that we build this thing, and at some point, somewhat suddenly, it's much more powerful than we are. We haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, there are lots of reasons to think it will go OK, and lots of reasons to think we won't even get to that scenario. But that is something that I don't think people should brush under the rug as much as they do. It's in the possibility space for sure. And in the possibility subspace of that is one where, like, we didn't actually do as good a job on the alignment work as we thought, and this sort of child of humanity kind of acts in a very different way than we expect. A framework that I find useful is to think about a two-by-two matrix: short timelines to AGI versus long timelines to AGI on one axis, and a slow takeoff versus a fast takeoff on the other. And in the short-timelines, fast-takeoff quadrant — which is not where I think we're going to be, but if we get there — I think there are a lot of scenarios in the direction you are describing that are worrisome,
and that we would want to spend a lot of effort planning for.

I mean, the fact that a computer could start editing its own code and improving itself while we're asleep, and you wake up in the morning and it's gotten smarter — that is the start of something super powerful and potentially scary.

I have tremendous misgivings about letting my system — not one we have today, but one that we might have in not too many more years — start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a great deal of societal discussion. Just because we can do that, should we?

Yes. Because one of the things that's been most shocking to me about the last few years has been just the power of unintended consequences. You don't have to believe that there's some sort of waking up of an alien intelligence that suddenly decides it wants to wreak havoc on humans. That may never happen. What you can have is just incredible power that goes amok. A lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies created these intelligences that were programmed to maximally harvest attention, for example, and that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine, saying: look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences?

I think you raise a great point in general, which is that these systems don't have to wish ill to humanity to cause ill, when you have very powerful systems. Unintended consequences for sure. But another version of that — and I think this applies at the technical level, at the company level, and at the societal level — is that incentives are superpowers. Charlie Munger had this line that incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. I think that applies to the individual models we build and what their reward functions look like. I think it applies to society in a big way, and I think it applies to our corporate structure at OpenAI. You know, we sort of observed that if you have very well-meaning people, but they have this incentive to maximize attention harvesting and profit forever, then through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI with this thing called a capped-profit model, specifically so that we don't have the systemic incentive to just generate maximum value forever with an AGI — that seems, like, obviously quite broken. But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure: to figure out a charter that's going to govern us, and a set of incentives that we believe will let us do our work. We have these three elements that we talk about a lot — research; sort of engineering, development and deployment; and policy and safety — and we put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the sort of negative unintended consequences.

So help me understand this, because I think this is confusing to some people.
So you started OpenAI initially — I think Elon Musk was a co-founder, and there was a group of you — and the argument was: this technology is too powerful to be left developed in secret, and to be left developed purely by corporations, with whatever incentives they may have. We need a nonprofit that will develop and share knowledge openly. First of all, even at that early stage, some people were confused about this. They were saying: if this thing is so dangerous, why on earth would you want to make its secrets even more available? Won't that maybe give the tools to that sort of AI terrorist in his bedroom somewhere?

I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build a superweapon and hand it to a terrorist. That's obviously awful. One of the reasons that we like our API model is that it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to whoever would like to use it, but to put some controls on its usage — and also, if we make a mistake, to be able to pull it back, or change it, or tweak it, or improve it, or whatever. But we do want to put very powerful technology in the hands of people, with appropriate restrictions and guardrails — and this is, and will continue to be, true. I think that is fair. I think that will lead to the best results for society as a whole, and I think it will sort of maximize benefit. But that's very different from shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think — and this is part of the mission — that something the field was doing a lot that we didn't feel good about was sort of saying, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we need a societal conversation about what's going on here and what the impacts are going to be. And so, although we don't always say, like, you know, here's the superweapon, hopefully we do try to say: this is really serious, this is a big deal, this is going to affect all of us, and we need to have a big conversation about what to do with it.

Help me understand the structure a bit better, because you definitely surprised many people when you announced that Microsoft was putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights. So, for example, they are the exclusive licensee of GPT-3. Talk about that structure and how you win. Microsoft presumably haven't invested purely for altruistic purposes; they think they will make money on that billion dollars.

I sure hope they do. I love capitalism. But the thing that I really loved even more about Microsoft as a partner — and I'll talk about the structure and the exclusive license in a minute — is that we went around to people that might fund us, and we said: one of the things here is that we're going to try to make you some money, but, like, AGI going well is more important, and we need you to sign this document that says, if things don't go the way we think and we can't make you money, you just cheerfully walk away from it, and we do the right thing for humanity. And they were like: yes, we are enthusiastic about that; we get that the mission comes first here. So, again, I hope it's a phenomenal investment for them.
But they really pleasantly surprised us on the upside, with how aligned they were with us about how strange the world may get here, and the need for us to have flexibility and put our mission first, even if that means they lose all their money — which I hope they don't, and don't think they will.

So the way it's set up is that if, at some point in the coming year or two years, Microsoft decide that there's some incredible commercial opportunity that they could realize out of the AI that you've built, and you feel, actually, no, that's damaging — you can block it? You can veto it?

Correct. So the full, most powerful versions of GPT-3 and its successors are available via the API, and we intend for that to continue. What Microsoft has is the ability to put that model directly into their own technology, if they want to do that. We don't plan to do that with other people, because then we can't have all those controls we talked about earlier. But they're a close, trusted partner, and they really care about safety, too. Our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained, and the structure of the API lets us continue to increase the safety and fix problems when we find them.

But the structure: so we started out as a nonprofit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be about smarter and smarter algorithms, we just needed bigger and bigger computers as well, and that was going to require a scale of capital that no one — at least certainly not me — could figure out how to raise as a nonprofit. We also needed to be able to compensate the very highly compensated, talented individuals who do this work. But a full for-profit company had the runaway-incentives problem, among other things — also just a problem about sort of fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this kind of hybrid, where we have a nonprofit that governs what we do, and it has a subsidiary LLC that we structured in a way that it can make a fixed amount of profit, so that all of our investors and employees — hopefully, if things go how we'd like; if not, no one gets any money — get to make a one-time great return on their investment, or on the time that they've spent at OpenAI, their equity here. And then, beyond that, all the value flows back to the nonprofit, and we figure out how to share it as fairly as we can with the world. And I think that this structure, and this nonprofit with this very strong charter in place, and everybody who joins signing up for the mission coming first and the fact that the world may get strange — I think that was at least the best idea we could come up with. And it feels, so far, like the incentive system is working, as I sort of watch the way that we and our partners make decisions.

But if I read it right, the cap on the gain that investors can make is 100x. That's a massive cap.

That was for our very first-round investors. It's way, way lower now — like, as we now take on a bit of capital, it's way, way lower.

So your deal with Microsoft isn't: you can only make the first hundred billion dollars, and after that we're giving it to the world?

It's way lower than that.

Have you disclosed it?

I don't know if we have, so I won't accidentally do it now.

All right. OK, so explain a bit more about the charter, and how it is that you
hope to avoid — or, I guess, help contribute to — an AI that is safe for humanity. What do you see as the keys to us avoiding the worst mistakes and really holding on to something that's beneficial for humanity?

My answer there is actually more about technical and societal issues than the charter, so if it's OK for me to answer it from that perspective —

Sure.

OK — I'm happy to talk about the charter too. I think this question of alignment that we talked about a little earlier is paramount. And then I think, to understand that, it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. Intentional would be a bad actor saying: I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. And accidental would be kind of the Nick Bostrom case of making a lot of paper clips and viewing humans as collateral damage. In both cases, but to varying degrees, if we can really, truly, technically solve the alignment problem, and the societal problem of deciding to which set of human values we align, then the systems understand right and wrong, and they understand, probably better than we ever can, the unintended consequences of complex actions in very complex systems. And, you know, we can train a system on: don't harm humanity — and the system can really understand what we mean when we say that. Again, "who is we" and "what does that mean" have some asterisks on them —

Sorry, go ahead.

Well, that's — if they could understand what it means to not harm humanity — there's a lot wrapped up in that sentence. Because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the sort of Facebook and Twitter examples. The engineers building some of those systems would say: we've just designed them around what humans want to do. If someone wants to click on something, we will give them more of that thing. And what could possibly be wrong with that? We're just supporting human choice — ignoring the fact that humans are complicated, flawed animals who are constantly making choices that a more effective version of themselves would agree are not in their long-term interests. So that's one part of it. And then you've got, layered on top of that, the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that nobody would possibly have designed. How do you cut through that? An AI has to make a decision based on a moment, on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system crashing, basically, in some way?

I've heard a lot of behavioral psychologists and other people who have studied this say, in different ways — and I hate to keep picking on Facebook, but we can do it one more time since we're on the topic — that maybe you can't, in any given moment at night, when you're tired and you've had a stressful day, stop yourself from the dopamine hit of scrolling on Instagram, even though you know it's bad for you and it's not leading to your best life. But if you were asked in a reflective moment, when you were fully alert and thoughtful — do you want to spend as much time as you do scrolling through Instagram? Does it make you happier or not? — you would actually be able to give the right long-term answer.
It's sort of a "the spirit is willing, but the flesh is weak" kind of moment. And one thing that I am hopeful about is that humans do, on the whole, know what we want, and presented with research, or sort of an objective view, about what makes us happy and what doesn't, we're pretty good. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. The AI, I think, can be an even higher brain. And as we teach it — you know, here is what we really do value, here is what we really do want — it will help us make better decisions than we are capable of, even in our best moments.

So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs that they have to tap into not what humans want — which is an ill-defined question — but what humans in reflective mode want.

Yeah, we talk about this a lot.

I mean, do you see a real chance that something like that could be incorporated as a sort of absolute golden rule and, if you like, spread around the community, so that it seeps into corporations and elsewhere? Because that would potentially be a game changer.

Corporations have this weird incentive problem, right? What I was trying to speak about was something that I think should be technologically possible, and something that we as a society should demand. And I think it is technically possible for this to be sort of like a layer above the neocortex that makes even better decisions for us — for our welfare and our long-term happiness and fulfillment — than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do a pincer move between what the technology is capable of and what we as a society demand, maybe we can meet everybody in the middle that way.

I mean, there are instances where, even though companies have their incentives to make money and so forth, they also, in the knowledge age, can't make money if they have pissed off too many of their employees and customers and investors. By analogy with the climate space right now: you can see more and more companies, even those emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people, because they don't want to work for someone who's evil, and their customers are saying, we don't want to buy something that is evil. And so, ultimately, you can picture processes where they do better. And I believe that most engineers, for example, working in Silicon Valley companies are actually good people who want to design great products for humanity. I think that the people who run these companies want to be a net contribution to humanity. It's that we've rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess. So it's like: OK, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?

Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally extremely good.
Very few people wake up in the morning thinking about how to make the world a worse place. But the incentive systems that we're in are so powerful. And even those engineers who join with the absolute best of intentions get sucked into this world where they're trying to go up from, like, an E4 to an E5, or whatever Facebook calls those things, and, you know, it's pretty exciting. You get caught up playing the game. You're rewarded for doing things that move the company's key metrics. It's fun to get promoted. It feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways, I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then align the incentives of individuals at those companies with those now-realigned company incentives, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective, best moments — and that are even better than what we could think of ourselves.

Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?

I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And I think if you are the first, you have a lot of norm-setting power. And I think you've already seen that. You know, we have released some of the most powerful systems to date, and I think the way that we have done that — a kind of controlled release, where we've released a bigger model, then a bigger one than that, then a bigger one; where we try to talk about the potential misuse cases; and where we try to talk about the importance of releasing this behind an API so that you can make changes — other groups have followed suit in some of those directions, and I think that's good. So, no, I don't think we can be the only one, but I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction — or maybe we're wrong, and somebody else has a better direction, and we learn from them.

Do you have a structural advantage in that your mission is to do this for everyone, as opposed to for some corporate objective, and that that allows you — I mean, why is it that this came out of OpenAI and not someone else? It's surprising, in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.

You know, in some sense it's surprising, and in some sense, like, the startup wins most of the time. I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans — research; engineering; and sort of safety and policy — that don't normally combine well, and I think we have an unusual strength there. We're clearly well funded, and we have super talented people.
But what we really have is intense focus, and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we work really hard, and if we stopped doing that, I'm sure someone would run by us fast.

Tell us a bit more about your prior life, Sam. For several years you were running Y Combinator, which has had incredible impact on so many companies. There are so many startup stories that began at Y Combinator. What were the key drivers in your own life that took you on the path you're on, and how did that path end up at Y Combinator?

No exaggeration, I think I have, back to back, had the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science. I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on a project, and the same year, this thing called Y Combinator started and funded me and my co-founders. And we dropped out of school and did this company, which I ran for like seven years, and then it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives — badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. After my company got acquired, PG — Paul Graham, the founder of YC, and truly one of the most incredible humans and business people — asked me if I wanted to run it. And kind of the central learning of my career, with YC and with individual startups, has been that if you really scale them up, remarkable things can happen. And I did it. And one of the things that made it exciting and personally motivating for me was that I could sort of push it in the direction of funding these hard-tech companies, one of which became OpenAI.

Describe what Y Combinator actually is — you know, how many people come through it — and give us a couple of stories of its impact.

Yeah. So you basically apply as a handful of people with an idea, maybe a prototype, and say: I would like to start a company, and will you please fund me? And we review those applications — I shouldn't say "we" anymore, I guess: they fund four hundred companies a year. You get about one hundred and fifty thousand dollars, while YC takes about seven percent ownership, and then it gives you lots of advice and networking and sort of this fast-track program for starting a startup. I haven't looked at this in a while, but at one point, a significant fraction of the billion-dollar-plus companies that got started in the US had come through the YC program. Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it has become an incredible way to help people who understand technology get a three-month course in business — but instead of, like, burdening you with an MBA, we actually teach you the things that matter — and then go on to do incredible, incredible work.

What is it about entrepreneurs — why do they matter? Some people just find them kind of annoying, but I think you would argue — I think I would argue — that they have done as much as anyone to shape the future. Why? What is it about them?

I think it is the ability to take
an idea and, by force of will, make it happen in the world, in an incentive system that rewards you for making the most impact on the most people. In our system, that's how we get most of the things that we use. That's how we got the computer that I'm using and the software I'm using to talk to you on it. Like all of this, you know, everything in life has a balance sheet. There are plenty of very annoying things about entrepreneurs, and there are plenty of very annoying things about the system that sort of idolizes them. But we do get something really important in return. And I think that as a force for making things happen that make all of our lives better, it's very cool. Otherwise, you know, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but there's got to be something about the reward function in society that asks, did you actually do something useful? Did you create value? And I think entrepreneurship and startups are a wonderful way to do that. You know, we get all these great software companies, but I also think it's how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. And on any of those topics, and a long list of other things I could point to, there are a number of startups that I think are doing incredible work, some of which will actually deliver. It is a truly amazing thing, when you pull the camera back, to believe that a human being could be lying awake at night, and something pops inside their mind, as a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way where the future could be better, and they can actually picture it. And then they wake up, and then they talk to other people and they persuade them, and they persuade investors and so forth. And the fact that this system can happen, and that they can then actually change history, in some sense it is mind-boggling that it happens that way, and it happens again and again. So you've seen so many of these stories happen. What would you say is the key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be? If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiating predictor. And if you would allow a second, I would pick communication skills, or evangelism, or something in that direction as well. There are all of the obvious ones that matter, like intelligence, but there are a lot of smart people in the world. And when I look back at the thousands of entrepreneurs I've worked with, many of whom were quite capable, I would say those are one and two of the surprisingly differentiated characteristics. When I look at the different things that you've built and you're working on, I mean, it could not be more foundational for the future. Entrepreneurship, I agree, is really what has driven the future. But some people now look at Silicon Valley and they look at this story, and they worry about the culture, right, that this is a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example? For sure. And in fact, I'm hopeful, since these are the two things I've thought the most about. I'm excited for the day when someone combines them and uses A.I. to select people more fairly, maybe even to select who to fund and how to advise them, and really make entrepreneurship super widely available. That will lead to better outcomes and sort of more societal wealth for all of us. So, yeah, I think broadening the set of people able to start companies and get the resources that you need, that is an unequivocally good thing, and it's something that I think Silicon Valley is making some progress on. But I hope we see a lot more. And I do really, truly think that technology industry entrepreneurship is one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things. My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be? We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. And we all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one-time shift, to go. I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision. Thanks so much for having me. OK, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky. You have to find a website that has licensed the API. The one I went to was PhilosopherAI.com, where you pay a few dollars to get access to a very strange mind. That's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubenstein and Sheila Boffano, Sambor Islamic Sir. Fact check is by Paul Durbin, and special thanks to Michelle Quint, Colin Helms, and Anna Phelan. If you like the show, please rate and review it. It helps other people find us. We read every review, so thanks so much for listening. See you next time.

Open Cloze

Hello there, this is Chris Anderson, and I am hugely, ______, ____________ excited to welcome you to a new series of the TED _________. Now, then, this season, we're trying something new. We're organising the whole ______ around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with extraordinarily ugly things in the last few years. Political division, a racial reckoning, technology run amuck, not to mention a ______ pandemic and impending climate catastrophe. What on earth are we thinking in this context? Optimism just seems so naive and unwanted, almost ________. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe I truly believe there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well _____ the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season. So let's see if they can persuade us now. Then the _____ I want to start is with A.I. artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today was painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam ______ is the former _________ of Y Combinator, the _________ startup accelerator. And in 2015, he and a team launched a company called Open Eye, dedicated to one _____ purpose to _______ A.I. so that it benefits humanity as a whole. You may have heard, by the way, recently a lot of buzz around in A.I. __________ called T3 that was developed by open eye improve the quality of the amazing team of researchers and developers they have work in. There will be hearing a lot about three in the ____________ ahead. But sticking to this lofty mission of developing A.I. for humanity and finding the resources to realize it haven't been ______. Open A.I. is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this. So, Sam Altman, welcome. Thank you for having me. So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future? I think that the ___________ of scientific and technological ________ and better societal decision ______, better societal __________ is going to solve in the next couple of decades all of our current most pressing problems, there will be new ones. But I think we are going to get very safe, very inexpensive, carbon free _______ energy to work. And I think we're going to talk about that time that the climate disaster looks so bad and how lucky we are. We got saved by science and technology, I think. And we've already now seen this with the rapidity that we were able to get vaccines deployed. We are going to find that we are able to cure or at least _____ a significant percentage of human _______, _________ I think we'll just actually make progress in helping people have much longer decades, ______ health _____. 
And I think in the next couple of decades, that will look pretty clear. I think we will build systems with AI and otherwise that make access to an incredibly high quality education more possible than ever before. I think the _____ we look forward like one hundred years, _____ years, even the quality of life available to anyone then will be much better than the quality of life available in the very best case to anyone today, to any single person today. So, yeah, I'm super __________. I think, like, it's always easy to do scroll and think about how bad are the bad things are, but the good things are really good and getting much better. Is it your sincere ______ that artificial intelligence can actually make that ______ better? Certainly. How look, with any technology. I don't think it will all be better. I think there are always positive and negative use cases of anything new, and it's our job to maximize the positive ones, minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now. Now that we have the first general purpose built out in the world and available via things like RPI, I think we are seeing evidence of just the breadth of services that we will be able to offer as the sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel like as strange as the _____ before ______ phones feels now to us. Hmm, yeah, you _________ your API, I guess that ______ for what, ___________ programming interface? It's the technology that allows complex technology to be __________ to others. So give me a sense of a ______ of things that have got you most _______ that are already out there and then how that gives you visibility to a _______ forward that is even more exciting. So I think that the things that we're seeing now are very much glimpse of the future. We released three, which is a general-purpose natural language text model in the summer of twenty twenty. You know, there's hundreds of applications that are now using it in __________ that's ramping up all of the time. But there are things where people use three to really understand the intent behind the search _____ and deliver _______ and sort of understand not only ______, but all of the data and _______ the thing of what you want. So you can sort of describe a fuzzy thing and it'll understand documents. It can understand, you know, _____ _________, not full books yet, but bring you back to the context of what you want. There's been a lot of excitement about using the generative capabilities to create sort of games or sort of interactive stories or letting ______ develop characters or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a ________ application for each individual company. There's the beginning of tutors that can sort of _____ people about different concepts and take on different personas. And we can go on for a long time. But I think anything that you can _______ that you do today via computer that you would like to really understand and get to know you. And not only that, but __________ all of the data and knowledge in the world and help you have the best experience that is is possible that that will ______. So what gets opened up? 
What new adjacent possible _____ is that as a ______ of these powers from this question, from the point of view of someone who's ________ out on a career, for example, they're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up in a world where you can talk to a computer? And get. The output that would normally require you hiring the world experts back immediately for almost no money, I would say think about what's possible there. So that could be like, as you said, what can normally only the best __________ in the world or a really great programmer do for me. And can I now instead just ask in English and have that program written? So all these people that, you know, want to develop an app and they have an idea, but they don't know how to _______. Now they can have it. You know, what is the service look like when anyone on _____ who wants really great medical advice? Can get better medical advice than any single doctor could ever get, because this has the total _______ knowledge and _________ ability that the some humanity has ever ________. When you want to _____ something, you have sort of a tutor that understands your exact style, how you best learn everything you know, and custom teaches you whatever concept you want to learn someday. You can imagine that like. You have an eye that reads your email and your task list and your calendar and the documents you've been sent and in any meeting maximally _________ prepares you and has all of the information that you need in all the context of your ______ career right there for you to go on for a long time. But I think this will just be powerful systems. So it's really fun playing around with Chapatti three, one compelling example of someone who's more tax base is try Googling The ________ essay that was written entirely by different GP2 three queries and stitched together. It's an essay on why artificial intelligence isn't a threat to humanity. And that's impressive. It's very __________. I actually tried inputting one of the three ______ uses. I asked the question what is interesting about some ollman? Oh no. Here's what it came back with. It was it was rather philosophical, actually. Came back with. I don't understand what you mean by interesting. It seems to me that this word has no unambiguous _______ in the _______ of human society beyond its _______ __________ as somewhat pleasing or entertaining. I do not believe there to be any such thing as INTERESTINGNESS except in the mind of a human or other sentient being that to my knowledge, this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been ________. There is no answer to be found. Well, so you can agree that somewhere between profound and gibberish is that almost well, with the state of play is I mean, that's where we are today. I think somewhere between profound and jibberish is the right way to think about the current capabilities of CGP three. I think they would definitely had a ______ of hype about three last summer. But the thing about bubbles is the reason that smart people fall for them is there's a kernel of something really real and really interesting that people get ___________ about. And I think people definitely got and still are overexcited about 3:00 today, but still probably underestimated the potential of where these models will go in the future. 
And so maybe there's this like short term overhyped and long term under hype for the entire field, for tax ______, for whatever you'd like. It's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were like well-formed sentences. And there were a couple of ideas and there that I was like, oh, like they actually maybe that's right. And I think if artificial intelligence, even in its current very ______ state, can make us ________ new things and sort of inspire new ideas, that's already pretty impressive. Give us a sense of what's actually happening in the background there. I think it's hard to understand because you read these _____ seem like someone is trying to mean something. Obviously, I think you believe that there's whatever you've built there, that there's a sort of thinking, sentient thing that's going, oh, I must ______ this question. So so what how would you describe what's going on? You've got something that has read the entire Internet, essentially all of _________, etc. We've read something that's read like a small fraction of a random ________ of the Internet. We will eventually train something that has read as much of the Internet or more of the Internet than we've done right now. But we have a very long way to go. I mean, we're still, I think, ________ to what we will have operated at quite small scale with quite small eyes. But what is happening is there is a model that is _________ lots of text and it is trying to predict the next word. So we use Transformer's they take in a context, which is a particular architecture of an A.I. _____, they take in a context of a lot of words, let's say like a thousand or something like that. And they try to predict the word that comes next in the sequence. And there's like a lot of other things that happen, but _____________ that's it, and I think this is interesting because in the _______ of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next and. I think it is maybe not perfectly ________, but certainly worth considering to say that intelligence is very near the ability to make accurate predictions. What's _________ about this is that there are so many words on the Internet which are foolish as well as the words that are wise. And and how do you build a model that can distinguish between those two? And this is prompted actually by another example that I typed in. Like I asked, you know, what is a powerful idea, very interested in ideas. That was my question as a powerful idea. And it came back with several things, some of which seemed moderately pronouncements, which seemed moderately gibberish. But then he was he was one that it came back with the idea that the human race has, quote, _______, unquote, is false evolution or adaptation within a species was abandoned by biology and genetics long ago. Wait a sec. That's news to me. What have you been _______? And I presume this has been pulled out of some recesses of the Internet, but how is it possible, even in ______, to imagine how a model can gravitate towards truth, wisdom, as opposed to just like majority views? Or how how how do you _____ something taking us further into the sort of the maze of errors and bad thinking and so forth that has already been a ________ feature for the last few years ? It's a _________ question, and I think it is the most interesting area of research that we need to pursue. 
Now, I think at this point, the questions of whether we can build really powerful general-purpose AI system, I won't say there in the rearview mirror. We still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are like, what should we build? And how and why and what data should we train on and how do we build systems not just that can do these like phenomenally impressive things, but that we can ensure do the things that we want and that understand the concepts of _____ and falsehood and, you know, alignment with human values and misalignment with human values. One of the ______ of research that we put out last year that I was most _____ of and most excited about is what we call reinforcement ________ from human feedback. And we showed that we can take these _____ models that are _______ on a bunch of stuff, some of it good, some of the bad, and then with a really quite _____ ______ of ________ from human judgment about, hey, this is good, this is bad, this is _____, this is the behavior I want I don't want this behavior. We can feed that information from the human ______ back into the model and we can teach the model, behave more like this and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too, like I think curating data sets where there's just less sort of bad data to _____ on. It will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're ______, when they don't understand. But I think as a result of simply scaling these models up, building better, I hate to use the word cognition because it ______ so anthropomorphic, but let's say building a better _______ to reason into the models, to think, to _________, to try to understand and combining that with this idea of online into human ______ via this technique we _________, that's going to go a very long way. Now, there's another question, which you sort of just ______ the ball down the field, too, which is how do we as a society decide to which set of human values do we align these powerful systems? Yeah, indeed. So if I if I understand _______ what you're saying, that you're saying that it's possible to look at the ______ at any one time of three. And if we don't like what it's coming up with, some ways human can say, no, that was off, don't do that. Whatever _________ or process led you to that, undo it. Yeah. And that the system is that incredibly powerful at avoiding that same kind of mistake in future because it sort of __________ the instructions , correct? Yeah. And eventually and not much longer, I believe that we'll be able to not only say that was good, that was bad, but say that was bad for this reason. And also tell me how you got to that answer so I can make sure I understand. But at the end of the day, someone needs to decide who is the wise _____ or short humans who are looking at the results. So it's a big __________. Someone who who grew up with intelligent design world view could look at that and go, that's a brilliant outcome. Well, Goldstar done. And someone else would say something is done awfully wrong here. 
So how do you avoid and this is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now in _____ of the pushback they're getting on the output of ______ media and so forth. How do you assemble that pool of experts who stand for human values that we actually want? I mean, we talk about this all the time, I don't think this is like solely or even not even close to majorly up to opening night to decide, I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we sort of make these very _________ global governance systems. My ________ belief is that we should have pretty broad _____ about what these systems will never do and will always do. But then the individual user should get a system that kind of behaves like they want. And there will be people do have very different value systems. Some of them are just fundamentally incompatible. No one gets to use eye to, like, exploit other people, for example, and hopefully we can all agree on. But do you want the AI to like. You know, support you and your belief of ___________ design, like, do I think openly, I should say it can't, even though I disagree with that is like a scientific __________. No, I wouldn't take that ______. I think the thing to remember about all of this is that history is still quite extraordinarily weak. It's still has such big problems and it's still so unreliable that for most use cases it's still unsuitable. But when we think about a system that is like a thousand _____ more powerful and let's say a _______ times more ________, it just doesn't it doesn't say gibberish very often. It doesn't totally lose the plot and get distracted or system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people sort of saying you can never use it for this thing that, like most of the world wants to use it for because it doesn't match our personal beliefs. Talk a bit more about some of the other uses of it, because one of the things that's most surprising is it's not just about sort of text _________. It's it can take generalized human instructions and build things up. For example, you can say to it, write a Python program that is designed to put a flashing cursor in one corner of the screen, in the Google logo in the other corner. And and it can go your way and do something like that. Shockingly, quite well, effectively. Yeah, I it can. That's amazing. I mean, this is amazing to me. That opens the door to. An entirely way to think about programers for the future, that you could you could have people who can program just in human natural language potentially and gain rapid efficiency. I do the engineering. We're not that far away from that world. We're not that far away from the world where you will write a spec in English. And for a simple enough program, I will just write the code for you. As you said, you can see glimpses of that even in this very week three which was not trained to code like. I think this is important to remember. We trained it on the language on the ________ very rarely, you know, Internet let ________ on the Internet also includes some code snippets. 
And that was enough, so if we really try to go train a model on code itself and that's where we decide to put the horsepower of the model into, just imagine what will be possible will be quite impressive. But I think what you're pointing to there is that because models like three to some ______ or other, and it's like very hard to know exactly how much understand the __________ ________ of what's going on. And they're not just regurgitating things they found in a website, but they can really apply them and say, oh, yeah, I kind of like know about this word and this idea and code. And this is probably what you're trying to do. And I won't get it right always. But sometimes I will just generate this like a brand new program for nothing that anyone has ever _____ before. And it will work. That's pretty cool. And data is data. So it can do that from English to code. It can do that from English to ______. Again, we never told it to learn about translation. We never told it about the concepts of _______ and French, but it learned them, even though we never said this is what English is and this is what French is and this is what it means to translate, it can still do it. Wow, I mean, for creative people, is there a world coming where the sort of the palette of possibility that they can be exposed to is just explodes? I mean, if you're a musician, is there a near future where you can say to your eye, OK, I'm going to bed now, but in the morning I'd love you to _______ me with a ________ tuba jingles with words attached that you have of a sort of mean factor to the and you come down in the morning and the computer shows you the stuff. And one of them, you go, wow, that is it. That is a top 10 hit and you build a song from it. Or is that going to be released? Actually be the value add. We released something last year called Jukebox, which is very near what you described, where you can say I want music generated for me in this style or this kind of _____, and it can come up with the words as well. And it's like pretty cool. And I really enjoy listening to music that it creates. And I can sort of do four _____, two bars of a jingle, whatever you'd like. And one of my very favorite artists reached out, called to open it after we release this and said that he wanted to talk. And I was like, well, I like total fanboy here. I'd love to join that call. And I was so nervous that he was going to say, this is terrible. This is like a really sad thing for human creativity. Like, you know, why are you doing this? This is like whatever. And he was so excited. And he's like, this has been so inspiring. I want to do a new album with this. You know, it's like, give me all these new ideas. It's making me much better at my job. I'm going to make better _____ because of this tool. And that was awesome. And I hope that's how it all _________ to go. And I think it is going to lead to this. We see a similar thing now with Dolly, where _______ _________ sometimes tell us that they just they see this new set of possibilities because there's new creative inspiration and they're cycle time, like the amount of time it _____ to just come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction goes down so much. And so I think it's going to just be this like __________ creative explosion for humans. And how far away are we some before? And I it comes up with a genuinely powerful new idea, an idea that ______ the problem that humans have been wrestling with. 
It doesn't have to be as quite on the _____ as of, OK, we've got a _____ coming. Please describe to us what a what a ________ rational ________ should look like, but some kind of genuinely __________ idea or solution like one one internal question we've asked ourselves is, when will the first genuinely interesting, ______ AI written TED talk show up? I think that's a great milestone. I will say it's always hard to guess timeline's I'm sure I'll be wrong on this, but I would _____ the first genuinely interesting. Ted talk, _______ of written delivered by an AIDS within the kind of the seven ish year time frame. Maybe a little bit less. And it feels like I mean, just reading that Guardian essay that was kind of it was a composite of several different GPG three responses to questions about, you know, the _______ of robotics or whatever. If you throw in a human editor into the mix, you could probably imagine something much ______. Indeed. Like tomorrow. Yeah. So the hybrid the hybrid version where it's basically a tool assisted TED talk, but that it is better than any TED talk a human could ________ in one hundred hours or whatever, if you can sort of combine human discretion with A.I. horsepower. I suspect that's like our next year or two years from now kind of thing where it's just really quite good. That's that's really interesting. How do you view the impact of A.I. on jobs? There's obviously been the familiar story is that every White-Collar job is now up for destruction. What's what's your view there? You know, it's I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was every blue collar job is up for destruction, maybe like last year it was. Every creative job is up for destruction because of things like Jukebox I. I think there will be an enormous ______ on. The job ______, and I really hate it, I think it's kind of gross when people like working on I pretend like there's not going to be or sort of say, oh, don't worry about it. It'll just all obviously better. It doesn't always obviously get better. I think what is true is. Every technological revolution produces a change in jobs, we always find new ones, at least so far. It's difficult to predict from where we're sitting now what the new ones will be and this technological revolution is likely to be. Again, it's always tempting to say this time it's different. Maybe I'll be totally wrong. But from what I see now, this technological revolution is likely to be more. Dramatic. More of a staccato note than most, and I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own _____ about how to do that. I, I wouldn't say that I have any ______ to believe they're the right ones, but doing nothing and not really engaging with the magnitude of what's about to happen, I think it's like not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most. I think previous ___________ have mostly been wrong, but I I'd like to see us all as a society, certainly as a _____, engage in what what the ______ we want to make to the social contract are to kind of get through that in a way that is maximally __________ to everybody. I mean, in every past revolution, there's always been a space for humans to move to. That is, if you like, moving up the food _____, it's sort of we've _________ to the things that humans could uniquely do, think better, be more ________ and so forth. 
I guess the worry about A.I. is that in principle, I believe this, that there is no human _________ feat that won't ultimately be doable, probably better by artificial general touch, ______ because of the extra firepower that ultimately they can have, the vast knowledge they bring to the _____ and so forth. Is that basically right, that there is ultimately no safe sort of space where we can say, oh, but that would never be able to do that on a very long time horizon? I agree with you, but that's such a long time horizon. I think that, you know, like maybe we've merged by that point, like maybe we're all plugged in and then, like, we're this sort of symbiotic thing. Like, I think there's an interesting example, as we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower but no steering _____. It's like, you know , incredible capabilities, but no ________. And there's like these _______ ways in which _____ even a human plus three is far better than either on their own. Many people speak about a world where it's sort of A.I. as this ________ threat you speak about. At some point, we actually merge with eyes in some way. What do you mean by that? There's a lot of different versions of what I think is possible there, you know, in some sense, I'd argue the merge has already like begun the human technology merge like we have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers and that can go much, much further. Maybe it goes all the way to like the Elon Musk vision of neuro link and having our brains plugged into _________ and sort of like literally we have a computer on the back of our head or goes the other direction and we get uploaded into one. Or maybe it's just that we all have a chat bot that kind of constantly steers us and helps us make better decisions than we could. But in any case, I think the ___________ thing is it's not like the humans versus the eyes competing to be the. Smartest sentient thing on earth or beyond. But it's that this idea of being on the same team. Hmm. I certainly get very excited by the sort of the medium term _________ for creative people of all sorts if they're willing to ______ their _______ of possibilities. But with the use of A.I. to be willing to. I mean, the one thing that the history of technology has shown again and again is that something this powerful and with this much benefit is unstoppable and you will get rewarded for _________ it the most and the earliest. So talk about what can go wrong with that, so let's move away from just the sort of economic ____________ factor. You were a co-founder of Open Eye because you saw ___________ risks to humanity from high today. What would you put as the sort of the most worrying of those risks? And how is open eye working to minimize? I still think all of the really __________ _____ _____. I am more _________, much more confident than I was five years ago when we started that there are technical things we can do about. 
How we build these systems and the research and the alignment that make us much more likely to end up in the kind of really wonderful camp, but, you know, like maybe open I fall behind and maybe somebody else _____ ajai that thinks about it in a very different way or doesn't care as much as we'd like about safety and the risks or how to ______ a different trade off of how fast we should go with this and where we should sort of just say, like, you know, like let's push on for the economic benefits. But I think all of this sort of like, you know, _____________ what's been in the realm of sci fi risks are real and we should not ignore them. And I still lose sleep over them. And just to ______ people is artificial general intelligence. Right now, we have incredible examples of powerful AI operating on specific _____. Ajai is the ability of a computer mind to connect the dots and to make decisions at the same level of _______ that that humans have had. What's your sort of elevator pitch on Ajai about how to identify and how to think of it? Yeah, I mean, the way that I would say it is that for a while we were in this world of like very narrow A.I. , you know, that could like classify images of cats or whatever, more ________ stuff in that. But that kind of thing. We are now in the era of general purpose, AI, where you have these systems that are still very much _________ tools, but that can generalize. And one thing like GPP three can write ______ and translate between _________ and write computer code and do very complicated ______. It's like a ______ model that ___________ enough of what's really going on to do a broad _____ of _____ and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm. Some people call it ajai, some people call ostler things. But I think it implies that the systems are like to some degree self ________, have some intentionality of their own is a simple summary to say that, like the fundamental risk is that there's the potential with general artificial intelligence of a sort of runaway effect of self-improvement that can happen far faster than any kind of humans can even keep up with, so that the day after you get to ajai, suddenly computers are thousands of times more advanced than us and we have no way of controlling what they do with that power. Yeah, and that is certainly in the risk _____, which is that we build this thing and at some _____ somewhat suddenly, it's much more powerful than we are, we haven't really done the full merge yet. There's an event horizon there and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go OK. Lots of reasons to think we won't even get to that scenario. But that is something that. I don't think people should brush under the rug as much as they do, it's in the possibility space for sure, and in the ___________ subspace of that is one where, like, we didn't actually do as good of a job on the alignment work as we thought. And this sort of child of humanity kind of acts in a very different way than we think. A framework that I find useful is to sort of think about like a two by two matrix, which is short timelines to ajai and long timelines to ajai and a slow take off and a fast take off on the other axis. And in the short _________, fast take off quadrant, which is not where I think we're going to be. But if we get there, I think there's a lot of _________ in the direction that you are describing that are _________. 
And we would want to spend a lot of effort planning for. I mean, the fact that a computer could start editing its own code and improving itself while we're asleep and you wake up in the morning and it's got smarter, that is the start of something super powerful and potentially scary. I have tremendous misgivings about _______ my ______, not one we have today, but one that we might not have and too many more years start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a _____ deal of societal discussion about, you know, just because we can do that. Should we? Yes, because one of the things that's that's been most shocking to you about the last few _____ has been just the power of unintended consequences. It's like you don't have to have a belief that there's some sort of waking up of of an alien ____________ that suddenly decided it wants to wreak _____ on humans. That may never happen. What you can have is just incredible _____ that goes amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies _______ these _____________ that were programmed to maximally harvest attention, for example, for sure. And they understand this from that turned out to be in some ways horrifying and _______________ damaging. Is that a meaningful sort of canary in the coal mine saying, look out, humanity, this could be really dangerous? And how how on earth do you protect against those kinds of unintended consequences? I think you raise a great point in general, which is these systems don't have to wish ill to humanity to cause ill just when you have, like, very powerful systems. I mean, unintended consequences for sure. But another version of that is and I think this applies at the _________ level, at the company level, at the ________ _____, incentives are superpower's. Charlie Munger had this thing on, which is incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. And I think that applies to the individual models we build and what their reward functions look like. I think it applies to society in a big way, and I think it applies to our corporate structure at open. I you know, we sort of observe that if you have very well-meaning people, but they have this incentive to sort of maximize attention harvesting and profit forever through no one's ill intentions, that _____ to a quite undesirable _______. And so we set up opening is this thing called a ______ ______ model specifically so that we don't have the system incentive to just generate maximum value forever with an AGI that seems like obviously quite broken. But even though we knew that was bad and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us and a set of __________ that we believe will let us do our work. And kind of these we have these like three elements that we talk about a lot research sort of ___________, development and deployment policy and safety. Put those all together under a system where you don't have to rely on. Anything but the natural incentives to push in a direction that we hope will minimize the sort of negative unintended ____________. So help me understand this, because this is I think this is confusing to some people. 
So you started _______. I initially I think Elon Musk, the co-founder, and there was a group of you and the argument was this technology is too powerful to be left, developed in secret and to be left developed purely by corporations who have whatever incentive they may have. We need a nonprofit that will develop and share knowledge openly. First of all, just even at that _____ _____, some people were confused about this. It was saying if this thing is so dangerous, why on earth would you want to make it _______ even more available? Well, maybe ______ the _____ to that sort of AI terrorist in his bedroom somewhere, I think I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build this a super weapon and hand it to a _________. That's obviously _____. One of the reasons that we like our API model is it lets us make the most ________ AI technology anyone in the world has, as far as we know, available to ever would like to use it, but to put some controls on its _____. And also, if we make a mistake, to be able to pull it back or change it or tweak it or improve it or whatever. But we do want to put and this is continued will continue to be true with appropriate restrictions and guardrails, very powerful technology in the _____ of people. I think that is fair. I think that will lead to the best results for the society as a whole. And I think it will sort of maximize benefit. But that's very different than sort of shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think and this is part of the _______ that like something the field was doing a lot of that we didn't feel good about was sort of saying like, oh, we're going to keep the pace of progress and ____________ secret. That doesn't feel right, because I think we do need a societal conversation about what's what's going on here, what the impacts are going to be. And so we although we don't always say, like, you know, here's the super weapon, hopefully we do try to say, like, this is really serious. This is a big deal. This is going to affect all of us. We need to have a big conversation about what to do with it. Help me understand the structure a bit better, because you definitely surprised much people when you announced that Microsoft were putting a billion dollars into the organization and in return, I guess they get certain exclusive licensing rights. And so, for example, they are the exclusive ________ of CP3. So talk about that structure of how you win. _________ presumably have invested not purely for __________ purposes. They think that they will make money on that _______ dollars. I sure hope they do. I love capitalism, but I think that I really loved even more about Microsoft as a partner. And I'll talk about the structure and the exclusive license in a minute is that we like went around to people that might find us. And we said one of the things here is that we're going to try to make you some money. But like Adjei going well is more _________. And we need you to sign this ________ that says if things don't go the way we think and we can't make you money like you just cheerfully walk away from it and we do the right thing for humanity. And they were like, yes, we are ____________ about that. We get that the mission comes first here. So again, I hope a phenomenal investment for them. 
But they were like they really __________ surprised us on the upside of how _______ they were with us, about how _______ the world may get here and the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't and don't think they will. So the way it's set up is that if at some point in the ______ year or two, two years, Microsoft decide that there's some incredible commercial opportunity that they could realize out of the eye that you've built and you feel actually, no, that's that's damaging. You can _____ it. You can veto it. Correct. So the four most powerful version of three and its successors are available via the API, and we ______ for that to continue. What Microsoft has is the ability to sort of put that model directly into their own technology. If they want to do that. We don't plan to do that with other people because we can't have all these controls that we talked about earlier. But they're like a close trusted _______ and they really care about ______, too. But our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained. And the structure of the API lets us ________ to increase the safety and fix problems when we find them. But but the structure. So we start out as a non-profit, as you said, we realized pretty quickly that although we went into this thinking that the way to get to ajai would be about smarter and smarter algorithms, that we just needed bigger and bigger computers as well. And that was going to require a scale of capital that no one will, at least certainly not me, could figure out how to raise is a nonprofit. We also needed to sort of be able to compensate very ______ compensated, ________ individuals that do this, but are full for profit company had _______ incentives problem, among other things. Also just one about sort of fairness in society and ______ concentration that didn't feel right to us either. And so we came up with this kind of hybrid where we have a _________ that governs what we do, and it has a subsidiary, LLC, that we structure in a way to make a fixed amount of profit so that all of our _________ and employees, hopefully if things go how we like, if not no one gets any money, but hopefully they get to make this one time great return on their investment or the time that they spent it open their equity here. And then beyond that, all the value flows back to the nonprofit and we figure out how to share it as fairly as we can with the world. And I think that this structure and this nonprofit with this very ______ charter in place and everybody who joins _______ up for the mission come in first and the fact the world may get strange, I think that. That was at least the best idea we could come up with, and I think it feels so far like the incentive system is working, just as I sort of watch the way that we and our ________ make decisions. But if I read it right, the cap on the gain that investors can make is 100 Axum. It's a massive call that was for our very first round investors. It's way, way lower. Like as we now take a bit of capital, it's way, way lower. So your deal with Microsoft isn't you can only make the first hundred billion _______. I don't know. It's way lower than after that. We're giving it to the world. It's way lower than that. Have you disclosed what I don't know if we have, so I won't accidentally do it now. All right. OK, so explain a bit more about the charter and how it is that you. 
Hope to avoid or I guess help contribute to an eye that is safe for humanity. What do you see as the keys to us avoiding the worst ________ and really holding on to something that's that's beneficial for ________? My answer there is actually more about, like technical and societal issues than the charter. So if it's OK for me to answer it from that perspective, sure. OK, I'm happy to talk about the charter to. I think this question of _________ that we talked about a little earlier is paramount, and then I think to understand that it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. So like intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to like hack into all the computers in the world and wreak havoc on the power grids. And accidental would be kind of the Nick Bostrom make a lot of _____ clips and view humans as collateral damage in both cases. But to varying degrees, if we can really, truly, technically _____ the alignment problem and the societal problem of deciding to which set of human values do we _____, then the systems understand right and wrong, and they understand probably better than we ever can, __________ consequences from complex actions and very complex _______. And, you know, if we can train a system which is like. Don't harm humanity and the system can really understand what we mean when we say that, again, who is we and what does that have some asterisks on them? Sorry, go ahead. Well, that's if they could understand what it means to not harm humanity, that there's a lot wrapped up in that sentence. Because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the sort of Facebook and Twitter examples of, well, the engineers building some of the systems would say we've just designed them around what humans want to do. You said, well, if someone wants to click on something, we will give them more of that thing. And what could possibly be wrong with that? We're just supporting human choice, ignoring the fact that humans are complicated, farshid animals for sure, who are __________ making choices, that a more effective version of themselves would agree is not in their long term interests. So that's one part of it. And then you've got _______ on top of that or the complications of systemic complexity where, you know, ________ choices by thousands of people end up creating a _______ that possibly have designed for how how to cut through that. Like an AI has to make a decision _____ on a moment, on a ________ data set. As those _________ get more powerful, how can we be confident that they don't lead to this sort of system crashing basically in some way? I think that I've heard a lot of behavioral psychologists and other people that have studied this say in different ways, are that I hate to keep _______ on Facebook, but we can do it one more time since we're on the topic. Maybe you can't in any given moment in _____ where you're tired and you have a stressful day, stop yourself from the dopamine hit of scrolling and Instagram, even though you know that's bad for you and it's not leading to your best life. But if you were asked in a reflective moment where you were sort of _____ _____ and thoughtful, do you want to _____ as much time as you do scrolling through Instagram? Does it make you _______ or not? You would actually be able to give like the right long term answer? 
It's sort of the spirit is willing, but the flesh is weak kind of moment. And one thing that I am hopeful is that humans do know what we want and what. On the whole, and presented with ________ or sort of an objective view about what makes us happy and doesn't we're ______, what's so great about it, they're pretty good. But in any particular moment, we are subjected to our animal instincts and it is easy for the lower brain to take over the eye. Well , I think be an even higher brain. And as we can teach it, you know, here is what we really do value. Here's what we really do want. It will help us make better decisions than we are capable of, even in our best moments. So is that being proposed and ______ about as an ______ rule? Because it _______ me that there is something potentially super profound here to _________ some kind of rule for development of AIDS that they have to tap into not. What humans one, which is an ill defined question, but as to what humans in __________ mode want. Yeah, we talk about this a lot. I mean, do you see a real chance where something like that could be ____________ as a sort of an absolute golden rule and and if you like, spread around the community so that it seeps into ____________ and elsewhere? Because that I've seen no ________ that, well, a little corporation that was potentially a game _______. Corporations have this weird incentive problem. Right. What I was trying to _____ about was something that I think should be technologically possible , and that's something that we as a society should demand. And I think it is technically possible for this to be sort of like a layer above the neocortex that makes even better decisions for us and our welfare and our long term happiness and fulfillment than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do like a pincer move between what the technology is capable of and what we what we as society ______, maybe we can make everybody in the middle that way. I mean, there are instances of even though companies have their incentives to make money and so forth, they also in the knowledge age. Can't make money if they have ______ off too many of their employees and customers and investors by analogy of the climate space right now, you can see more and more companies, even those that are emitting huge _______ of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people because they don't want to work for someone who's evil. And their customers are saying, we don't want to buy something that is evil. And so, you know, ultimately you can picture processes where they do better. And I I believe that most engineers, for example, work in Silicon ______. Companies are actually good people who want to design great products for humanity. I think that the people who run these companies want to be a net contribution to humanity. It's we've we've rushed really quickly and design stuff without thinking it through properly. And it's led to a mess up. So it's like, OK, don't move fast, break things, slow down and build beautiful things that are _____ on a real _______ of human ______ and on a real version of system complexity and the risks associated with systemic complexity. Is that the ______ that fundamentally you think that you can push somehow? Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally _________ good. 
Very few people wake up in the morning thinking about how can I make the world a worse place? But the incentive systems that we're in are so powerful. And even those engineers who join with the absolute best of intentions get sucked into this world where they're, like, trying to go up from E4 to E5, or whatever Facebook calls those things, and, you know, it's pretty exciting. You get caught up playing the game. You're rewarded for kind of doing things that move the company's key metrics. It's, like, fun to get promoted. It feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe, like, not what we all want. And here I don't want to pick on Facebook at all, because I think there's versions of this at play at, like, every big tech company, including, in some ways, I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then the incentives of individuals at those companies with the now-realigned incentives of those companies, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective best moments, and are even better than what we could think of ourselves. Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails? I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And I think if you are the first, you have a lot of norm-setting power, and I think you've already seen that. You know, we have released some of the most powerful systems to date. And I think the way that we have done that, kind of in controlled release, where we've released a model, then a bigger one, then a bigger one, and we sort of try and talk about the potential misuse cases, and we try to, like, talk about the importance of releasing this behind an API so that you can make changes... other groups have followed suit in some of those directions, and I think that's good. So, yes, I don't think we can be the only; I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong, and somebody else has a better direction than what we're doing. Do you have a structural advantage, in that your mission is to do this for everyone, as opposed to for some corporate objective, and that allows you that? Why is it that this came out of OpenAI and not someone else? It's, like, surprising in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them. You know, in some sense it's surprising, and in some sense, like, the startup wins most of the time. Like, I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans, research, engineering, and sort of safety and policy, that don't normally combine well, and I think we have an unusual strength there. We're clearly, like, well funded. We have super talented people.
But what we really have is, like, intense focus and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we, like, work really hard, and if we stopped doing that, I'm sure someone would run by us fast. Tell us a bit more about some of your prior life, Sam. For several years, you were running Y Combinator, which had incredible impact on so many companies. There are so many startup stories that began at Y Combinator. What were key drivers in your own life that took you on the path you're on? And how did that path end up at Y Combinator? No exaggeration, I think I have, back to back, had the two jobs that are, at least to me, the most interesting in all of Silicon Valley. I went to college to study computer science. I was a major computer nerd growing up. I knew, like, a little bit about startups, but not very much. I started working on this project, and that same year, this thing called Y Combinator started and funded me and my co-founders. And we dropped out of school and did this company, which I ran for, like, seven years, and then it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives, and just badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. My company had been acquired, and PG, Paul Graham, who is the founder of YC, and, like, truly one of the most incredible humans and talented people, asked me if I wanted to run it. And kind of, like, the central learning of my career, at YC and with individual startups, has been that if you really scale them up, remarkable things can happen. And I did it, and I was like, one of the things that would make this exciting for me, personally motivating, would be if I could sort of push it in the direction of doing these hard-tech companies, one of which became OpenAI. Describe actually what Y Combinator is, you know, how many people come through it. Give us a couple of stories of its impact. Yeah. So you basically apply as a handful of people and an idea, maybe a prototype, and say, I would like to start a company, and will you please fund me? And we review those applications, and we... I shouldn't say "we" anymore. I guess they fund four hundred companies a year. You get about one hundred and fifty thousand dollars, while YC takes about seven percent ownership, and then gives you lots of advice, and then networking, and sort of this, like, fast-track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of the billion-dollar-plus companies in the US that got started at all came through the YC program. Some recently-in-the-news ones have been, like, Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it's just... it has become an incredible way to help people who understand technology get a three-month course in business. But instead of, like, burdening you with an MBA, we actually teach you the things that matter, and people kind of go on to do incredible, incredible work. What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying. But I think you would argue, I think I would argue, that they have done as much as anyone to shape the future. Why? What is it about them? I think it is the ability to take
an idea and, by force of will, to make it happen in the world, in an incentive system that rewards you for making the most impact on the most people. Like, in our system, that's how we get most of the things that we use. That's how we got the computer that I'm using, the software I'm using to talk to you on it. Like, all of this. You know, everyone in life, everything, has a balance sheet. There's plenty of very annoying things about them, and there's plenty of very annoying things about the system that sort of idolizes them. But we do get something really important in return. And I think that, as a force for making the things happen that make all of our lives better, it's very cool. Otherwise, you know, like, if you have, like, a great idea, but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but, like, there's got to be something about the reward function in society that is, like: did you actually do something useful? Did you create value? And I think entrepreneurship and startups are a wonderful way to do that. You know, we get all these great software companies. But I also think it's, like, how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. And, like, on any of those topics, or a long list of other things I could point to, there's, like, a number of startups that I think are doing incredible work, some of which will actually deliver. It is a truly amazing thing, when you pull the camera back, to believe that a human being could be lying awake at night, and something pops inside their mind, as a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way where the future could be better, and they can actually picture it. And then they wake up, and then they talk to other people, and they persuade them, and they persuade investors and so forth. And the fact that this system can happen, and that they can then actually change history... it is mind-boggling that it happens that way, and it happens again and again. So you've seen so many of these stories happen. What would you say, is there a key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be? If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiator. And if you would allow a second, I would pick, like, communication skills, or evangelism, or something in that direction as well. There are all of the obvious ones that matter, like intelligence, but there's, like, a lot of smart people in the world. And when I look back at kind of the thousands of entrepreneurs I've worked with, many of whom were, like, quite capable, I would say those are, like, one and two of the surprisingly differentiated characteristics. When I look at the different things that you've built and you're working on, I mean, it could not be more foundational for the future. I mean, entrepreneurship... I agree that this is really what has driven the future. Some people now look at Silicon Valley, and they look at this story, and they worry about the culture. Right? That this is a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs, and who can contribute to AI, for example? For sure. And in fact, I think, I'm hopeful, since these are the two things I've thought the most about, I'm excited for the day when someone combines them and uses A.I. to better select, maybe even more fairly select, who to fund and how to advise them, and really kind of make entrepreneurship super widely available. That will lead to, like, better outcomes and sort of more societal wealth for all of us. So, yeah, I think broadening the set of people able to start companies, and to sort of get the resources that you need, that is, like, an unequivocally good thing, and it's something that I think Silicon Valley is making some progress in. But I hope we see a lot more. And I do really, truly think that the technology industry, entrepreneurship, is one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things. My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be? We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen. Because it is going to affect everything. And we will all, I think, have an obligation, but also an opportunity, to figure out what that means, and how we want the world, and this sort of one-time shift, to go. I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision. Thanks so much for having me. OK, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky. You have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind. That's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubenstein and Sheila Orfano. Fact check is by Paul Durbin, and special thanks to Michele Quint, Colin Helms and Anna Phelan. If you like the show, please rate and review it. It helps other people find us. We read every review, so thanks so much for listening. See you next time.

Original Text

Hello there, this is Chris Anderson, and I am hugely, hugely, tremendously excited to welcome you to a new series of the TED Interview. Now, then, this season, we're trying something new. We're organising the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with extraordinarily ugly things in the last few years. Political division, a racial reckoning, technology run amok, not to mention a global pandemic and impending climate catastrophe. What on earth are we thinking in this context? Optimism just seems so naive and unwanted, almost annoying. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe, I truly believe, there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well light the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season. So let's see if they can persuade us. Now, then, the place I want to start is with A.I., artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today it's painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator. And in 2015, he and a team launched a company called OpenAI, dedicated to one noble purpose: to develop A.I. so that it benefits humanity as a whole. You may have heard, by the way, recently a lot of buzz around an A.I. technology called GPT-3 that was developed by OpenAI. It proves the quality of the amazing team of researchers and developers they have working there. We'll be hearing a lot about GPT-3 in the conversation ahead. But sticking to this lofty mission of developing A.I. for humanity, and finding the resources to realize it, haven't been simple. OpenAI is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this. So, Sam Altman, welcome. Thank you for having me. So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future? I think that the combination of scientific and technological progress and better societal decision making, better societal governance, is going to solve in the next couple of decades all of our current most pressing problems. There will be new ones. But I think we are going to get very safe, very inexpensive, carbon-free nuclear energy to work. And I think we're going to talk about that time when the climate disaster looked so bad, and how lucky we are that we got saved by science and technology. And we've already now seen this with the rapidity with which we were able to get vaccines deployed. We are going to find that we are able to cure, or at least treat, a significant percentage of human disease, including, I think, that we'll just actually make progress in helping people have much longer, decades longer, health spans.
And I think in the next couple of decades, that will look pretty clear. I think we will build systems, with AI and otherwise, that make access to an incredibly high-quality education more possible than ever before. I think if we look forward, like, one hundred years, fifty years even, the quality of life available to anyone then will be much better than the quality of life available, in the very best case, to anyone today, to any single person today. So, yeah, I'm super optimistic. I think, like, it's always easy to doomscroll and think about how bad the bad things are, but the good things are really good and getting much better. Is it your sincere belief that artificial intelligence can actually make that future better? Certainly. Look, with any technology, I don't think it will all be better. I think there are always positive and negative use cases of anything new, and it's our job to maximize the positive ones, minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now, now that we have the first general-purpose AI built out in the world and available via things like our API. I think we are seeing evidence of just the breadth of services that we will be able to offer as this sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels now to us. Hmm, yeah, you mentioned your API. I guess that stands for, what, application programming interface? It's the technology that allows complex technology to be accessible to others. So give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting. So I think that the things that we're seeing now are very much a glimpse of the future. We released GPT-3, which is a general-purpose natural language text model, in the summer of twenty twenty. You know, there's hundreds of applications that are now using it in production, and that's ramping up all of the time. There are things where people use GPT-3 to really understand the intent behind a search query and deliver results, and sort of understand not only intent, but all of the data, and deliver the thing that you want. So you can sort of describe a fuzzy thing and it'll understand. Documents: it can understand, you know, short documents, not full books yet, but bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create sort of games, or sort of interactive stories, or letting people develop characters or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a tailored application for each individual company. There's the beginning of tutors that can sort of teach people about different concepts and take on different personas. And we can go on for a long time. But I think anything that you can imagine that you do today via computer, that you would like to really understand you and get to know you, and not only that, but understand all of the data and knowledge in the world, and help you have the best experience: that is possible, that will happen. So what gets opened up?
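To make the API discussion above concrete: here is a minimal sketch of what calling a text-completion API of this kind looked like from Python around the time of this conversation. It assumes the openai client library of that era; the engine name, prompt, and parameter values are illustrative assumptions, not details taken from the interview.

    # Minimal text-completion call: send a prompt, get back a completion.
    # Assumes the 2020-era openai client ("pip install openai") and a key.
    import openai

    openai.api_key = "sk-..."  # placeholder; supply your own key

    response = openai.Completion.create(
        engine="davinci",   # illustrative engine name
        prompt="Write a one-line tagline for a nonprofit AI lab:",
        max_tokens=32,      # cap the length of the completion
        temperature=0.7,    # nonzero values add sampling randomness
    )

    # The response contains one or more candidate completions.
    print(response.choices[0].text.strip())

The design point made in the conversation is that this endpoint, rather than the model itself, is what ships: usage flows through a layer the provider can monitor, restrict, or roll back.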
What new adjacent possible space opens up as a result of these powers? Take this question from the point of view of someone who's starting out on a career, for example. They're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up, in a world where you can talk to a computer and get the output that would normally require you hiring the world's experts? Back immediately, for almost no money... I would say, think about what's possible there. So that could be, like, as you said: what can normally only the best programmer in the world, or a really great programmer, do for me? And can I now instead just ask in English and have that program written? So all these people that, you know, want to develop an app, and they have an idea, but they don't know how to program: now they can have it. You know, what does the service look like where anyone on Earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because this has the total medical knowledge and reasoning ability that the sum of humanity has ever produced? When you want to learn something, you have sort of a tutor that understands your exact style, how you best learn, everything you know, and custom-teaches you whatever concept you want to learn. Someday, you can imagine that, like, you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and in any meeting maximally, perfectly prepares you, and has all of the information that you need, and all the context of your entire career, right there for you. I could go on for a long time. But I think these will just be powerful systems. So it's really fun playing around with GPT-3. One compelling example, for someone who's more text-based, is to try Googling The Guardian essay that was written entirely by different GPT-3 queries and stitched together. It's an essay on why artificial intelligence isn't a threat to humanity. And that's impressive. It's very compelling. I actually tried one of the GPT-3 online uses. I asked the question: what is interesting about Sam Altman? Oh no. Here's what it came back with. It was rather philosophical, actually. It came back with: I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society, beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as interestingness, except in the mind of a human or other sentient being, and that, to my knowledge, this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found. Well, so would you agree that somewhere between profound and gibberish is almost where the state of play is? I mean, that's where we are today. I think somewhere between profound and gibberish is the right way to think about the current capabilities of GPT-3. I think we definitely had a bubble of hype about GPT-3 last summer. But the thing about bubbles is, the reason that smart people fall for them is there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got, and still are, overexcited about GPT-3 today, but still probably underestimate the potential of where these models will go in the future.
And so maybe there's this, like, short-term overhype and long-term underhype, for the entire field, for text models, for whatever it is that's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were, like, well-formed sentences, and there were a couple of ideas in there that I was like, oh, like, actually maybe that's right. And I think if artificial intelligence, even in its current very larval state, can make us confront new things and sort of inspire new ideas, that's already pretty impressive. Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words and it seems like someone is trying to mean something. Obviously, I don't think you believe that there's, in whatever you've built there, a sort of thinking, sentient thing that's going, oh, I must answer this question. So how would you describe what's going on? You've got something that has read the entire Internet, essentially, all of Wikipedia, etc. We've trained something that's read, like, a small fraction, a random sampling, of the Internet. We will eventually train something that has read as much of the Internet, or more of the Internet, than we've done right now. But we have a very long way to go. I mean, we're still, I think, relative to what we will have, operating at quite small scale, with quite small AIs. But what is happening is: there is a model that is ingesting lots of text, and it is trying to predict the next word. So we use transformers, which are a particular architecture of an A.I. model. They take in a context of a lot of words, let's say, like, a thousand or something like that, and they try to predict the word that comes next in the sequence. And there's, like, a lot of other things that happen, but fundamentally, that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. And I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions. What's confusing about this is that there are so many words on the Internet which are foolish, as well as the words that are wise. And how do you build a model that can distinguish between those two? And this is prompted actually by another example that I typed in. Like, I asked, you know, what is a powerful idea? Very interested in ideas; that was my question: what is a powerful idea? And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish. But then there was one that it came back with: the idea that the human race has, quote, evolved, unquote, is false. Evolution or adaptation within a species was abandoned by biology and genetics long ago. Wait a sec, that's news to me. What have you been reading? And I presume this has been pulled out of some recesses of the Internet. But how is it possible, even in theory, to imagine how a model can gravitate towards truth, wisdom, as opposed to just, like, majority views? How do you avoid something taking us further into the sort of maze of errors and bad thinking and so forth that has already been a worrying feature of the last few years? It's a fantastic question, and I think it is the most interesting area of research that we need to pursue.
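Since the passage above, a model ingesting text and learning to predict the next word, is the technical heart of the conversation, here is a tiny self-contained sketch of that objective in Python. It is not a transformer; it uses bigram counts over a toy corpus, where a transformer would use a learned neural network over a context of roughly a thousand words. But the game being played is the same: given context, predict what comes next.

    # Toy "predict the next word" model built from bigram counts.
    from collections import Counter, defaultdict
    import random

    corpus = ("optimism is a search it is a determination to look "
              "for a pathway forward somewhere out there").split()

    # For each word, count which words follow it and how often.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Sample a next word in proportion to how often it followed `word`."""
        counts = following[word]
        if not counts:  # dead end: fall back to a random word
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights, k=1)[0]

    # Generate a short continuation, one predicted word at a time.
    word = "a"
    for _ in range(8):
        word = predict_next(word)
        print(word, end=" ")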
Now, I think at this point, the question of whether we can build a really powerful general-purpose AI system, I won't say it's in the rearview mirror. We still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are, like: what should we build, and how, and why, and what data should we train on? And how do we build systems not just that can do these, like, phenomenally impressive things, but that we can ensure do the things that we want, and that understand the concepts of truth and falsehood, and, you know, alignment with human values and misalignment with human values. One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. And we showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some of it bad, and then, with a really quite small amount of feedback from human judgment about, hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior, we can feed that information from the human judges back into the model, and we can teach the model: behave more like this, and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too. Like, I think curating data sets, where there's just less sort of bad data to train on, will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're unsure, when they don't understand. But I think, as a result of simply scaling these models up, building better, I hate to use the word cognition, because it sounds so anthropomorphic, but let's say building a better ability to reason into the models, to think, to challenge, to try to understand, and combining that with this idea of aligning to human values via this technique we developed, that's going to go a very long way. Now, there's another question, which you sort of just kicked the ball down the field to, which is: how do we as a society decide to which set of human values do we align these powerful systems? Yeah, indeed. So if I understand rightly what you're saying, you're saying that it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, some ways a human can say, no, that was off, don't do that, whatever algorithm or process led you to that, undo it. Yeah. And the system is incredibly powerful at avoiding that same kind of mistake in future, because it sort of replicates the instructions, correct? Yeah. And eventually, and not much longer, I believe that we'll be able to not only say that was good, that was bad, but say that was bad for this reason, and also: tell me how you got to that answer, so I can make sure I understand. But at the end of the day, someone needs to decide who is the wise human, or humans, who are looking at the results. It makes a big difference. Someone who grew up with an intelligent design world view could look at that and go, that's a brilliant outcome, well done, gold star. And someone else would say something has gone awfully wrong here.
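As a rough illustration of the reinforcement-learning-from-human-feedback idea described above, here is a heavily simplified, self-contained sketch. It is only the shape of the idea, not OpenAI's actual method, which trains a neural reward model on human judgments and then fine-tunes the language model against it with reinforcement learning. Here, a stand-in base model proposes outputs, human labels define a reward, and sampling then prefers high-reward behavior: behave more like this, less like that.

    import random

    # Stand-in for a big pretrained model proposing candidate outputs.
    candidates = ["helpful answer", "polite answer",
                  "rude answer", "harmful answer"]

    def base_model_sample():
        return random.choice(candidates)

    # Human feedback: 1 = "this is the behavior I want", 0 = "it is not".
    human_labels = {"helpful answer": 1, "polite answer": 1,
                    "rude answer": 0, "harmful answer": 0}

    # "Reward model": here a literal lookup of the labels; in practice,
    # a network trained to generalize the human judgments to new outputs.
    reward = dict(human_labels)

    def aligned_sample(n=20):
        """Best-of-n: draw n outputs, return the one the reward model likes."""
        samples = [base_model_sample() for _ in range(n)]
        return max(samples, key=lambda s: reward.get(s, 0))

    print(aligned_sample())  # almost always prints a good-labeled answer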
So how do you avoid, and this is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now, in terms of the pushback they're getting on the output of social media and so forth, how do you assemble that pool of experts who stand for the human values that we actually want? I mean, we talk about this all the time. I don't think this is, like, solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we sort of make these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. And there will be... people do have very different value systems. Some of them are just fundamentally incompatible. No one gets to use AI to, like, exploit other people, for example, which hopefully we can all agree on. But do you want the AI to, like, you know, support you in your belief of intelligent design? Like, do I think OpenAI should say it can't, even though I disagree with that as, like, a scientific conclusion? No, I wouldn't take that stance. I think the thing to remember about all of this is that GPT-3 is still quite extraordinarily weak. It still has such big problems, and it's still so unreliable, that for most use cases it's still unsuitable. But when we think about a system that is, like, a thousand times more powerful, and let's say a million times more reliable, it just doesn't say gibberish very often, it doesn't totally lose the plot and get distracted: a system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people sort of saying, you can never use it for this thing that, like, most of the world wants to use it for, because it doesn't match our personal beliefs. Talk a bit more about some of the other uses of it, because one of the things that's most surprising is it's not just about sort of text responses. It can take generalized human instructions and build things. For example, you can say to it: write a Python program that is designed to put a flashing cursor in one corner of the screen, and the Google logo in the other corner. And it can go away and do something like that, shockingly, quite well, effectively. Yeah, it can. That's amazing. I mean, this is amazing to me. That opens the door to an entirely new way to think about programming for the future: that you could have people who can program just in human natural language, potentially, and gain rapid efficiency, with the AI doing the engineering. We're not that far away from that world. We're not that far away from the world where you will write a spec in English, and for a simple enough program, it will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code. Like, I think this is important to remember: we trained it on the language on the Internet, and, very rarely, you know, language on the Internet also includes some code snippets.
And that was enough. So if we really try to go train a model on code itself, and that's where we decide to put the horsepower of the model, just imagine what will be possible. It will be quite impressive. But I think what you're pointing to there is that models like GPT-3, to some degree or other, and it's, like, very hard to know exactly how much, understand the underlying concepts of what's going on. And they're not just regurgitating things they found on a website; they can really apply them and say, oh, yeah, I kind of, like, know about this word and this idea and code, and this is probably what you're trying to do. And I won't get it right always, but sometimes I will just generate, like, a brand-new program for something that no one has ever asked before. And it will work. That's pretty cool. And data is data. So it can do that from English to code. It can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French. But it learned them, even though we never said, this is what English is, and this is what French is, and this is what it means to translate. It can still do it. Wow. I mean, for creative people, is there a world coming where the sort of palette of possibility that they can be exposed to just explodes? I mean, if you're a musician, is there a near future where you can say to your AI, OK, I'm going to bed now, but in the morning I'd love you to present me with a thousand two-bar jingles, with words attached that have a sort of meme factor to them? And you come down in the morning, and the computer shows you the stuff, and for one of them you go, wow, that is it, that is a top-10 hit, and you build a song from it. Or would that actually be the value add? We released something last year called Jukebox, which is very near what you described, where you can say, I want music generated for me in this style, or this kind of stuff, and it can come up with the words as well. And it's, like, pretty cool. And I really enjoy listening to music that it creates. And it can sort of do full songs, two bars of a jingle, whatever you'd like. And one of my very favorite artists reached out, called OpenAI after we released this, and said that he wanted to talk. And I was like, well, total fanboy here, I'd love to join that call. And I was so nervous that he was going to say, this is terrible, this is, like, a really sad thing for human creativity, like, you know, why are you doing this, this is, like, whatever. And he was so excited. And he's like, this has been so inspiring, I want to do a new album with this. You know, it's, like, giving me all these new ideas. It's making me much better at my job. I'm going to make better music because of this tool. And that was awesome. And I hope that's how it all continues to go. And I think it is going to lead to this. We see a similar thing now with DALL-E, where graphic designers sometimes tell us that they see this new set of possibilities, because there's new creative inspiration, and their cycle time, like, the amount of time it takes to just come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction, goes down so much. And so I think it's going to just be this, like, incredible creative explosion for humans. And how far away are we from before an AI comes up with a genuinely powerful new idea, an idea that solves a problem that humans have been wrestling with?
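A note on mechanics for the translation point above: because the model only ever predicts what comes next, asking it to translate is just writing a prompt whose most plausible continuation is a translation. A sketch of what such a prompt can look like (the wording is illustrative, not taken from the interview):

    # Few-shot prompt: show a pattern, let next-word prediction finish it.
    # Sent to a completion endpoint, the likeliest continuation of the
    # last line is the French translation, even though no translation
    # rule was ever programmed in.
    prompt = ("English: Hello, how are you?\n"
              "French: Bonjour, comment allez-vous ?\n"
              "\n"
              "English: Where is the train station?\n"
              "French:")
    print(prompt)

The same trick works for English-to-code: replace the French lines with short programs and end the prompt with an unfinished request.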
It doesn't have to be quite on the scale of, OK, we've got a virus coming, please describe to us what a rational national response should look like, but some kind of genuinely innovative idea or solution. Like, one internal question we've asked ourselves is: when will the first genuinely interesting, purely AI-written TED talk show up? I think that's a great milestone. I will say, it's always hard to guess timelines, and I'm sure I'll be wrong on this, but I would guess the first genuinely interesting TED talk thought of, written, and delivered by an AI is within kind of the seven-ish-year time frame. Maybe a little bit less. And it feels like, I mean, just reading that Guardian essay, which was kind of a composite of several different GPT-3 responses to questions about, you know, the threats of robotics or whatever: if you throw a human editor into the mix, you could probably imagine something much sooner. Indeed. Like tomorrow. Yeah. So the hybrid version, where it's basically a tool-assisted TED talk, but it is better than any TED talk a human could generate in one hundred hours or whatever, if you can sort of combine human discretion with A.I. horsepower: I suspect that's, like, a next year or two years from now kind of thing, where it's just really quite good. That's really interesting. How do you view the impact of A.I. on jobs? The familiar story is that every white-collar job is now up for destruction. What's your view there? You know, I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was: every blue-collar job is up for destruction. Maybe, like, last year it was: every creative job is up for destruction, because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it, I think it's kind of gross, when people working on AI pretend like there's not going to be, or sort of say, oh, don't worry about it, it'll just all obviously get better. It doesn't all obviously get better. I think what is true is: every technological revolution produces a change in jobs. We always find new ones, at least so far. It's difficult to predict, from where we're sitting now, what the new ones will be. And this technological revolution is likely to be, again, it's always tempting to say this time it's different, and maybe I'll be totally wrong, but from what I see now, this technological revolution is likely to be more dramatic, more of a staccato note, than most. And I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own ideas about how to do that. I wouldn't say that I have any reason to believe they're the right ones. But doing nothing, and not really engaging with the magnitude of what's about to happen, I think is, like, not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most. I think previous predictions have mostly been wrong. But I'd like to see us all as a society, certainly as a field, engage in what the shifts we want to make to the social contract are, to kind of get through that in a way that is maximally beneficial to everybody. I mean, in every past revolution, there's always been a space for humans to move to. That is, if you like, moving up the food chain. It's sort of: we've retreated to the things that humans could uniquely do, think better, be more creative, and so forth.
I guess the worry about A.I. is that, in principle, and I believe this, there is no human cognitive feat that won't ultimately be doable, probably better, by an artificial general intelligence, simply because of the extra firepower that ultimately they can have, the vast knowledge they bring to the table, and so forth. Is that basically right, that there is ultimately no safe sort of space where we can say, oh, but it would never be able to do that? On a very long time horizon, I agree with you. But that's such a long time horizon. I think that, you know, like, maybe we've merged by that point. Like, maybe we're all plugged in, and then, like, we're this sort of symbiotic thing. Like, I think there's an interesting example, as we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower but no steering wheel. It's, like, you know, incredible capabilities, but no judgment. And there's, like, these obvious ways in which, today, even a human plus GPT-3 is far better than either on their own. Many people speak about a world where it's sort of A.I. as this external threat. You speak about, at some point, we actually merge with AIs in some way. What do you mean by that? There's a lot of different versions of what I think is possible there. You know, in some sense, I'd argue the merge has already, like, begun: the human-technology merge. Like, we have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers. And that can go much, much further. Maybe it goes all the way to, like, the Elon Musk vision of Neuralink, and having our brains plugged into computers, and sort of, like, literally we have a computer on the back of our head. Or it goes the other direction, and we get uploaded into one. Or maybe it's just that we all have a chat bot that kind of constantly steers us and helps us make better decisions than we could. But in any case, I think the fundamental thing is, it's not, like, the humans versus the AIs, competing to be the smartest sentient thing on earth or beyond. It's this idea of being on the same team. Hmm. I certainly get very excited by the sort of medium-term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of A.I. I mean, the one thing that the history of technology has shown again and again is that something this powerful, and with this much benefit, is unstoppable, and you will get rewarded for embracing it the most and the earliest. So talk about what can go wrong. Let's move away from just the sort of economic displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from A.I. What would you put today as the sort of most worrying of those risks? And how is OpenAI working to minimize them? I still think all of the really horrifying risks exist. I am more confident, much more confident than I was five years ago, when we started, that there are technical things we can do about
how we build these systems, and the research and the alignment, that make us much more likely to end up in the kind of really wonderful camp. But, you know, maybe OpenAI falls behind, and maybe somebody else builds AGI that thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or wants to strike a different trade-off of how fast we should go with this, and where we should sort of just say, like, you know, let's push on for the economic benefits. But I think all of this sort of, like, you know, traditionally what's been in the realm of sci-fi risks are real, and we should not ignore them, and I still lose sleep over them. And just to update people: AGI is artificial general intelligence. Right now, we have incredible examples of powerful AI operating in specific areas. AGI is the ability of a computer mind to connect the dots and to make decisions at the same level of breadth that humans have had. What's your sort of elevator pitch on AGI, how to identify it and how to think of it? Yeah. I mean, the way that I would say it is that for a while we were in this world of, like, very narrow A.I., you know, that could, like, classify images of cats or whatever, more advanced stuff than that, but that kind of thing. We are now in the era of general-purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. One thing like GPT-3 can write essays, and translate between languages, and write computer code, and do very complicated search. It's, like, a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm. Some people call it AGI, some people call it other things. But I think it implies that the systems are, like, to some degree, self-directed, have some intentionality of their own. Is a simple summary to say that, like, the fundamental risk is that there's the potential, with general artificial intelligence, of a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with, so that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power? Yeah, and that is certainly in the risk space: that we build this thing, and at some point, somewhat suddenly, it's much more powerful than we are, and we haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go OK. Lots of reasons to think we won't even get to that scenario. But that is something that I don't think people should brush under the rug as much as they do. It's in the possibility space for sure. And in the possibility subspace of that is one where, like, we didn't actually do as good of a job on the alignment work as we thought, and this sort of child of humanity kind of acts in a very different way than we think. A framework that I find useful is to sort of think about, like, a two-by-two matrix: short timelines to AGI and long timelines to AGI on one axis, and a slow takeoff and a fast takeoff on the other axis. And in the short-timelines, fast-takeoff quadrant, which is not where I think we're going to be, but if we get there, I think there's a lot of scenarios in the direction that you are describing that are worrisome,
and that we would want to spend a lot of effort planning for. I mean, the fact that a computer could start editing its own code and improving itself while we're asleep, and you wake up in the morning and it's got smarter: that is the start of something super powerful and potentially scary. I have tremendous misgivings about letting an AI system, not one we have today, but one that we might have in not too many more years, start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a great deal of societal discussion: you know, just because we can do that, should we? Yes. Because one of the things that's been most shocking about the last few years has been just the power of unintended consequences. It's like, you don't have to have a belief that there's some sort of waking up of an alien intelligence that suddenly decides it wants to wreak havoc on humans. That may never happen. What you can have is just incredible power that goes amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies created these intelligences that were programmed to maximally harvest attention, for example. For sure. And that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine, saying, look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences? I think you raise a great point in general, which is: these systems don't have to wish ill to humanity to cause ill, just when you have, like, very powerful systems. I mean, unintended consequences for sure. But another version of that is, and I think this applies at the technical level, at the company level, at the societal level: incentives are superpowers. Charlie Munger had this thing, which is: incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. And I think that applies to the individual models we build and what their reward functions look like. I think it applies to society in a big way. And I think it applies to our corporate structure at OpenAI. You know, we sort of observed that if you have very well-meaning people, but they have this incentive to sort of maximize attention harvesting and profit forever, through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI with this thing called a capped profit model, specifically so that we don't have the system incentive to just generate maximum value forever with an AGI; that seems, like, obviously quite broken. But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us, and a set of incentives that we believe will let us do our work. And we have these, like, three elements that we talk about a lot: research; sort of engineering, development and deployment; and policy and safety. Put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the sort of negative unintended consequences. So help me understand this, because I think this is confusing to some people.
So you started OpenAI initially, I think with Elon Musk as a co-founder, and there was a group of you, and the argument was: this technology is too powerful to be left developed in secret, and to be left developed purely by corporations, who have whatever incentive they may have. We need a nonprofit that will develop and share knowledge openly. First of all, just even at that early stage, some people were confused about this, saying: if this thing is so dangerous, why on earth would you want to make its secrets even more available? Well, maybe you're giving the tools to that sort of AI terrorist in his bedroom somewhere. I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build this super weapon and hand it to a terrorist. That's obviously awful. One of the reasons that we like our API model is it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to whoever would like to use it, but to put some controls on its usage. And also, if we make a mistake, to be able to pull it back, or change it, or tweak it, or improve it, or whatever. But we do want to put, and this is, and will continue to be, true, with appropriate restrictions and guardrails, very powerful technology in the hands of people. I think that is fair. I think that will lead to the best results for society as a whole, and I think it will sort of maximize benefit. But that's very different than sort of shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think, and this is part of the mission, that something the field was doing a lot of that we didn't feel good about was sort of saying, like, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we do need a societal conversation about what's going on here, what the impacts are going to be. And so although we don't always say, like, you know, here's the super weapon, hopefully we do try to say: this is really serious. This is a big deal. This is going to affect all of us. We need to have a big conversation about what to do with it. Help me understand the structure a bit better, because you definitely surprised many people when you announced that Microsoft were putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights. And so, for example, they are the exclusive licensee of GPT-3. So talk about that structure, and how you win. Microsoft presumably have invested not purely for altruistic purposes. They think that they will make money on that billion dollars. I sure hope they do. I love capitalism. But the thing that I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is that we, like, went around to people that might fund us. And we said, one of the things here is that we're going to try to make you some money, but, like, AGI going well is more important. And we need you to sign this document that says, if things don't go the way we think and we can't make you money, like, you just cheerfully walk away from it, and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that. We get that the mission comes first here. So, again, I hope it's a phenomenal investment for them.
They really pleasantly surprised us on the upside of how aligned they were with us about how strange the world may get here, and about the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't and don't think they will.

So the way it's set up is that if at some point in the coming year or two, Microsoft decides that there's some incredible commercial opportunity they could realize out of the AI that you've built, and you feel, actually, no, that's damaging, you can block it, you can veto it?

Correct. So the full, most powerful version of GPT-3 and its successors are available via the API, and we intend for that to continue. What Microsoft has is the ability to put that model directly into their own technology, if they want to do that. We don't plan to do that with other people, because then we can't have all the controls we talked about earlier, but they're a close, trusted partner, and they really care about safety too. Our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained, and the structure of the API lets us continue to increase the safety and fix problems when we find them.

But the structure: so we started out as a nonprofit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be smarter and smarter algorithms, we just needed bigger and bigger computers as well, and that was going to require a scale of capital that no one, at least certainly not me, could figure out how to raise as a nonprofit. We also needed to be able to compensate the very highly compensated, talented individuals who do this work. But a full for-profit company had the runaway-incentives problem, among other things, and also a problem about fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this kind of hybrid, where we have a nonprofit that governs what we do, and it has a subsidiary LLC that we structured in a way that it can return a fixed amount of profit, so that our investors and employees, hopefully, if things go how we'd like (and if not, no one gets any money), get to make a one-time great return on their investment, or on the time they spent and the equity they hold at OpenAI. Beyond that, all the value flows back to the nonprofit, and we figure out how to share it as fairly as we can with the world. I think that this structure, this nonprofit with a very strong charter in place, everybody who joins signing up for the mission coming first, and the acceptance that the world may get strange, was at least the best idea we could come up with. And so far it feels like the incentive system is working, just as I watch the way that we and our partners make decisions.

But if I read it right, the cap on the gain that investors can make is 100x. That's a massive cap. That was for our very first-round investors; as we now take a bit of capital, it's way, way lower. So your deal with Microsoft isn't, you can only make the first hundred billion dollars, and after that we're giving it to the world? It's way lower than that. Have you disclosed what it is? I don't know if we have, so I won't accidentally do it now. All right.
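A minimal sketch of the capped-profit mechanics as described here: an investor's return is capped at some multiple of what they put in, and everything above the cap flows to the nonprofit. The 100x multiple comes from the conversation; the dollar figures below are hypothetical.

# Capped-profit payout, per the structure described above (numbers hypothetical).
def investor_payout(invested, attributable_return, cap_multiple=100):
    # The investor keeps at most cap_multiple times their investment;
    # everything above that cap flows back to the nonprofit.
    to_investor = min(attributable_return, cap_multiple * invested)
    to_nonprofit = attributable_return - to_investor
    return to_investor, to_nonprofit

# Example: $1B invested, $500B of value attributable to that stake.
print(investor_payout(1e9, 5e11))
# -> (100000000000.0, 400000000000.0): $100B to the investor, $400B to the nonprofit.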
OK, so explain a bit more about the charter, and how it is that you hope to avoid, or I guess help contribute to, an AI that is safe for humanity. What do you see as the keys to us avoiding the worst mistakes and really holding on to something that's beneficial for humanity?

My answer there is actually more about technical and societal issues than the charter, so if it's OK for me to answer it from that perspective... Sure. OK, and I'm happy to talk about the charter too. I think this question of alignment that we talked about a little earlier is paramount. And to understand it, it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. Intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. Accidental would be kind of the Nick Bostrom case: make a lot of paper clips, and view humans as collateral damage. There's misalignment in both cases, but to varying degrees. If we can really, truly, technically solve the alignment problem, and the societal problem of deciding which set of human values we align to, then the systems understand right and wrong, and they understand, probably better than we ever can, the unintended consequences of complex actions in very complex systems. And, you know, if we can train a system to which we can say, don't harm humanity, and the system can really understand what we mean when we say that (and again, who is "we", and what counts as harm, have some asterisks on them)...

Sorry, go ahead. Well, "if they could understand what it means to not harm humanity": there's a lot wrapped up in that sentence. Because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the Facebook and Twitter examples. The engineers building some of those systems would say, we've just designed them around what humans want to do: if someone wants to click on something, we will give them more of that thing, and what could possibly be wrong with that? We're just supporting human choice. That ignores the fact that humans are complicated, conflicted animals who are constantly making choices that a more effective version of themselves would agree are not in their long-term interests. So that's one part of it. And then, layered on top of that, you've got the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that nobody would have designed. How do you cut through that? An AI has to make a decision in a moment, on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system crashing, basically, in some way?

I've heard a lot of behavioral psychologists and other people who have studied this say versions of the following, and I hate to keep picking on Facebook, but we can do it one more time since we're on the topic. Maybe you can't, in any given moment at night when you're tired and you've had a stressful day, stop yourself from the dopamine hit of scrolling Instagram, even though you know that's bad for you and it's not leading to your best life. But if you were asked in a reflective moment, when you were fully alert and thoughtful, do you want to spend as much time as you do scrolling through Instagram, does it make you happier or not, you would actually be able to give the right long-term answer.
It's sort of a "the spirit is willing, but the flesh is weak" kind of moment. And one thing that makes me hopeful is that humans do, on the whole, know what we want, and presented with research or an objective view about what makes us happy and what doesn't, we're pretty good. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. The AI, I think, can be an even higher brain. And as we teach it, you know, here is what we really do value, here is what we really do want, it will help us make better decisions than we are capable of, even in our best moments.

So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs, that they have to tap into not what humans want, which is an ill-defined question, but what humans in reflective mode want. Yeah, we talk about this a lot. I mean, do you see a real chance that something like that could be incorporated as a sort of absolute golden rule, and, if you like, spread around the community so that it seeps into corporations and elsewhere? Because that would potentially be a game changer.

Corporations have this weird incentive problem, right? What I was trying to speak about was something that I think should be technologically possible, and that is something that we as a society should demand. I think it is technically possible for this to be sort of a layer above the neocortex that makes even better decisions for us, for our welfare and our long-term happiness and fulfillment, than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do a pincer move between what the technology is capable of and what we as a society demand, maybe we can meet everybody in the middle that way.

I mean, there are instances where, even though companies have their incentives to make money and so forth, in the knowledge age they can't make money if they have pissed off too many of their employees and customers and investors. By analogy, in the climate space right now you can see more and more companies, even those emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people because they don't want to work for someone who's evil, and our customers are saying they don't want to buy something that is evil. So, ultimately, you can picture processes where they do better. And I believe that most engineers who work in Silicon Valley companies, for example, are actually good people who want to design great products for humanity, and that the people who run these companies want to be a net contribution to humanity. It's that we've rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess-up. So it's like: OK, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?

Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally extremely good.
Very few people wake up in the morning thinking about how they can make the world a worse place. But the incentive systems we're in are so powerful. Even engineers who join with the absolute best of intentions get sucked into this world where they're trying to go up from a level four to a level five, or whatever Facebook calls those things, and it's pretty exciting. You get caught up playing the game. You're rewarded for doing things that move the company's key metrics. It's fun to get promoted; it feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe not what we all want. Here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then align the incentives of individuals at those companies with those now-realigned company incentives, the more likely we are to end up with things like AGI that follow an incentive system of what we want in our most reflective, best moments, and that are even better than what we could think of ourselves.

Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream of? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?

I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And if you are the first, you have a lot of norm-setting power, and I think you've already seen that. We have released some of the most powerful systems to date, and we have done that as a kind of controlled release, where we've put out a bigger model, then a bigger one, then a bigger one, while trying to talk about the potential misuse cases and about the importance of releasing this behind an API so that you can make changes. Other groups have followed suit in some of those directions, and I think that's good. So no, I don't think we can be the only one, but I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction, or maybe we're wrong and somebody else has a better direction.

Do you have a structural advantage, in that your mission is to do this for everyone, as opposed to for some corporate objective? Why is it that this came out of OpenAI and not someone else? It's surprising in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.

In some sense it's surprising, and in some sense the startup wins most of the time. I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine three different clans, research, engineering, and sort of safety and policy, that don't normally combine well, and I think we have an unusual strength there. We're clearly well funded. We have super talented people.
But what we really have is intense focus and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we work really hard, and if we stopped doing that, I'm sure someone would run by us fast.

Tell us a bit more about some of your prior life, Sam. For several years, you were running Y Combinator, which has had incredible impact on so many companies; there are so many startup stories that began at Y Combinator. What were the key drivers in your own life that took you on the path you're on? And how did that path end up at Y Combinator?

No exaggeration, I think I have had, back to back, the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science; I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on a project, and that same year this thing called Y Combinator started and funded me and my co-founders. We dropped out of school and did this company, which I ran for about seven years, and then it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives, badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. After my company was acquired, Paul Graham, the founder of YC and truly one of the most incredible humans and business people, asked me if I wanted to run it. And kind of the central learning of my career, at YC and with individual startups, has been that if you really scale things up, remarkable things can happen. So I did it, and I thought one of the things that would make it exciting for me, personally motivating, would be if I could push it in the direction of doing these hard-tech companies, one of which became OpenAI.

Describe, actually, what Y Combinator is, how many people come through it, and give us a couple of stories of its impact.

Yeah. So you basically apply as a handful of people with an idea, maybe a prototype, and say, I would like to start a company, will you please fund me? And we review those applications... I shouldn't say "we" anymore; I guess they fund four hundred companies a year. You get about one hundred and fifty thousand dollars, YC takes about seven percent ownership, and then it gives you lots of advice and networking, sort of a fast-track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of the billion-dollar-plus companies that got started in the US came through the YC program. Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it has become an incredible way to help people who understand technology get a three-month course in business, but instead of, like, herding you through an MBA, it actually teaches you the things that matter, and people go on to do incredible, incredible work.
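As a quick back-of-the-envelope on the standard deal Sam quotes, about $150,000 for roughly seven percent, the implied post-money valuation works out to a little over $2 million:

# Implied post-money valuation of the standard deal described above.
investment = 150_000   # dollars
equity = 0.07          # fraction of the company YC takes
print(f"implied post-money valuation: ${investment / equity:,.0f}")
# -> implied post-money valuation: $2,142,857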
What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying, but I think you would argue, I think I would argue, that they have done as much as anyone to shape the future. Why? What is it about them?

I think it is the ability to take an idea and, by force of will, make it happen in the world, inside an incentive system that rewards you for making the most impact on the most people. In our system, that's how we get most of the things we use. That's how we got the computer I'm using, the software I'm using to talk to you on it, all of this. You know, everything in life has a balance sheet. There are plenty of very annoying things about entrepreneurs, and plenty of very annoying things about the system that idolizes them, but we do get something really important in return. As a force for making the things happen that make all of our lives better, it's very cool. Otherwise, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, still intellectually interesting, but there's got to be something in society's reward function that asks: did you actually do something useful, did you create value? And I think entrepreneurship and startups are a wonderful way to do that. We get all these great software companies, but I also think it's how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. On any of those topics, and a long list of others I could point to, there are a number of startups that I think are doing incredible work, some of which will actually deliver.

It is a truly amazing thing, when you pull the camera back, to believe that a human being could be lying awake at night, and something pops inside their mind, a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way the future could be better, and they can actually picture it. And then they wake up, they talk to other people, they persuade them, they persuade investors and so forth. The fact that this system exists, and that they can then actually change history, is in some sense mind-boggling, and it happens again and again. So, you've seen so many of these stories happen. What would you say is the key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be?

If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiator and predictor. And if you would allow a second, I would pick communication skills, or evangelism, or something in that direction. There are all the obvious ones that matter, like intelligence, but there are a lot of smart people in the world, and when I look back at the thousands of entrepreneurs I've worked with, many of whom were quite capable, I would say those are one and two of the surprisingly differentiating characteristics.

When I look at the different things you've built and are working on, it could not be more foundational for the future. Entrepreneurship, I agree, is really what has driven the future. But some people now look at Silicon Valley and this story and worry about the culture, right? That it's a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example?

For sure. And in fact, since these are the two things I've thought the most about, I'm excited for the day when someone combines them and uses AI to select, better and maybe more fairly, who to fund and how to advise them, and really makes entrepreneurship super widely available. That will lead to better outcomes and more societal wealth for all of us. So, yeah, I think broadening the set of people who are able to start companies and get the resources they need is an unequivocally good thing, and it's something that Silicon Valley is making some progress on, but I hope we see a lot more. And I do really, truly think that the technology industry and entrepreneurship are one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things.

My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be?

We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. We all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one-time shift, to go.

I'm kind of awed by the breadth of things you are engaged with. Thank you so much for spending so much time sharing your vision. Thanks so much for having me.

OK, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky: you have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind that's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubenstein and Sheila Bouffard. Sam Bair is our mixer. Fact check by Paul Durbin, and special thanks to Michelle Quint, Colin Helms and Anna Phelan. If you like the show, please rate and review it; it helps other people find us. We read every review, so thanks so much for listening. See you next time.
