Nathan Benaich of Playfair Capital joins Nick to cover Artificial Intelligence Investing, Part 1. We will address questions including:
- Can we start off w/ a simple definition of artificial intelligence?
- The term ‘Machine Learning’ is often used when discussing AI. In your estimation, are Machine Learning and AI synonymous or are there differences?
- Can you explain the law of accelerating returns and how the concept relates to AI?
- Can you give an overview and describe the differences between:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Superintelligence (ASI)
- When do the foremost thinkers believe that AGI and ASI will be reached?
- Raw brain power is not the only thing that distinguishes smart beings from one another… From a scientific standpoint, would you be willing to point out the key things that distinguish humans from other organisms on the planet as well as the key thing that distinguishes humans from existing examples of AI?
- The Turing test is an interesting discussion point w/ regards to AI. Can you briefly review what it is and what it means in the context of AI?
- I’ve seen different AI technologies categorized (e.g. machine learning / deep learning, predictive analytics, natural language processing and semantic analysis, speech recognition, computer vision, etc). Can you talk about the types of technology and categories within AI?
- What are some of the well-known examples of AI that we’ve seen over recent years?
- How does the problem-domain approach relate to the categories by which investors have structured AI… including machine learning / deep learning, predictive analytics, natural language processing and semantic analysis, speech recognition, computer vision?
- Nathan on Twitter
- Nathan’s article on TechCrunch
- Playfair Capital
- DeepMind Mastering Atari Breakout
- Playfair on Twitter
- Sean Everett’s article: Tool: 31 Resources to Learn AI & Deep Learning, From Beginner to Advanced
Nick: Today we have #Nathan Benaich. #Nathan has been with #Playfair Capital since 2013 and focuses on deal sourcing, due diligence and ongoing portfolio support. Prior to #Playfair, #Nathan earned a PhD in Oncology as a Gates scholar at the University of Cambridge and a BA in Biology from Williams College. I was recently chatting with #Sean Everett, who of course published one of the most robust resources on AI over on #medium.com, and I asked #Sean who he thought would be the best person to interview on the subject. And after his extensive research, he suggested #Nathan as one of the earliest investors in true AI that he's ever seen.
#Nathan, it’s a big pleasure to have you and thank you for joining us.
Nathan: Thanks a lot #Nick
Nick: Can you start us off by just telling us about how you got into startup investing and venture capital?
Nathan: Yes, sure. So I suppose I had a bit more of a roundabout route than traditional investors in the space. So, yeah, I was in college, originally interested in medicine actually, because of the fact that, you know, medicine is a complex space. It deals with human beings and understanding biology and understanding how and why it goes wrong, so that we can design, you know, therapeutics and methodologies to try and prevent that. So I was always very interested in complex problems and trying to come up with elegant solutions that, you know, eventually would impact millions of people. Throughout my college career and increasingly my graduate career, I started to realize that in fact the time scale that's required to bring about that change and bring about innovation in the healthcare space is, you know, on the order of a decade, and requires, you know, over a billion dollars to get a drug to market from a lab; and it's, you know, plagued for better or for worse by a lot of regulation with the FDA, and coupled to the years and probably decades of training that's required to have that translational career between medicine and research. You know, I just increasingly thought that it wasn't really the right career track for me to get at that core motivation. Meanwhile, when I was in college, we saw the birth of the iPhone and the birth of a lot of consumer web services like Dropbox and Twitter and SoundCloud and Skype. And just as a naive consumer, all these services were making my life so much richer and really got at core problems that I had to deal with every day. And I saw in entrepreneurs in that space the same sort of underlying motivation that had brought me to the healthcare space.
So really I just transitioned from, you know, a consumer, an everyday person, and moved into the software world instead of continuing a career in the healthcare space. And then the question of actually getting your hands on capital to be able to invest, that's, you know, a question of right place and right time to be honest. Especially since, you know, the venture ecosystem in Europe and London is far smaller than what's available in the US. So yeah, I knocked on a lot of different doors and had the good fortune of meeting #Fede, who is an angel who was investing by himself for about two years before he set up #Playfair as a team. And we got along really well. And there was, you know, a need to add more structure and formality to what we were doing and provide a wider array of services to portfolio companies, as that desire entrepreneurs had to move from purely getting money to getting support for actually building a company was really coming along in earnest around 2010 and after, when we started. So that's how I ended up here.
Nick: You're based in London, but I'm sensing from your accent, the couple of times that we've spoken, that you don't sound like a Londoner. Are you from the area?
Nathan: Yeah it’s a bit of a, it’s a bit of a confused mess. I’m originally, originally from France but I went to school in, in Geneva in Switzerland and I studied in English, took a bunch of courses in French as well where I’m also fluent. But, but yeah, I went to university in the States and yeah kind of didn’t, didn’t look back ever since really. So I did 3 years there, 1 year abroad in the UK as an undergrad and then my graduate school in the UK. So yeah the accent kind of changes depending on who I speak to but it’s, it’s not that English.
Nick: I’m a little jealous that you went to school in Geneva. I spent a summer in Lausanne and I never wanted to leave.
Nathan: Yeah it’s a, it’s a beautiful place
Nick: And can you tell us a little bit more about #Playfair’s focus and what you focus on there?
Nathan: Yeah. So, you know, the benefit of having a single LP is that, you know, the focus can evolve over time. But really the areas that we've been interested in from the get go are, you know, core technologies that will enable data driven companies and data creating companies. The original investment that #Fede had made back in 2010 was in a business called #DueDil that was enabling private companies to understand more about the businesses that they were doing business with, from the perspective of operating metrics and directorship structures and group structures, by gathering publicly available information and information that companies had deposited in the UK. And so we've always been very interested in the meteoric rise of data in various flavors that's available, you know, across enterprises and consumers, and really realizing that there needs to be a new infrastructure that's built to support the capture, the analysis, the presentation and then increasingly the automation and intelligence on top of that data. And so we're big believers in investing in core technologies that support that ecosystem. And of course, within that is the topic of today, which is AI. And, you know, the other area of focus that we have is on companies that are using some of those core technologies that have been, you know, packaged by businesses and either sold as a SaaS product for example or given away in the open source community, but instead taking these components and compiling them together in a way that is non obvious, that really incorporates, you know, a granular understanding of their user base and how experiences need to be crafted for, you know, compelling habitual behaviors to be developed with that product in mind. Because as you see, the barriers to creating software companies have increasingly dropped over time.
That just means that competition gets more heated with time, which eventually means that really you differentiate on the basis of your data, your core technology and your user experience. And so that’s sort of the, the areas in which we want to make investments.
Nick: Yeah. So you mentioned AI a couple times there. And clearly today’s topic is AI. So can you start us off with a simple definition of artificial intelligence?
Nathan: Yeah, simple, that's simple. The way I view it, because there are a few different camps I suppose, is really that AI is the field of building computer systems, you know, that can understand and learn from observations without the need to be explicitly programmed. And increasingly the goal for AI is really that these systems can perform functions that are increasingly human like. So that they get at aspects of human cognition, whether those algorithms are optimized by learning automatically from data versus needing to be hard coded. So that's the ultimate goal, and within that there's a, you know, plethora of enabling technologies and different fields that feed in to that broader goal.
Nick: Right. And you know, we hear this term 'machine learning' often. And it's used in the AI discussion. In your estimation, are machine learning and AI synonymous, or can you point out some of the differences?
Nathan: Yeah. So my view is that AI is the, the umbrella term. So that’s the, that’s the ultimate goal encompassing the, you know, the variety of disciplines and technologies that are required to build those, those self learning intelligent systems that perform, you know, human like cognitive functions
Nathan: where, where machine learning is one subset of that. And within machine learning you have the separate approaches whether they are supervised or unsupervised, deep learning and other buzzwords that you might be hearing today. Then, you know, another separate area which I think comes under AI is increasingly the field of robotics because as we want machines to be able to do more and more real world relevant tasks that ends up spanning far more than what’s possible on a desktop computer or on a smartphone. So areas like robotics for example also fit under, under AI, but those are not necessarily machine learning although some machine learning techniques might be important for robotics.
Nick: Can we start off just backing up on the whole concept of artificial intelligence? Would you be able to explain the Law of Accelerating Returns for us and also how that concept relates to AI?
Nathan: Yeah. So, I mean, this idea is really one that's part and parcel of #Ray Kurzweil's thesis, who is now at Google. And he is famous for making quite a number of what seemed to be outlandish predictions about what would be possible technologically in the future. And then, in quite an uncanny way, a lot of them have actually panned out, driverless cars being one example. And so his idea of the Law of Accelerating Returns is really that the fundamental measures of information technology, which are essentially compute capacity and cost, follow exponential trajectories. And these trajectories can be predictable. And what's interesting as well is that he says the rate of this exponential growth is itself exponential. And what's important to know is that a consequence of the Law of Accelerating Returns is that at some point you get this so-called singularity, which is the point at which machine intelligence essentially becomes greater than that exhibited by humans. And, you know, one of the other laws that feeds into this thesis is of course Moore's Law, coined by #Gordon Moore, the co-founder of Intel, which essentially states that, you know, the number of transistors that can fit on a chip roughly doubles every two years. So essentially the compute capacity that can fit on a chip increases every two years. And that's one of the driving forces behind this Law of Accelerating Returns, which contributes to this, you know, exponential growth in technological ability.
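The doubling arithmetic behind Moore's Law is easy to sketch. This is an illustrative calculation only, assuming an idealized, clean two-year doubling period; real transistor counts only roughly follow it:

```python
def transistors(start_count, start_year, year, doubling_years=2.0):
    """Project a transistor count forward, assuming a doubling every `doubling_years` years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Intel's 1971 4004 had roughly 2,300 transistors; ten doublings over 20 years
# multiplies that count by 2**10 = 1024.
print(round(transistors(2_300, 1971, 1991)))  # 2355200
```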
Nick: Yeah. And this all relates to sort of these three types of AI that we’ve heard about. There’s ANI, AGI and ASI. ANI being Artificial Narrow Intelligence, AGI being Artificial General Intelligence and ASI being Artificial Super Intelligence. Would you be willing to give us an overview and describe the difference between the three of those?
Nathan: Yeah. So you can regard this as a spectrum, from intelligence that's really restricted to a specific domain and a very specific sub task, and is often built with the sole intention of tackling that task, which is the narrow end, all the way to superintelligence, which is essentially the singularity point where machines are more intelligent than humans. And so a good example of ANI is how IBM's Deep Blue beat Garry Kasparov in chess. Now, you know, this computer was tasked specifically with this problem and it's not able to do any other task. Its ability doesn't transfer, and really it doesn't exhibit any benefit on other problems in the world. Now as you move towards general intelligence, which is the goal that Google DeepMind is seeking to achieve and others are as well, that's where intelligence exhibited by a non human system is on par with tasks that can be completed by humans. You know, these tasks range from knowledge representation and reasoning, to planning and self learning, communicating and exhibiting emotion. And importantly, the feeling is that AGI systems must be able to solve this multitude of tasks without needing to be purposefully rebuilt or re-engineered in some way for each new task that they're being set on. And so artificial superintelligence, to end, is this idea that machines will be smarter than humans, and that encompasses every field. So, you know, including creativity and wisdom and social skills and moving around in the real world, for example. And #Nick Bostrom, who was one of the philosophers who first wrote about this idea at length in his book on the topic, said that ASI can be exhibited by an individual computer or a network of devices. So it doesn't have to be exhibited in any specific way; the point is just for non human intelligence to perform better in every single task than humans do.
Nick: Yeah. So we've seen many examples of ANI, as you mentioned. There have been a bunch of machine learning programs around chess, and Google has a number of products that are focused on ANI. They often cite the spam filters within Gmail. Can you talk about when AGI is forecasted to be reached? So the point at which machines are sort of at the capability level of one individual human, and also when some of the smartest minds in this field are projecting that ASI potentially could be reached?
Nathan: Yeah, I mean, I really don't know how you can accurately point to a certain day at which this is going to happen. But I think general intelligence is believed to occur in the next decade or two. And then some people float the singularity as 2040. But, you know, again, who knows. But I do think that, in this whole debate, there is definitely value for ANI. Like in a lot of cases, having a system that's specifically engineered for your task is more than sufficient, because at the end of the day, you know, in my view, a lot of the benefit of AI is that it tackles, you know, a narrow task that has relevance to its user. It's really designed to overcome, you know, inefficiencies that we as humans exhibit in the way that we behave and in the way that we conduct our work. You know, sometimes we have to do very mundane repetitive tasks, so at some point we'll make mistakes just because we get bored or tired. Now, you know, if you can come up with an ANI that solves that, I think it's of great value in and of itself, as is, you know, AGI from a research perspective. The only time which I think you can say that AGI is more valuable than ANI is if the former systems do genuinely perform better than the narrow ones. So some of this debate is academic, some of it is theoretical, and then a lot of these three descriptions are used by people who try and forecast how we should try and control systems that we've built today, such that they're not co-opted for tasks or purposes that we didn't imagine when we were building them.
Nick: So we talked about the Law of Accelerating Returns, we talked about Moore's Law. But, you know, a lot of the things I have come across on AI talk about raw computing power, raw brain power, and that's not really the only thing that distinguishes smart beings from one another. From a scientific standpoint, would you be willing to point out some of the key things that distinguish humans from other organisms on the planet, as well as key things that distinguish humans from existing examples of AI?
Nathan: Yeah, I mean, that's a good one. Certainly, you know, tasks or behaviors that machines don't do so well on currently are, you know, emotion and creativity and even inventing new things. I think there's a lot of debate as to whether artificial systems can evolve in their own right and evolve intelligence using a selection process similar to the one that biology has experienced over time. I don't really see it right now. I get the comparison, but I still think that there's just so much more work to be done to ensure that, you know, the systems we build aren't as brittle as they are today. Now, another element that I think might separate, you know, us from machines and from other species is this concept of altruism, of behaving for the benefit of others even though it might harm ourselves. You know, that's something that machines don't currently exhibit, and some animals don't exhibit either. But in terms of giving you a more detailed example, I'm not sure whether I could do that.
Nick: There’s also this component of a survival instinct, right, with
Nick: with biological species and
Nick: I mean it’s a little difficult to be able to comprehend that even a very intelligent machine would have a will to thrive.
Nathan: Yeah, yeah, because in that situation the machine would have to be able to be self sufficient and make its own decisions and evolve in its environment without there being any human intervention. And I don't know, there seem to be some basic constraints around that that don't really make it possible in the near future. Like, it would have to have the ability to generate its own power or generate its own, you know, sustenance, and how would that work? If it had solar panels on itself and, as it's walking outside, it could charge itself, then maybe you could do away with any interaction from the factory or from the human. But maintenance is another one, you know. I mean, parts break down, and how does it repair itself? So I think there are just a lot of basic things that mean that robots won't really be walking down the street reproducing and evolving intelligence greater than what they were designed with, at least for a long time. I think even just looking at the videos of the DARPA Robotics Challenge is a really great example of where the state of the art is. You know, some of the tasks are piloting a car instead of driving it, essentially at one mile an hour, or just turning the wheel to try and not hit cones. Another one is climbing one or two steps. And a third one is, you know, opening a door. And a lot of the best, state of the art robots stumble at those blocks. So I think it's important that there's always a balanced view of where we are, because it's very easy for the media to, you know, expand upon very small results and start forecasting what that might mean for the future, even though that future is very, very, very far away.
Nick: Yeah, I think we’ve seen that, we’re very much in this, this ANI phase, right? If we can give a smart machine very fixed constraints in a set environment and the environment doesn’t have anything unusual that could be introduced, then the machines can perform pretty well and can be designed to, to learn and get better at whatever task. But once you start getting into the context of varying constraints, it seems like a lot of these robots are falling down.
Nathan: Yeah, yeah. I mean, there are certainly some types of networks, so different deep learning architectures, that work really well on image recognition tasks and speech recognition tasks, and there's some transferability there. And there's this other domain of transfer learning, where you can take a network, train it on a corpus of text, and that can be useful to an image recognition task, because there are some features that are common between the two at a certain level of the hierarchy of features. And so there are some examples of transferability of ANIs, if you will, but we're still not at the point where we have AGIs that can be useful across a plethora of tasks without too much re-engineering. I think probably the best example is what DeepMind has done.
Nick: So, Nathan, the Turing Test is an interesting discussion point with regards to AI.
Nick: Yeah. Can you briefly review what it is and what it means in the context of AI?
Nathan: Yeah, sure. So it was really a paper in 1950 that #Alan Turing wrote, in which he suggested this game, the imitation game, which has since been coined the Turing Test. And that's really an environment in which there is a human who is interrogating another human and a computer, but doesn't know which party is the human and which is the machine. And the human who is doing the interrogation only communicates with, you know, the human and the computer via textual messages. And the idea is that if the interrogator can't distinguish between the two other entities as to whether they are human or machine, then, he says, it's not unreasonable to call that computer intelligent, because as far as the interrogator can tell there's no obvious distinction between the two. And I think he quotes, you know, confidence around 70% to determine that. But that's been the crux of the Turing Test: understanding whether you're actually talking to a machine or talking to a person in textual communication.
Nick: Did you see this, this recent film Ex Machina?
Nathan: Yeah, yeah
Nick: What did you think? I was pretty impressed but I’m also a novice in this topic area so I’d be curious to hear your opinion?
Nathan: Yeah, I mean, it was kind of an interesting exploration into how humans would react in that situation, because, you know, a lot of this is, as we talked about, just imagining what it would be like. But you're kind of observing, through the medium of the film, how someone interprets that relationship evolving between, you know, essentially a superintelligent robot and a human. It was really fascinating, especially the emotional relationship that developed. It had, you know, interesting ramifications for what emotion really is and to what extent a machine can feel it. So I thought it was a really cool film actually.
Nick: Yeah. One of the lines that jumped out at me was when the founder of the search engine, effectively the Google,
Nick: in the film was talking about the wetware brain that they developed. And he said, everyone thought we were studying what people think, what they search for, and really what they were studying and understanding was how people think.
Nathan: Yeah, yeah. Yeah, exactly. And that's, I suppose, using neuroscience and computationally modeling a thought process. Yeah, it's fascinating.
Nick: So, you've mentioned a few different types of AI, and we see these technologies categorized often; you've mentioned machine learning among some others. Would you be willing to talk about the types of technologies and the categories within AI?
Nathan: Yeah, sure. So I suppose within this question it's important to remember that, you know, there are a variety of techniques that are used. And these techniques are often, you know, best considered in the context of problem domains. So, you know, there are some methodologies that are really good for dealing with images, and others that are very well suited for dealing with text. And so, even though there are a lot of buzzwords being thrown around, it's always important to consider which technology is best suited for a specific problem and the type of data that you're dealing with. But really speaking, there have been a number of booms and busts in the AI research space ever since it started in the mid 1950s. And, you know, with that there have been different waves of technology approaches. The first wave was mostly about rules based representations of the way that, you know, experts in a knowledge work domain, for example, understood certain processes to work. That was all about saying, well, you've got this knowledge worker, for example, who's processing mortgage applications, and this is the decision tree that he or she flows through, saying, you know, if this then that, the mortgage gets allocated or it doesn't. And the field was really excited about programming what were called expert systems to follow these 'if this then that' rules to come out with results, such that you wouldn't have to have a human go through that. So that was the first wave of technology within the broader goal to achieve AI: the rules based expert systems. Then you have the second wave, the machine learning approach, where you require a training data set: raw data on which you essentially want to teach an algorithm to do something.
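That first-wave "if this then that" idea can be sketched in a few lines of Python. The rules and thresholds below are invented for illustration (they are not real lending criteria); the point is simply that nothing here is learned from data:

```python
def approve_mortgage(income, loan, credit_score):
    """Hand-coded rules an expert might articulate; nothing is learned from data."""
    if credit_score < 600:       # rule 1: reject poor credit outright
        return False
    if loan > 4.5 * income:      # rule 2: cap the loan at 4.5x income
        return False
    return True                  # all rules passed: approve

print(approve_mortgage(50_000, 200_000, 700))  # True
print(approve_mortgage(50_000, 300_000, 700))  # False (loan exceeds the cap)
print(approve_mortgage(90_000, 200_000, 550))  # False (credit score too low)
```

An expert system is exactly this, scaled up to thousands of hand-maintained rules.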
And then a test data set, which you use to test the results. And the idea was that the more data you saw, the better the results would come out the other end over time. And within machine learning you have two separate training approaches: one is supervised and the other is unsupervised. The notion in supervised learning is that the algorithms use specific features in the data, and you can think of these as parameters within that data, to help make predictions. So, make predictions as to what the output should be, using a set of examples that have been correctly labelled. So you can imagine that there's, you know, a database of dog and cat pictures, and you want to train a machine learning algorithm to recognize a dog and a cat. In a supervised machine learning approach you would have examples of labeled cat pictures and labeled dog pictures. And then you would say, dogs have eyes that look like this and cats have eyes that look like this, and a number of other features that would be associated with the correct label. And then you show it all these examples, and you essentially train an algorithm on the basis of those features using that training data set, and then test the algorithm.
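A minimal sketch of that supervised setup, in pure Python: a nearest-centroid classifier trained on labeled feature vectors. The "eye size" and "snout length" features and all the numbers are invented for illustration:

```python
from statistics import mean

def train(examples):
    """examples: list of (features, label). Returns the mean feature vector per label."""
    by_label = {}
    for feats, label in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: tuple(mean(col) for col in zip(*rows))
            for label, rows in by_label.items()}

def predict(centroids, feats):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(feats, centroids[label]))
    return min(centroids, key=dist)

# Toy labeled training set: (eye_size, snout_length) -> species
labeled = [((2.0, 6.0), "dog"), ((2.2, 5.5), "dog"),
           ((1.0, 2.0), "cat"), ((1.1, 2.3), "cat")]
model = train(labeled)
print(predict(model, (2.1, 5.8)))  # dog
print(predict(model, (0.9, 2.1)))  # cat
```

The labels in `labeled` are the "correctly labelled examples" Nathan describes; real image classifiers work on far richer features, but the train-then-predict shape is the same.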
Nathan: The other approach within machine learning is the unsupervised approach, where you have the systems learning to do basically the same thing, but without labeled training examples. And the third wave is deep learning. And that's the idea that you take input data, so you can look at that dog and cat example again, but where the input data is essentially the pixels in that image, and you pass those pixels through several layers of computation, each of which is essentially creating higher level abstractions and representations of that data, where the last layer is essentially outputting the result of interest, which is dog or cat. And the advantage with these systems is that they leverage what we mentioned before, which is the exponential growth in data and computational capacity and the cost reductions with Moore's Law. And what's particularly interesting with these approaches is that you don't have to hand craft those features, you don't have to do any feature engineering, because the network is essentially learning those features and the intrinsic properties of that data automatically. And that makes them far more scalable, it makes them far more powerful for solving really complex problems where the data set is very rich and difficult to interpret, which is the case with images, speech and video.
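For contrast, an unsupervised sketch: k-means clustering takes feature vectors with no labels at all and discovers the groups by itself. Pure Python, with invented (eye_size, snout_length) data points:

```python
def kmeans(points, k, iters=10):
    """Naive k-means: assign each point to its nearest centroid, then re-average."""
    centroids = [points[i] for i in range(k)]   # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if the cluster is empty)
        centroids = [tuple(sum(col) / len(col) for col in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# No labels attached: the algorithm must find the structure itself.
points = [(2.0, 6.0), (2.2, 5.5), (2.1, 5.8), (1.0, 2.0), (1.1, 2.3), (0.9, 2.1)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two groups found without labels
```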
Nick: So, was that all within the second wave there, the supervised, unsupervised and deep learning?
Nathan: So deep learning I'd say has been around for a couple of decades, but it's really part of this third wave because it's only been enabled by having huge amounts of data. I mean, you know, millions and millions of raw images are needed to be able to identify those features without having an engineer either hand identify them or hand craft them. So really the architectures have been around, but they just haven't had the data to do the training. And we haven't had the compute capacity to be able to effectively train these models, which usually take, you know, hours and sometimes even weeks to train.
Nick: So, supervised is having examples and then unsupervised isn't.
Nick: Is it sort of a brute force method of guessing and receiving answers and iterating, so that the machine gets smarter and smarter based on correct versus incorrect responses?
Nathan: Yeah. I mean, you don't have, you know, labeled outputs that are achieved via a specific approach, but you can always have on the other end a curator that's saying, yes this is good, no this is bad. And essentially the more raw data or the more correct examples it sees, the better it can tune the variables that it's adjusting in order to get the result that the user wants. So, you know, in the image situation, you can essentially, on the other end of the model, say, oh, you called this a dog but no, it was actually a cat. And then, in the deep learning situation, if there's a specific layer in that multi layered network that was weighted in a way that influenced the decision for the network to output a cat versus a dog, then it will go back and tune those layers to fix it. And then, you know, the more iterative passes you go through, the better you tune the algorithm to output the right results.
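That correct-versus-incorrect tuning loop can be sketched with the classic perceptron rule: a single "neuron" nudges its weights only when it labels a training example wrongly. The data (1 = dog, 0 = cat) and learning rate here are purely illustrative:

```python
def train_perceptron(examples, lr=0.1, epochs=20):
    """Adjust weights only when the prediction is wrong (the perceptron rule)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for feats, target in examples:
            pred = 1 if w[0] * feats[0] + w[1] * feats[1] + b > 0 else 0
            err = target - pred                        # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * x for wi, x in zip(w, feats)]
            b += lr * err                              # nudge toward the right answer
    return w, b

# Toy (eye_size, snout_length) data: 1 = dog, 0 = cat
data = [((2.0, 6.0), 1), ((2.2, 5.5), 1), ((1.0, 2.0), 0), ((1.1, 2.3), 0)]
w, b = train_perceptron(data)

def classify(feats):
    return 1 if w[0] * feats[0] + w[1] * feats[1] + b > 0 else 0

print([classify(f) for f, _ in data])  # [1, 1, 0, 0]
```

Backpropagation in a deep network is a much richer version of the same idea: the error at the output is traced back to the layers that caused it, and their weights are tuned.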
Nick: Thank God for Moore's Law, because the amount of data and the amount of processing that this is going to require is…
Nathan: Yeah, exactly. In the DeepMind AlphaGo example, there were two neural networks used, and each of those actually took about four weeks to train on 50 GPUs, running on Google's cloud compute. It's just not possible to do this in a reasonable amount of time without that infrastructure, and without the, I think it was 30 million positions of self-play that it was trained on. That corpus of data just doesn't even exist in the real world; it had to be generated. And that was a solution to get around overfitting, which is essentially creating a model that's so good at predicting on the basis of the data that you've given it, but then when you give it a new data set, it doesn't actually perform as well, because it hasn't seen enough examples. So they had to generate that much raw data from self-play to overcome that. Looking back 10 or 20 years, it just wasn't possible.
Nick: Yeah. With that example you've mentioned a couple of the most famous public examples of AI that we know about. Would you be willing to talk about maybe the top three to five best-known examples of AI?
Nathan: Yeah, I think AlphaGo is probably the one that's hit the headlines the most. That's where a team at Google DeepMind essentially built an approach to have a computer play Go, the ancient Chinese game, which is extremely complex and otherwise can't be tackled with just a brute-force search approach, because at every move in the game, I forget the exact numbers, but there are over 200 legal moves to evaluate, and again on the order of hundreds of moves in the whole game. So at every stage you can't possibly compute all the potential moves, and then the potential moves after that, to figure out which is the best move to make.
Nick: It's like orders of magnitude more complicated than chess.
Nathan: Yeah, exactly. There are more possible board positions than there are atoms in the universe.
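To make the comparison concrete, here's a quick back-of-the-envelope calculation using commonly cited rough figures (about 35 legal moves per position over ~80 plies for chess, about 250 per position over ~150 moves for Go, and ~10^80 atoms in the observable universe):

```python
# Rough game-tree sizes: branching_factor ** game_length.
# These are coarse, commonly cited averages, not exact counts.
chess_tree = 35 ** 80     # ~35 moves per position, ~80 plies
go_tree = 250 ** 150      # ~250 moves per position, ~150 moves

atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

print(len(str(chess_tree)) - 1)      # → 123: chess tree ~10^123
print(len(str(go_tree)) - 1)         # → 359: Go tree ~10^359
print(go_tree > atoms_in_universe)   # → True: brute force is hopeless
```

The gap between 10^123 and 10^359 is why techniques that worked for chess (deep search plus hand-tuned evaluation) don't transfer to Go, and why AlphaGo needed learned neural networks to prune the search.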
Nathan: So it's just ridiculously complicated. That's the best example. And then before that, again from a team at Google DeepMind, was a deep reinforcement learning approach to learning to play Atari video games. The best example is Breakout, where you've got this little paddle floating left and right at the bottom of the screen, and there's a bouncing ball that it has to hit in order to break bricks at the top of the screen. The goal is a strategy that breaks all the bricks without letting the ball fall past the paddle. The only inputs the machine was given were the raw pixels, plus a goal, technically called the reward function or the value function, which is maximizing the score. So it has to learn over time which moves to make in order to optimize the score. And what you see is that over hours of training, this computer, without seeing previous examples of how the game is played, basically figures out the optimal strategy to win the most points: tunneling around the side of the layer of bricks so that the maximum number of bricks are broken before it has to rebound the ball again. It's just an amazing video to watch, and you realize how essentially creative a machine has gotten at optimizing a certain task. Before that, it was probably image recognition techniques on either video or image classification, whether it's detecting faces on Facebook or detecting dogs and cats at Google. And speech recognition has also massively improved since the early days of Siri, which were mostly rules-based statistical methods; today you can speak into Google's voice recognition app and it will pretty accurately understand what you're saying, even in a noisy environment.
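The "raw input plus a reward function" setup can be sketched in miniature. The toy catch-the-ball environment and tabular Monte Carlo learner below are stand-ins of my own invention, far simpler than DeepMind's deep Q-network, but they show the same trial-and-error loop: the agent sees only the state and a score, and discovers a good strategy by itself.

```python
import random

random.seed(0)

# Toy stand-in for the Atari setup: a paddle on a 5-cell row gets
# 4 moves to end up under a ball in a random column. The agent sees
# only the state (ball, paddle) and a terminal reward.
W, STEPS, ACTIONS = 5, 4, (-1, 0, 1)
Q = {}  # state-action value table, defaulting to 0.0

def q(s, a):
    return Q.get((s, a), 0.0)

def run_episode(eps):
    ball, paddle = random.randrange(W), W // 2
    visited = []
    for _ in range(STEPS):
        s = (ball, paddle)
        if random.random() < eps:                       # explore
            a = random.choice(ACTIONS)
        else:                                           # exploit best known move
            a = max(ACTIONS, key=lambda act: q(s, act))
        visited.append((s, a))
        paddle = min(W - 1, max(0, paddle + a))
    reward = 1.0 if paddle == ball else -1.0            # caught the ball or not
    for s, a in visited:                                # credit every move taken
        Q[(s, a)] = q(s, a) + 0.1 * (reward - q(s, a))
    return reward

for _ in range(20000):                                  # learn by trial and error
    run_episode(eps=0.2)

# Greedy play after training should catch most balls.
wins = sum(run_episode(eps=0.0) > 0 for _ in range(200))
print(wins, "/ 200 caught")
```

The agent was never told "move toward the ball"; that strategy emerges purely from maximizing reward, just as Breakout's tunneling strategy did.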
Nick: Right. Yeah, you've mentioned that a lot of these technologies are categorized within problem domains, so structuring things around images or text or language. Is that often why we see investors saying that they specialize in machine learning and deep learning, versus predictive analytics, versus natural language processing and semantics, versus speech, versus vision, for instance?
Nathan: Yeah, it might be. Definitely the area that has attracted the most interest in the first commercial wave has been predictive analytics. We've moved from a world where the goal was to get everybody online and create digital products, which then generate all these signatures of behavior that people have with those products, which registers a bunch of data describing that behavior. And then we've gone from just tracking this data, to visualizing it in a dashboard frenzy, to now trying to understand what future results might happen on the basis of product usage, which is this field of predictive analytics. Closely associated with that have been applications in sales and marketing, and in the gaming world and in e-commerce, predicting trends. Further to that, advertising technology, because nowadays buying and selling ads online is essentially the same as financial trading: you have a market, a seller, and a buyer, and all of it is essentially algorithmic or programmatic. And then I think the last area that's gotten a lot of interest is financial services, mostly from a lending and credit-scoring perspective, because there we're dealing with data from a lot of different sources that have intrinsic properties potentially indicative of creditworthiness, and really extracting as much value as we can from that data, something that is otherwise very difficult to do if you just use the knowledge we've accumulated over time by handling cases. Instead, more advanced methodologies can surface things about the data that aren't immediately apparent simply by crunching the numbers.
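A minimal sketch of the credit-scoring idea: logistic regression learning repayment probability from applicant data. The features and numbers below are entirely invented for illustration; real scorecards draw on far richer data sources and more sophisticated models.

```python
import math

# Hypothetical applicant features (normalized income, normalized count
# of late payments), with label 1 = repaid. All numbers are invented.
applicants = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.7, 0.1], 1),
              ([0.2, 0.9], 0), ([0.3, 0.8], 0), ([0.1, 0.7], 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5

def score(x):
    """Estimated probability of repayment."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Logistic regression by gradient descent: the weights end up encoding
# how strongly each signal in the data relates to repayment.
for _ in range(2000):
    for x, y in applicants:
        err = score(x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

print(round(score([0.85, 0.15]), 3))  # high: resembles the repaid group
print(round(score([0.15, 0.85]), 3))  # low: resembles the defaulted group
```

The point is the one Nathan makes: the model extracts a relationship from the raw data directly, rather than relying on rules accumulated by handling cases one at a time.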
Nick: Great deep content from #Nathan there in Part 1 of the interview. In Part 2 we will cover questions including:
- An overview of the funding landscape for AI in recent years and the major things that have changed
- The primary sectors wherein AI-based startups are receiving the most funding
- Some of the most interesting applications on both the enterprise and consumer side
- Which VC firms are most active in investing the most capital in AI, both in the U.S. and outside
- Some of Nathan's key findings from his research on AI and that of greater tech
- Some of the most exciting advances in AI from #Nathan's standpoint
- What he thinks are some of the biggest threats regarding AI
- His advice for early-stage investors with an interest in Artificial Intelligence
And we'll wrap up by addressing a couple of final thoughts from #Nathan about this category and what it means for the future.
Until then remember to over prepare, choose carefully and invest confidently. We’ll see you next time.