294. How to Value AI, Data Network Effects vs. Data Learning Effects, and Evolution of the VC Asset Class (Ash Fontana)

Ash Fontana of Zetta Venture Partners joins Nick to discuss How to Value AI, Data Network Effects vs. Data Learning Effects, and Evolution of the VC Asset Class. In this episode we cover:

  • What was the inspiration for the new book?
  • What’s the difference between a Data Network Effect and a DLE (data learning effect)?
  • How do you advise those that are building tech where AI will be a valuable piece in the distant future but doesn’t really play a role until the growth stage or beyond?
  • Tell us about the Lean AI Method, described in the book, and how companies can use it to start implementing AI.
  • How do you feel about companies using “off-the-shelf” AI, like TensorFlow?
  • Why is now the right time for AI to pervade throughout tech and industry?
  • How do you value AI and determine what’s real?
  • How does one put a valuation multiple on a fairly nascent category, where the value can be exponential and the moat can be unbreachable, but there’s not a whole lot of precedent?
  • What are the key data categories or types that are most valuable for startups to be capturing and analyzing from day one?
  • How do you find the right types of people with the right capability and mindset to join a startup if they have no formal training or experience directly with AI?
  • How do you see the asset class evolving?
  • Do you think the rolling fund product can make the leap to institutional investors, or will it continue to serve smaller, retail investors?
  • What’s your opinion on insider rounds?
  • Thoughts on Tiger Global?
  • Does China surpass the U.S. in tech innovation (and AI) over the next decade? Why or Why Not?

The host of The Full Ratchet is Nick Moran, General Partner of New Stack Ventures, a venture capital firm committed to investing in the exceptions.

Learn more about New Stack Ventures by visiting our Website and LinkedIn, and be sure to follow us on Twitter.

Want to keep up to date with The Full Ratchet? Subscribe to our podcast and follow us on LinkedIn and Twitter.

Are you a founder looking for your next investor? Visit our free tool VC-Rank and tell us about your business. We’ll send a list of possible investors right to your email inbox!

Transcribed with AI:

0:00
Ash Fontana is back on the program, joining us today from San Francisco. He is the managing partner at Zetta, an AI-first VC, and Ash is also the author of The AI-First Company: How to Compete and Win with Artificial Intelligence. He’s back with us today to discuss the book and an update on Zetta. Ash, welcome back. Thank you very much for having me, Nick. Yeah, third time; always a pleasure. So bring us up to speed on the firm, Zetta. What are the new developments, any new thesis, over the past couple of years?

0:33
Yeah, we are still doing what we have always done, which is backing companies, as I like to say, pre-traction, post-data. So that is before there’s quantitative evidence of market need, you know, a whole bunch of recurring revenue and repeatable sales processes and whatever else, but after there’s quantitative evidence that the predictions they’re trying to make are working, or promising. So that’s what we’ve always done; you could call that seed-stage, AI-focused, B2B business models. We are on to our third fund, that’s a $180 million fund. The big thing that I guess we’re formalizing is the focus on Europe and the UK. I’ve been going back and forth to Europe and the UK for about eight years making investments; the very first investment I made, setting up on AngelList, was in the UK, Tractable, and that’s already worth a billion dollars. So I’ve been doing that for years. But, you know, that plane trip gets a bit much after a while. And also, at this point it’s undeniably true that Europe is the center of machine learning research: there are more machine learning researchers there, more papers published there, more software engineers there, and whatnot. Now, of course, it’s a little bit unfair to say Europe is one place, but effectively it sort of is from an investor’s perspective, in terms of figuring out where and how to spend your time and how to support companies. So that’s something we’ve certainly formalized. We’ve always done it; as I said, my very first investment, back in 2015, was in the UK. But now I’m here full time, and that’s what we’re changing. And then the other thing is refocusing on platforms, that is, companies that are developing tools and infrastructure for data scientists and machine learning engineers, so fewer vertical applications of machine learning. We’re still absolutely willing to back and help companies in specific verticals that we know very well, like insurance, fintech, and certain parts of healthcare and industrials, but we’re really focusing on tools and infrastructure for machine learning engineers and data scientists. So those are the two big changes. Got it.

2:53
So you’re spending most of your time in Europe now? I thought you were in San Francisco. Yeah, no, I’m not in San Francisco. Okay, where are you today, Ash?

3:02
Well, I’m moving around all over the place, so the answer will change between today and tomorrow. I’m spending time between London and Zurich and Munich and Milan, places like that, and then popping back to San Francisco a lot, too. And I’ll be in New York next week. But anyway, that’s just this week and next week.

3:21
Very good, you’re a busy man. It sounds very glamorous, you know, traveling around Europe; it’s been a long time for the rest of us. So I want to talk about the book. I think a good place to start is: give us the inspiration and the background on the book, and a brief overview of its framework.

3:46
Yeah. So look, I’ve had the benefit of working for founders that are really pioneering the development and application of machine learning in the real world. Now, you know, I did a bit of that myself when I was running my own company, but at Zetta, starting in 2013, we started to work with companies that were among the first to do it. And you really learn the most about how to do that on the ground: how to build a machine learning research team integrated with a product development team; how to sign a contract with a customer that allocates data rights between the company and the customer properly, in a way that makes everyone happy; how to price a product so that you encourage more usage, and therefore more data collection. There are all these things that are idiosyncratic to starting a company that’s applying machine learning in the real world, and they’re very different from just starting a software company. I’ve learned about those things and solved problems around those things for a long time now, and accumulated this knowledge in the boardroom, so to speak, and it was time to share it; there was enough of it to make up a book. And to the second part of your question: I had really coalesced on a few big ideas that hadn’t been communicated or developed by anyone else. One of them, for example, is the notion of a data learning effect. And that is a completely new type of competitive advantage. It’s not a network effect. It’s not a scale effect, like “data is the new oil.” And it’s not just a learning effect, like a consulting company that gets better and better at something. It’s not like any other form of competitive advantage that we’ve thought of in the past, certainly not like a brand or a patent or something like that. It’s a new type of moat; and it’s not even a moat, it’s more like a loop. So, you know, I’m not going to do it justice trying to explain what it is and how it operates here; that’s why I wrote a whole book about it. But it’s this automatic compounding of information that happens when you get a critical mass of data, you have the capabilities to turn that data into information by labeling it, cleaning it, and whatever else, and then you build a self-learning system that learns over and over again from that information, so that it can automate or predict something. And this is the type of competitive advantage that we all sort of know is out there, but no one really knows how to articulate, doesn’t have language for how to build it or how to measure it; and those are the other three sections of the book: what is it, how do you build it, how do you measure it?

6:39
Interesting. So I guess the primary difference there between the data learning effect and a data network effect is that it’s not reliant on a network, it’s just reliant on more data? Or can you...

6:53
That’s a good nuance to tease out, because a data learning effect is something that encompasses a data network effect. A data learning effect is three things: a critical mass of data, the capabilities to process it into information, and a self-learning system. The first two things don’t involve data network effects; you can just go and get a whole big bucket of data from somewhere. And the second thing doesn’t necessarily require a data network effect, because you can clean data manually, you can clean data with one-off scripts, you can do various things like hiring a data labeling team. The third thing, the self-learning system, is what involves a data network effect, and that is where the addition of data makes the whole system more useful. Now, we can all recall what a network effect is, which is where the addition of a node to the network makes the whole network more useful: the more people with telephones, the more useful the telephone system is; the more people on Facebook, the more useful Facebook is to you, with more people to contact, more information in your feed, etc. That’s a network effect; a data network effect is different. And a data learning effect requires a data network effect for the self-learning-system part of it, that automatic compounding of information, but it doesn’t mean anything if you don’t have good data, a critical mass of good data that you’ve cleaned and turned into information to feed that system. So it’s a component of it.
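To make that loop concrete, here is a minimal sketch in Python, with synthetic data and invented numbers, of the self-learning-system component: each cycle of usage contributes new labeled examples, the model retrains, and accuracy on a fixed test set improves, which in the real version is what attracts more usage and therefore more data.

```python
# Minimal sketch of the self-learning loop behind a data learning effect.
# Synthetic data and hypothetical sizes; an illustration, not a production system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def draw_labeled_batch(n):
    """Stand-in for new labeled usage data contributed by customers each cycle."""
    X = rng.normal(size=(n, 5))
    signal = X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5])
    y = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)
    return X, y

X_train, y_train = draw_labeled_batch(50)    # initial "critical mass" of data
X_test, y_test = draw_labeled_batch(5_000)   # fixed evaluation set

for cycle in range(5):
    model = LogisticRegression().fit(X_train, y_train)   # the system re-learns
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"cycle {cycle}: {len(y_train):>5} examples -> accuracy {acc:.3f}")
    # Better predictions -> more usage -> more data next cycle.
    X_new, y_new = draw_labeled_batch(200 * (cycle + 1))
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])
```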

8:31
How do you decide what is enough, from a critical-mass-of-data standpoint? This is an issue that we encounter: we invest in a lot of startups, many of them are software based, and many of them have, for instance, an AI component that’s critical to their vision. But early on in the startup’s growth, when they’re at the pre-seed or seed stage, which I know you’ve had a lot of practice investing in, the AI component doesn’t exist; it’s not present. I know that you look to companies that are AI first, companies where AI may be the top priority and the first thing on the agenda during meetings. But many of these early-stage companies don’t have the critical mass of data yet, right? They haven’t built all these algorithms and loops, and maybe the MVP is just rooted in automation for the customer that makes their life a great deal better; further down the path, they can build the infrastructure and the talent required to add in AI as a moat and a differentiator. What are your thoughts on that, and on the critical mass of data required to leverage some benefit from the AI?

9:56
Yeah, so your experience very much maps to my experience, and therefore very much maps to the sequence of frameworks in the book. The book starts with explaining what this data learning effect is, but then goes straight into: okay, what’s the very first thing you can do, the very first thing you can automate, the very simple prediction you can make? And I call that lean AI. And that’s been your experience, too: often the instantiation of AI at a company doesn’t really look like AI; it looks more like a script, a fairly simple statistical model. But that allows you to figure out, all right, to get closer to something that’s more automated, that is self-learning, do we have to get more data? Do we have to build more features, put more algorithms in this model? Or do we have to do something else, like hire a different type of person, or build a product on top of this that gets feedback from customers? So yes, often AI doesn’t start out looking like AI; it starts out looking like something else. But that something else then collects more data, to then build a more powerful system, and so on and so forth. Now, to your question of what’s a critical mass of data: it’s completely dependent on the problem and the method you’re using. Some prediction problems require very little data to solve; you can develop a really reliable prediction just by looking at a little bit of historical data, with a model that’s really powerful because the data might not be representative of a long period of time but might have lots of dimensions to it, so the model can learn very quickly from it. But sometimes, especially with things like vision and speech recognition problems, you need a huge amount of data to learn over, because it’s such a dynamic representation of reality, someone’s voice or picture; you have to analyze so many parts of it in order to discern a pattern, so many pixels, so many colors, so many different points in the sound wave. So different problems require a different critical mass of data. How you instantiate an AI system is different in every case, but usually simpler is better at the start, and then what you do next is the real question. The point is: from your initial experiment, what did you learn that helps you figure out the very next thing, the very next thing being what you should invest in next? And that’s where the book goes after lean AI. All right, if you want to collect more data, here are 30 ways you can do that: data labeling, customer contracts, data coalitions, public data, etc. Or if you actually have a critical mass of data but the models aren’t learning well over it, well, here are some different model types to consider. And if you need to build these different types of models, or need the infrastructure around these models, here are the people you then have to hire to do that. So yeah, my experience maps to yours: often starting simple is really good. But the question is, what did you learn from that simple experiment, from that initial thing you did for your customers, that gets you closer to developing a data learning effect, that truly self-learning system?
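As a rough illustration of why “critical mass” is problem-dependent, this sketch (synthetic data, assumed sizes) traces a learning curve on a deliberately low-dimensional problem, where accuracy saturates with very little data; a vision or speech task would keep improving across orders of magnitude more examples.

```python
# Learning-curve sketch: how much data counts as "critical mass" depends on the problem.
# A low-dimensional synthetic task saturates quickly; high-dimensional tasks do not.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_data(n):
    X = rng.normal(size=(n, 2))   # only two informative dimensions
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_test, y_test = make_data(10_000)
for n in (20, 50, 200, 1_000, 5_000):
    X, y = make_data(n)
    acc = accuracy_score(y_test, LogisticRegression().fit(X, y).predict(X_test))
    print(f"{n:>5} training examples -> accuracy {acc:.3f}")
```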

13:38
You touched on lean AI, or the lean AI method; this is something you discuss in the book. Can you give us a little more color on what that is and how companies can apply it?

13:50
Yeah. So lean AI, just like the lean startup method, at its core is a series of questions. And, you know, I’m sure you play this role, and many people in your audience play this role as an investor, which is trying to ask good questions that get you closer to figuring out what your customers really want. Now, in the context of a lean startup, those questions are around what features they want: what buttons do they want, what would be useful for us to calculate for them quickly, etc. In the context of an AI-first company, those questions are around what predictions they want you to make, or what they want you to automate. So, in a very abstract way, lean AI is just a series of questions that gets you to the following: a way, with one data set, one type of modeling technique (it could be a very simple statistical technique), and one report that you produce on one machine (you don’t need some massive cloud-based system to do it), to get to a prediction, or a little thing that automates something. Now, it could be a low-accuracy prediction, or it could be a low degree of automation, but it gets you to something that you can put in front of a customer and ask: would this prediction be useful to you at this level of accuracy? And they can say that prediction is actually totally irrelevant: I don’t need to know whether this shirt will be popular or not; or whether this delivery window will be 20 minutes or two hours actually doesn’t make a difference to my customers. Or they will say: oh, that would be really valuable, but we’re only going to tell our customer that the package is going to arrive in a two-hour window if we’re 95% sure, and you’re saying you’re 85% sure. So get it to 95, and then we will pay a lot of money for this, because then we can actually tell our customers the package is going to arrive in a two-hour window, and they’ll be really happy, and we won’t get many complaints when it doesn’t, because that will only be 5% of the time. So that’s what lean AI is: a series of questions to figure out how, using just one data set, one machine, one technique, and generating one report, you can produce something you can put in front of a customer and get feedback on, so that you can figure out where to invest next, whether in a totally different model, a totally different data set, or a totally different product that surfaces the prediction, or whether you’re on the right track. And that’s the whole point of it.
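A minimal sketch of that recipe: one data set, one simple technique, one machine, one report, then a comparison against the customer’s stated accuracy bar. The delivery-window framing and the 95% threshold come from the episode; the data, features, and model here are invented for illustration.

```python
# Lean AI sketch: one data set, one simple statistical technique, one machine, one report.
# All data and features are synthetic; the 95% bar is the customer's stated threshold.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# One data set: historical deliveries (distance_km, stops, rush_hour flag) and
# whether each arrived inside the promised two-hour window.
X = rng.uniform(size=(1_000, 3)) * np.array([50.0, 10.0, 1.0])
noise = rng.normal(scale=0.3, size=1_000)
on_time = (0.02 * X[:, 0] + 0.05 * X[:, 1] + 0.4 * X[:, 2] + noise < 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, on_time, random_state=1)
model = LogisticRegression().fit(X_train, y_train)   # one simple technique

# One report to put in front of the customer.
accuracy = accuracy_score(y_test, model.predict(X_test))
CUSTOMER_THRESHOLD = 0.95   # "we'll only promise a two-hour window at 95% sure"
print(f"Prediction accuracy: {accuracy:.1%} (customer needs {CUSTOMER_THRESHOLD:.0%})")
print("Next investment:",
      "ship it" if accuracy >= CUSTOMER_THRESHOLD else "more data / better features")
```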

16:20
Do you find that most of the companies you’re working with are tech first, meaning they’ve figured out how to create some novel, interesting AI and then go search for the market and the problems that tech can be used against? Or is it often starting with the problem, and with the customer set in this instance, and then working back into the right AI-based solution that addresses it? Or both?

16:51
Yes. So most of the companies we work with that are applying machine learning or artificial intelligence to a specific vertical are actually 50/50. Usually one of the founders has a lot of experience in the industry and has come across a problem that they think they can solve with AI: we think we can process this insurance claim, we think we can track our inventory in our warehouse, we think we can use a robot to pick this thing up, we think we can put a camera on a forklift so it doesn’t run into someone. They’ve become acutely aware of really significant problems in their industry, and then they’ve partnered with a founder from a machine learning research background. So they bring the data, the heuristics, the knowledge of the customer, and how a system could integrate with the customer, and then the researcher brings the models, basically, and maybe some of the product development skills as well. So a lot of the teams we see applying AI to a specific vertical are 50/50, one founder from each. Now, a lot of the teams we work with that are building machine learning tools and infrastructure are completely from the field of machine learning and data science, because they are the customer, and they’re also the people who build it. They have, for example, tried to build some massive predictive model and come across the same problem again and again and again when they’ve tried to deploy it, and so they make a new deployment tool or something like that. So yeah, it’s usually half and half for most of these vertical companies, and then 100% data science and machine learning for the horizontal ones, so to speak. Got it.

18:40
Yeah. Just to round out the point on lean AI: how do you feel about companies that are using off-the-shelf AI, something like TensorFlow? On one hand, they don’t have to build it themselves from the ground up; on the other hand, their data, to some degree, is no longer proprietary.

19:02
So firstly, yeah, good for them; it’s sort of crazy not to try to use a lot of the tools available. It’s just a question of degree, right? Are you using it as a notebook to just do your work in, using it to deploy and scale a system? Or are you using a pre-trained model? Now, the former is probably something you’re going to do for quite a long time, because you’re probably not going to hit a deployment problem or a scaling limit for a while, and when you do hit it, sure, you’ll have to build your own system, but until you do, there’s no need. The latter is something that will probably cause you to hit a limit pretty quickly. Say you use a pre-trained model to, for example, recognize a certain color on a production line, a piece of fruit that’s brown and not red, so you want to remove it from the production line. That is something you could use an off-the-shelf model for, probably, to get to a certain degree of accuracy. But when you’ve got a million pieces of fruit flying past the camera per hour, that model might not work very well; the accuracy might drop at a certain volume, frame rate, or camera quality. And what you’re going to have to do at that point is probably rebuild it from scratch, and to do that, you need to actually understand how it all works. So I guess all of this is to say: I’d delineate between tools and infrastructure on one hand and pre-trained models on the other. It’s obviously very smart, and indeed necessary most of the time, to use someone else’s tools and infrastructure, and pre-trained models might be a good place to start. But I have found that often the state of the art, and what’s required to make very reliable, scalable systems that deliver a huge amount of value around a specific problem in the real world, is that you eventually have to develop your own models from the ground up.
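For context on the pre-trained-model case, this is roughly what the off-the-shelf starting point looks like: a generic Keras/TensorFlow snippet, not anything specific to the production-line example, with an ImageNet model pressed into service as a first-pass classifier. The fruit-sorting scenario above is where such a model might top out and force a ground-up rebuild.

```python
# Off-the-shelf starting point: a pre-trained ImageNet classifier via Keras.
# Generic illustration only; a real production-line system would be trained
# on the company's own data once volume and accuracy demands exceed this.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")  # pre-trained model

frame = np.random.rand(1, 224, 224, 3) * 255.0   # stand-in for one camera frame
inputs = tf.keras.applications.mobilenet_v2.preprocess_input(frame)
preds = model.predict(inputs)

top3 = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
for _, label, score in top3:
    print(f"{label}: {score:.2%}")
```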

21:08
And when you’re vetting potential startups for investment, how do you think about valuing the AI? And then also, how do you determine the difference between what’s real and what’s hand-waving, lots of acronyms behind the name and lots of thoughts, theories, and concepts, but no real AI behind the vision?

21:40
Yeah. So on the former, I often say my job is to price the risk that the AI works. And what does that mean? It really means pricing the risk that a system will be able to generate a prediction of sufficient accuracy that something can be automated, so that the underlying business goes from a low-gross-margin business, because they’re doing a lot of things manually, to a high-gross-margin business, because they automate those things away, so that investors put a higher valuation multiple on it. To say that another way: if I invest in a company with a low degree of automation, but I know they can get to a higher degree of automation, because I know the machine learning problem they’re trying to solve, the one that will increase automation, is tractable, because I know how the models work, or I know that more data will make it better and I know where to get that data from, then I can invest while it’s a low-margin business with a low valuation multiple and sell it, so to speak, whether that’s through raising more capital from another investor or selling the company, as a higher-margin business that commands a high valuation multiple. So that’s how I value these businesses: I make some assumptions around what their margin can get to, and therefore how the market will value them. Now, of course, this is my whole job, and explaining what I do and how I do it, all the work I do every day, in three minutes is impossible; there are lots of different checklists and methods for figuring out all the components of what I just said, and databases of valuations and whatever else. But that’s the gist of it. The second part of your question is obviously very much tied to the first, and that is: how do you even know it works? There are lots of ways to figure out if an AI system is working: accuracy metrics, precision and recall metrics, lots of other metrics that are out there and fairly well known. I cover some of those in the book, give people a quick crash course on them, and try to explain them in a very straightforward way, but only to get to the following: okay, what’s the ROI of this system? At a certain level of accuracy, what’s it really going to generate for customers? And that’s the hard thing to figure out; again, there’s a whole chapter on that in there, but that’s what you want to get to. And so, in a sense, to wrap up this answer, it’s very straightforward: there’s nothing special about AI-first companies in terms of figuring out whether they’re going to work. It comes down to the same question: are they doing something valuable? Is the money you spend on this AI-first company’s product generating a return for you?
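A compact sketch of the metrics named here and the ROI question they feed into. Precision and recall follow their standard definitions; every count and dollar figure below is invented for illustration.

```python
# Precision/recall crash course plus the ROI question they feed into.
# Counts and dollar values are hypothetical.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp)   # of the items the system flagged, how many were right
    recall = tp / (tp + fn)      # of the items it should have flagged, how many it caught
    return precision, recall

# Hypothetical: a system that automates insurance-claim processing.
tp, fp, fn = 850, 50, 150
precision, recall = precision_recall(tp, fp, fn)
print(f"precision {precision:.1%}, recall {recall:.1%}")

# The question that actually matters to the customer: return on investment.
saved_per_automated_claim = 40.0    # manual handling cost avoided (assumed)
cost_per_wrong_automation = 400.0   # rework and complaint cost (assumed)
net_value = tp * saved_per_automated_claim - fp * cost_per_wrong_automation
print(f"net value per period: ${net_value:,.0f}")
```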

24:52
Related to your point on valuations: throughout the book you discuss this concept of loops, not moats, with respect to data. I’ve been thinking a lot lately about Warren Buffett and, of course, compound interest. The guys over at the Acquired podcast just did a nice series on Buffett and Berkshire Hathaway, and I kind of feel like compound interest is to finance as compound data is to technology, and we’re only scratching the surface of what’s possible here. So Ash, how does one put a valuation multiple on a fairly nascent tech category, where the value can be exponential and the moat can be unbreachable, but there’s not a whole lot of precedent?

25:36
That is such a good question. And I think we’ve seen people very gradually realize that the multiples that were put on these companies were completely wrong. Just look at the valuations of these companies in the public markets now. Google has gone from, at the time of IPO, a company worth a fraction of what it is now, because people have realized over time: hang on a second, they are so far ahead of everyone else in terms of their predictive accuracy around search, around recommendations, around putting things in front of you that you want. So far ahead of everyone, because they started collecting data and building models around this over 20 years ago, that no one has a chance of catching up with them. It’s, I guess, the best business on the internet, and perhaps one of the best businesses of all time, if not the best, because it has a data learning effect. And they have a data learning effect because they were AI first, because they were so deliberate from day one about getting the right people to build the right models, and giving away, in many cases for free, products that collected the right data to feed those models. And so we’ve realized over time: hang on a sec, this is unassailable. We started valuing Google, as public markets investors, sort of like an ads company, sort of like a tech company, but realized over time that no one’s going to catch up for a really long time, so we can value it based on revenues that are further and further out, because those revenues are actually very likely to be realized. So I think that’s one element of it: we have to change our time horizon. And you’re starting to see this in how the market is valuing certain companies, looking not just one or two but five years ahead in terms of forward revenue. If you look at investors that are being incredibly aggressive right now, like Social Capital and others, they’re valuing businesses based on five-year forecasts, and they’re doing that because they can see a moat around a business that’s very different from what others see. And they’re able to do that because they have a fundamental understanding of what generates this moat, and in some cases it’s a data learning effect. So I think the main thing is just to think about the horizon. If it’s a truly different type of moat that lasts a lot longer, then you can value the business using a different basis: same method, different basis. So that’s one thing that changes. Another thing that changes, on the other side of the equation, is the cost it takes to build these businesses. Especially as an early-stage investor, you’ve got to think about: what’s my cost of capital, and how much capital do I have to put into this company before the data learning effect kicks in? Because until then, I’m going to be spending a lot of money on data, a lot of money on very talented people to develop these models, and only then building the software that sits on top of those models. And so it costs a multiple, sometimes an order of magnitude more, of what it costs to build a software company. That can be worth it if the data learning effect is strong and your cost of capital is low enough, but again, you’ve got to change your horizon on that side of it, too. So I think it’s just about horizons.
I mean, that’s what moats are all about, right? For how long will your revenue be relatively protected? And I think we’re slowly realizing that.
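A back-of-envelope version of the “change your horizon” argument, with every input assumed for illustration: valuing the same business on next-year revenue versus discounted five-year-forward revenue, where the data learning effect is what makes the long-horizon growth credible.

```python
# Horizon arithmetic sketch: all inputs are assumptions for illustration.
revenue_today = 10e6   # current annual revenue
growth = 0.80          # annual growth a durable data learning effect might protect
multiple = 15          # revenue multiple applied at the chosen horizon
discount_rate = 0.20   # investor's cost of capital

revenue_y5 = revenue_today * (1 + growth) ** 5
value_on_next_year = revenue_today * (1 + growth) * multiple
value_on_y5_discounted = revenue_y5 * multiple / (1 + discount_rate) ** 5

print(f"year-5 revenue:                 ${revenue_y5 / 1e6:,.0f}M")
print(f"valuation on next-year revenue: ${value_on_next_year / 1e6:,.0f}M")
print(f"valuation on discounted 5-year-forward revenue: "
      f"${value_on_y5_discounted / 1e6:,.0f}M")
```

Under these assumptions the forward basis supports a valuation several times higher, which is only justified if the moat actually protects that growth for the full horizon.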

29:17
We recently had Amy Nauiokas from Anthemis on the show, and we talked a bit about fintech versus “tech fin”: finance-first tech companies versus companies where finance is not the mission but plays a major part of the workflow or the customer journey and adds a bunch of value, established companies integrating embedded finance into what they do. My question to you is: do you see, in the future, embedded AI as a part of every company? And how far off are we from that?

29:55
Yeah, I do. There’s just such an imperative to compete at a different level now than we had to compete at before, and that is to adapt to the economy, to adapt to customer preferences, and to adapt more quickly. And what is adaptation? It requires learning: learning how customer preferences are changing, learning how the environment around us is changing, the supply chain, whatever else. And what is fast learning? Intelligence. Now, you can use your brain to learn really quickly about what customers need and what’s changing in the supply chain and whatever else, or you can augment your intelligence with a form of artificial intelligence that is out there perceiving things in the world, collecting updates on what products are moving where and which customers are buying what, and then serving you predictions. So there’s an imperative to move more quickly than ever and to learn quicker than ever, and we’re limited by what we have in our heads, so we have to use something else to help us learn more quickly. And I think everyone sort of gets that now. It’s an imperative that everyone feels, and an opportunity that everyone sees, to use artificial forms of intelligence to augment our own. And if you don’t do it, someone else will, basically. So I think all companies will become AI first. It’s the case that today the only trillion-dollar companies are AI first, and if that’s not a sign of what’s to come, I’m not sure what is. I really do believe that we’ll all have to gradually augment our own intelligence with something artificial, with one of these self-learning systems, in order to keep up with each other and bring the future forward, so to speak, in a really productive way, so that we can get what we want better, faster, cheaper.

31:56
Ash, talent and sophistication in AI, whether it be coding, data science, etc., is still in pretty early innings. How do you find the right types of people, with the right capability and the right mindset, to work for a startup, when they may have no formal training or experience directly with AI?

32:17
Yeah. So a lot of it comes down to the same things it comes down to at non-AI-first companies: what’s your mission, what are you really trying to solve, and do people care about it. But in the context of an AI-first company, there’s an interesting sort of talent loop that I’ve observed, which starts with companies that have access to a unique data set. That doesn’t necessarily mean an absolutely massive data set, but a data set that is very likely to be informative to a self-learning system around a specific problem, like diagnosing kidney failure, or predicting kidney failure I should say, or predicting septic shock, something really important like that. A company that has access to a data set that could be informative to a self-learning system that can do something like that attracts people who want to solve those really important problems. Those people then build better models, those models go into production and convince customers that they can be really useful, and then customers contribute more data, which attracts more people, which leads to better models, and so on. So I think we’re getting to the point, or what I’ve observed, is that data can beget talent, which can beget more data. So that’s one thing: if the mission is right and the company culture is good, and then you have a really interesting data set that could help solve a really important problem, you get into this nice talent loop. I’ve seen that happen. The second thing I think is really important is to be really cognizant of the different organizational structure necessary to run an AI-first company. That means finding the right balance: having the data scientists and machine learning engineers, the people who have to figure out how to build predictive models, close enough to the real world that they can figure out the real causes of things and program those heuristics into models, but also giving them the ability to go back into the lab, so to speak, and actually do that work. So it’s figuring out the right balance of distributing data science and machine learning talent out into the field, while also giving them an environment where they can go back and work with each other, using the best collaborative systems and the best high-performance computing systems and whatnot, to actually develop the models. That’s the second thing. And the third thing is being cognizant of roles. A product manager is very different from a data product manager: a product manager is all about asking customers what features they want and then figuring out how to get engineers to build them; a data product manager is all about figuring out what data a model needs to get better, and then figuring out with engineers how to build features into the product that collect that data, something as simple as a button, or as complicated as a whole subsystem that collects that data. A data infrastructure engineer is very different from an infrastructure engineer, and you’ve got to be cognizant of that: they have different skills and experience with different systems. One might be an expert in Snowflake; that’s very different from a run-of-the-mill infrastructure engineer who just racks servers.
And so being cognizant of the difference between these roles will allow you to attract people who can actually do these things. Because if you’re not cognizant of the nuance around their skills, then they’re probably going to feel underappreciated and under-resourced.

36:10
I want to transition a bit here, but before we do, maybe a final thought on the book. I’d love to hear what belief, whether commonly held or not, you had going into the writing of the book that was disproven. What was kind of a shock or surprise?

36:33
Yeah, that’s a really good question, and the shock led me to scrap an entire section of the book. My belief was that people needed to understand all of the traditional forms of competitive advantage, supply-side network effects, demand-side network effects, brand, patents, regulatory capture, in order to understand this new type of competitive advantage, data learning effects. And actually, after writing, with the help of a researcher, a whole section giving everyone a crash course in competitive advantage, like the sort of thing you’d get at business school, and then leading it into what a data learning effect is, I realized that the data learning effect is so different from anything we’ve ever seen before that you don’t need to understand any of that. A data learning effect is not an analogy; it’s a new concept, an actual new idea. So I just scrapped that crash course; no one needs to go to business school again. And if anyone really wants to read that stuff, Michael Porter knows more about it than I do, so they can go and read all of that if they want. A data learning effect is completely different. It was also a function of pages, right? I had given about 50 pages of crash course, when describing a data network effect alone, which as we discussed before is only one component of a data learning effect, already takes 20 pages and a whole bunch of illustrations. So I just thought: let’s get right into it. This is new, this is different, this is big, and it takes time to understand. So let’s go right into it, and you don’t need to know anything about competitive strategy to understand this new type of competitive advantage.

38:35
Awesome. To transition a bit, maybe we’ll do some quick questions on the state of venture capital. You were formative in a lot of the ways the industry has changed, with AngelList and syndicates, so I’d be curious to hear your take on a few different items. We’ll start with venture capital funds investing early stage versus late stage. Why do you think VCs should stick to early-stage investing?

39:04
Somewhat facetiously: because that’s what venture capital is. It’s pricing the risk that a product will meet the market. It is investing in product development before others realize that the product is possible to build, that it can work, and that it will be valuable, that there will be customers for it. And that requires a whole set of skills: technical skills, and a whole set of skills around market understanding, market development, and market assessment. Those skills take a really long time to learn, especially in certain very technical fields, and if you don’t spend all of your time honing them, you’re just going to be bad at the job, I think. Now, growth-stage investing also requires a huge amount of skill. I’ve done it for one of the biggest asset managers in the world, and the valuation techniques you need to use, the market checks you need to do, the due diligence you need to do on companies that are established and have existing operations, that is very hard, and it’s a whole other set of skills. So I think both of these are instantiations of the craft of investing that are very specific and require specific skills that take a decade to develop, and if you just jump between the two all the time, you’ll never be great at either.

40:31
How do you see the asset class evolving? And do you think there will be different products for LPs beyond the standard templates of fund investing and startup equity?

40:43
Yeah, so I think two things. There already are different products for LPs, in that there are different funds focused on different stages, so maybe different brands will offer more of those products. At the moment it’s sort of one brand, one product: the venture capital brands have traditionally offered a venture capital product; now they’re offering a growth product and a private equity product, and some of them even a public equities product. So I think different brands will have different products, but those products are already on offer to LPs, just from other brands that weren’t traditionally venture capital brands. And this has been happening throughout history: public equity funds became private equity funds and started offering that product, or a private equity fund started offering a property fund, like Blackstone or whatnot. So this has always been happening. I guess the really interesting part of the question, and the thing that may happen, is different fund types: different product features that are not around stage or the underlying asset, but actually around the instrument through which they invest. And I think there will be some really good stuff developed there; there already is, and I can point to one such thing: different types of debt. Debt structures that are not collateralized, so there’s no underlying asset, but that still have a good risk-return profile, because the revenues are so predictable, because you can get data on these businesses, subscription and software businesses, over very long periods of time, and that data is highly predictive of a repayment profile. Or different products like backing research companies to combine with existing mainline businesses, create joint ventures, and then spin off new products, so your underlying asset is not necessarily a company but might be a revenue stream from a joint venture between a mainline company with, for example, data, and a research company with, for example, models. So there could be a product like that where, again, LPs are investing in a different underlying asset. And then, of course, there’s the whole world of stuff around crypto, where most of the product innovation is happening, to be honest.
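As a toy version of the uncollateralized-debt idea, the sketch below (all inputs assumed) projects a subscription business’s recurring revenue forward under observed churn and growth, and checks whether it covers a fixed repayment schedule; the predictability of that revenue stream is what substitutes for collateral.

```python
# Toy revenue-predictability check for uncollateralized debt.
# Every input is an assumption for illustration, not a real underwriting model.
monthly_revenue = 500_000.0   # current monthly recurring revenue
churn = 0.015                 # monthly revenue churn observed historically
growth = 0.030                # monthly new-business growth
payment = 120_000.0           # fixed monthly debt service

for month in range(1, 25):
    monthly_revenue *= 1 + growth - churn   # net revenue drift each month
    coverage = monthly_revenue / payment
    if coverage < 1.0:
        print(f"month {month}: coverage below 1x, repayment at risk")
        break
else:
    print(f"24 months serviced; ending MRR ${monthly_revenue:,.0f}, "
          f"final coverage {monthly_revenue / payment:.1f}x")
```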

43:10
Very good. Do you think the rolling fund product will make the leap to institutional investors, or will it continue to serve smaller, retail folks?

43:21
So in a sense, I think it has to, because the people running these funds have access to such good opportunities. They’re able to run these funds while they’re still founders, and founders want to talk to founders and want other founders to invest in their companies, so these funds are getting access to incredible opportunities, and therefore, in a sense, you’ve just got to go where the opportunities are. But I think there’s still a lot of tweaking to do with the rolling fund model before it makes the leap to institutional LPs. The fee structure is a little bit odd, the netting out of returns is a little bit odd, and then there’s the ability of an LP to think over multiple decades in terms of their obligations, for example, to the people who need a pension from their fund. The liquidity horizon on a rolling fund is really uncertain, as is the amount of capital they can deploy into it. So if you’re an LP with a multi-decade allocation strategy, a rolling fund is an intractable investment vehicle for you. But there are ways around that, and there’s some tweaking to the model that could be done to make it more relevant to an institutional LP with defined obligations like that.

44:41
Ash, what is your opinion on insider rounds?

44:46
I don’t understand why people think inside rounds are bad; in fact, I think the opposite. If you’re not doing inside rounds, you are by definition giving away opportunities. If you wait until someone else comes in and prices an opportunity, you have probably missed the moment where you could have paid less, and you are sort of by definition paying the highest price; you go from being a price maker to a price taker. That is, if you’re going to make a follow-on investment in a company. I could go on and on, but that’s how to summarize it: if you don’t do inside rounds, you go from being a price maker to a price taker. As an insider, you have access to information that allows you to make a price for a company, and outsiders do not have that information. If you’re only going to make a follow-on investment when someone else invests at the same time, you’re at the whim of them pricing it, and they’re much more likely to misprice it than you are. The other thing about inside rounds is that you’ve got to be prepared for them. A lot of companies face a lot of problems along the way, and often get themselves into a situation, and you’re complicit in this as the person who’s meant to be helping them allocate capital and set strategy, where no one else will fund it, but fundamentally it’s still a good idea and the timing’s just a bit off. If you don’t prepare for that, you’re going to end up with a bunch of companies that almost make it, and the return on that portfolio is zero. So you’ve got to figure out a way to not get into that situation. And let’s get down to what this is really about: if you make a promise to a founder that you’re going to help them build a company, then it’s your job to figure out how to have enough capital, or get them enough capital, to do that. Yeah, you are the financial service anyway, right? You’re the financial... Exactly. As an investor, you are the financial services provider to a founder, and if you don’t figure out how to raise money from other funds, raise your own funds from LPs to do inside rounds, raise debt, whatever, you’re failing the founder.

47:11
That’s fair, that’s fair. What’s your take on Tiger, and all the news about them? There are lots of differing opinions amongst VCs and startups out there.

47:22
I think they’re phenomenal, and anyone who thinks otherwise is probably missing the point: this is the biggest opportunity in private equity in history. If we think the last 10 years in private equity have been amazing, the next 10 years are going to be even more amazing. And I have really flipped on this, and this is another conversation that would take an hour or so to have, but I’ve really flipped on the idea that it makes sense to be quite price sensitive at the early stages. I think up to a certain stage, and I’m not quite clear on what that stage is in terms of how to define the risk-reduction points in companies and whatnot, it actually doesn’t make sense to be too price sensitive; you actually just want to be super ownership sensitive. Because the opportunity set for a lot of these technology companies is so huge that if you have reasonable ownership and they get there, you’re going to return your fund for sure. And so the point is, I think Tiger is one of the few firms in the market that has properly internalized the shift in outcome size for technology companies, bona fide technology companies, and I think most people are still valuing technology companies, at the stages at which brands like Tiger invest, assuming an outcome size, an exit valuation, that is an order of magnitude off. They’re still in the world where software companies IPO’d at 300 million bucks or a billion dollars and that was an amazing outcome, not in the world in which these companies are IPOing for 50 to 100 billion. And if you are in that mindset, if you’ve fully internalized that shift in exit quantum, then you can do what Tiger does all day long.
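The “ownership sensitive, not price sensitive” point reduces to simple arithmetic. Here is a sketch with assumed numbers showing how the shift in exit quantum changes what a fixed ownership stake returns relative to a fund:

```python
# Ownership arithmetic under a shifted exit quantum; all numbers assumed.
fund_size = 200e6
ownership_at_exit = 0.10   # the stake you fight to hold, net of dilution

for exit_value in (300e6, 1e9, 50e9, 100e9):
    proceeds = ownership_at_exit * exit_value
    print(f"exit ${exit_value / 1e9:>5.1f}B -> proceeds ${proceeds / 1e6:>8,.0f}M "
          f"({proceeds / fund_size:.1f}x the fund)")
```

At the old exit sizes, entry price drives the return; at 50 to 100 billion, a reasonable stake returns the fund many times over regardless, which is why ownership, not price, becomes the binding constraint.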

49:28
Well, when you’re moving that kind of money, you’ve got the opportunity set and the size, but you’ve also got the opportunity cost of not being in it, right? Because you have to find a place to put that money; it’s a different sort of challenge. What about the speed piece, the perceived fast decisions Tiger is making?

49:50
I can’t comment too much on that, because I haven’t been involved in too many diligence processes with them recently, and it just comes down to detail. If I’m involved in a diligence process with any firm, not just them, where I see them ask all the right questions about a company that I’ve already invested in, go through all the things they need to go through, and dig up absolutely everything they need to dig up, then I have a huge amount of respect, whether it takes a day or whether it takes a month. If I see a firm come in and not do that, and clearly miss something, well then, I think that’s a problem for their LPs. So I can’t really comment on them specifically, and I don’t think speed is the thing. You know, I’ve executed diligence processes that were incredibly thorough in very short periods of time, but it’s also taken me four months, in some cases longer; even on seed and Series A deals, sometimes it takes me six months. But the one that took me six months is probably my best investment in my history. So sometimes it just takes that long, and sometimes it doesn’t; sometimes there is not that much to see, and your valuation is based on extraneous factors. So I don’t think speed is that important an indicator of the quality of the diligence process.

51:15
Interesting. I was talking to a notable LP who’s in a lot of funds the other day, and he was saying that what everyone misperceives is the amount of diligence Tiger does before they formally engage. Their diligence process is actually quite long, but the perception is that it’s short.

51:32
Yeah, and this ties your two questions together, which is: if a lot of your belief is that the outcome quantum is so big, the opportunity set is so big, then a lot of your research, all the work you have to do to underwrite a certain valuation, is on the market. And you can do that without speaking to the company at all; you can do that by understanding the market backwards, then going to the company, checking a few things, checking that they are actually building what they say they’re building, etc., and then you can properly underwrite that valuation really quickly. So yeah, that framework is more than sufficient if the basis of your underwriting is the market opportunity rather than something that’s highly idiosyncratic to the company.

52:24
And just to finish up, Ash: what’s the best way for listeners to locate the book and to follow along with you and Zetta?

52:33
Yeah, so the book is all at theaifirstcompany.com. You can find excerpts, you can find the illustrations, you can find reviews and podcasts like this there. You can find reading lists with a lot of other great books about AI, if you just want to learn more about AI. As I said, theaifirstcompany.com for everything. And then I am just @ashfontana, A-S-H F-O-N-T-A-N-A, on Twitter, on LinkedIn, on Gmail, wherever you want to find me. Perfect. Well,

53:08
the man is Ash Fontana; the book is The AI-First Company: How to Compete and Win with Artificial Intelligence. As always, a pleasure. Thanks for joining us today, and looking forward to number four next time. Yeah, we’ll do that. Thank you very much. All right, take care.

Transcribed by https://otter.ai