161. Why SaaS is not a fit for VC and How AI Compounds Competitive Advantage (Ash Fontana)

Ash Fontana The Full Ratchet


Ash Fontana of Zetta Venture Partners joins Nick to discuss Why SaaS is not a fit for VC and How AI Compounds Competitive Advantage. In this episode, we cover:

  • Categories of AI that Ash is most interested in
  • The difference between real AI and AI-enabled companies
  • Why SaaS will cease to be investable by VCs
  • The current AI stage of adoption
  • How he times the market
  • The four phases of AI
  • What phase of AI they invest in
  • How AI is and will be affected by limited data
  • How startups can compete for talent w/ GAFA (Google, Amazon, Facebook, Apple)
  • The moats being created by their AI-first portcos
  • How they think about metrics and milestones for AI-backed companies
  • If AI should be feared
  • and finally we wrap up w/ Ash’s thoughts on Chris Dixon’s position that we will see a movement from centralization back to decentralization in tech– and the role that AI will play

 


Quick Takeaways:

  1. The cost of doing startup investment deals, prior to AngelList syndicates, was about 10x what it is today w/ syndicates.
  2. Having a specific investment focus helps with investment process, operational expertise, talent networks, identifying common problems, customer networking, gaining intelligence and differentiation– and, ultimately, drives better returns (16% higher multiples, 33% higher IRRs and lower failure rates).
  3. Tech is shifting from making humans more efficient to completing activities for humans– and this is why AI is the next revolution.
  4. They consider what type of problem the AI is trying to solve– and whether sufficient data, tools and technology exist to drive prediction in that area.
  5. They invest in AI that creates the core value for the customer/user.  They do not invest in “AI-enhanced” companies where the key differentiation is not AI.
  6. In Venture Capital, one should invest in something that has a competitive advantage for decades– the moats must be durable over long periods.
  7. Because of the importance of long-term, durable moats, SaaS will cease to be a category for Venture Capital investors.
  8. 4 phases of AI– we are currently in Phase 3:
    1. Phase 1 (low risk): AI applied to consumer applications (Google and Amazon giving recommendations)
    2. Phase 2 (slightly higher risk):  AI applied to enterprise SaaS (CRMs suggesting leads)
    3. Phase 3 (high financial risk):  AI-centric applications that completely replace a workflow (AI tech to estimate damage on a car)
    4. Phase 4: (very high financial risk): Applications we never considered before (AI to optimize data center use or energy flow across an electricity grid or making medical diagnoses)
  9. In many cases, even if an AI tech has better efficacy than its human counterpart, it will still incur adoption risk.  Many people are not ready to trust AI as a total replacement for human judgment.
  10. In AI, data is the moat and machine learning is a way to compound the value of that moat.
  11. Questions Ash asks about data:  Is the dataset really hard to get?  Is it fungible?  Does it have high dimensionality?  Does quantity provide quality?  Is it perishable?
  12. Is there a virtuous cycle w/ the data: the data feeds an algorithm that predicts something for a customer, the customer uses the product more and more, that adds more data to the system which makes the system better and better.
  13. The two key questions he asks of startups:  Is there significant value in the data and do they have a way to compound that value.
  14. The key metric for phase 3 AI is:  Is the efficacy better than a human?
  15. AI itself shouldn’t be feared but AI can create monopolistic power, held by a few companies– and that is something to be concerned about.
  16. While investors like Chris Dixon see a future of a decentralized web, Ash cites the significant expense of decentralized applications and how the economics and speed don’t work for many applications.
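Takeaway 12's virtuous data cycle can be sketched as a toy feedback loop. This is purely illustrative: the "model" is a trivial nearest-centroid classifier standing in for any learner, and the data is synthetic. The point it demonstrates is the flywheel itself: usage adds labeled examples, and prediction quality improves as the dataset compounds.

```python
import random

random.seed(0)

def true_label(x):
    """Ground truth the model is trying to learn (illustrative)."""
    return 1 if x > 0.5 else 0

class NearestCentroidModel:
    """Stand-in for any learner: predicts via the closer class centroid."""
    def __init__(self):
        self.data = []  # (feature, label) pairs accumulated from product usage

    def add_example(self, x, y):
        self.data.append((x, y))

    def predict(self, x):
        centroids = {}
        for label in (0, 1):
            pts = [f for f, l in self.data if l == label]
            if pts:
                centroids[label] = sum(pts) / len(pts)
        if not centroids:
            return 0  # cold start: no data yet
        return min(centroids, key=lambda l: abs(centroids[l] - x))

def accuracy(model, n=1000):
    pts = [i / n for i in range(n)]
    return sum(model.predict(x) == true_label(x) for x in pts) / n

# The flywheel: usage -> data -> better predictions -> more usage.
model = NearestCentroidModel()
history = []
for round_ in range(5):
    # Each round, customers use the product and contribute 20 examples.
    for _ in range(20):
        x = random.random()
        model.add_example(x, true_label(x))
    history.append(accuracy(model))

print(history)  # accuracy trend as the dataset grows
```

With each round of simulated usage the centroids settle toward the true class means, so the model's decision boundary converges on the real one; in a real product, that improvement is what pulls in more usage and closes the loop.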

Transcribed with AI:

0:03
Welcome to the podcast about investing in startups, where existing investors can learn how to get the best deal possible, and those that have never before invested in startups can learn the keys to success from the venture experts. Your host is Nick Moran, and this is The Full Ratchet.

0:23
Welcome back to TFR. Today we welcome Ash Fontana from Zetta Venture Partners. Ash's firm was the first to focus exclusively on machine learning based companies. Ash brings deep knowledge on AI, data, and creating compounding competitive advantage. Previously, Ash spearheaded the effort to launch syndicates on AngelList, an investment approach that has created significant value for our firm, New Stack Ventures, and our backers. In this interview we cover: the categories of AI that Ash is most interested in; the difference between real AI and AI enabled companies; why venture backed SaaS businesses will falter; the current AI stage of adoption; how he times the market; what phase of AI they invest in; how AI will be affected by limitations on data; how startups can compete for talent with GAFA (Google, Amazon, Facebook and Apple); the moats that are being created by their AI first portfolio companies; how they think about metrics and milestones for AI backed companies; and if AI should be feared. Finally, we wrap up with Ash's thoughts on Chris Dixon's position that we will see a movement from centralization back to decentralization in tech, and the role that AI will play. According to CB Insights, AI investing just had its biggest quarter, with over $1.9 billion invested in Q1 of '18. The segment continues to grow as it moves from hype to real applications driving real value. I hope you enjoy the discussion. Here it is, with Ash Fontana of Zetta Venture Partners.

1:59
Ash Fontana joins us today from San Francisco. Ash is a Managing Director at Zetta Venture Partners, which was the first fund in the world to focus exclusively on machine learning based companies. They manage $185 million and have seen some big early successes, with unicorn Domo, and also Kaggle, which sold to Google about a year ago. Prior to Zetta, Ash played a key role at AngelList with their syndicate deal platform. Ash, welcome to the program.

2:27
Thank you, Nick, thanks for having me.

2:30
Yeah. Can you start off with your background? And how you ended up at Zetta?

2:34
Yeah, sure. Um, I guess when I was a kid, I was really interested in two things: I was weirdly interested in pulling apart companies and looking at balance sheets and investing in stocks, but also pulling apart computers. And I just kept following those interests, I guess, and realized there was a job that existed that combined those two things, and that is investing in technology companies. So realizing that was a job was a very satisfying moment in my life. But then I had to go out and get the skills to do that job. So I went to law school, and I worked in lots of different types of investing: growth equity, public equities research, investment banking. And then I felt like I had the skills to go for it. And out of nowhere, an opportunity popped up to join AngelList, after I had started and sold my previous startup. And I just started working there and got a very unique perspective on early stage investing, and really got the business side of the company going. So I worked with the teams to build the products, set up the funds management, did sort of, I guess you'd call it, business development to get syndicates off the ground. And it was just two of us really working on that in the early days, and now it's obviously the platform, managing billions of dollars. So that was a very cool ride. And it was only after that that I felt ready to concentrate my efforts and really focus on helping companies on a one to one basis and managing that amount of money. And that's when I met my partner Mark, and we really gelled over this focus that we have and what we wanted to invest in, and went for it.

4:16
Good. So I should thank you for helping spin up syndicates. That's where I kind of got my start. You should

4:22
thank a lot of people. I mean, I'll never forget the day Naval walked in the door with this brainchild. We were doing this online investing thing where startups could raise money for themselves online, and it was working well. But Naval just sort of walked in one day, in one of these brilliant moments that he has, and he's like, what if we just let anyone raise money for anyone using this platform, and call it syndicates? And I was blown away. I'm like, that's a fantastic idea. And he's like, okay, you go and do it now. So I teamed up with another guy there, Mike, a fantastic engineer at AngelList, and we did our best to get it going. Obviously, many people have touched the product since then, and it only is what it is because, you know, it is just the platform, right? The real work on top of it is done by people like you who run the deals and do the research and own the relationships. Well,

5:15
it's an amazing platform. And coincidentally, I'm going through the closing process on my fund right now, and I'm actually using AngelList. They have full venture fund capability and act as our back office. So it's, oh yeah, amazing.

5:28
Yeah, I'm fully aware. We set up all the funds management stuff and worked really hard, basically, to automate as much of that as possible. You know, when we started, the price of doing it was more than 10 times what it is today. So there have been a lot of little things over the years that have reduced that cost and made it a lot easier. Yeah,

5:50
it's nice, it's nice. I guess I'm going to be in the first tranche of full venture funds on the platform. So yeah, fingers crossed; everything's been good so far, and I really enjoy working with the team over there. It's great. So anyway, let's get onto Zetta. Tell me about your investment focus, and sort of this machine learning orientation that you guys have? Yeah,

6:14
I guess maybe I'll start by just talking about focus in general. I mean, what does it mean to be a focused fund? Why be a focused fund? Because, you know, most venture funds are not focused; they're technology generalists, so to speak. And we've just sort of taken the view that being extremely focused is much better for developing an investment process. It's much better in terms of the operational expertise and the problems you come across; you see more similar problems. The networks of people you hire from, you can be more focused in building those networks: so, for example, building a network of very good machine learning engineers, or very good data engineers and data infrastructure people. Networking with customers, getting intelligence on who's acquiring what, and just marketing our firm, right? There are a lot of venture firms today and a lot of different ways to break out, but one of those ways is really going deep in an area. So we just believe in having a focus. Interestingly, it's empirically proven that focused funds perform better than generalist funds. There was a pretty good study by Josh Lerner at Harvard a while ago, and he showed that focused funds get a 16% higher multiple on invested capital and a 33% higher IRR than generalist funds. And they also have a lower failure rate, so they have a lower loss rate in the portfolio. It's more of a side point, and you can question the methodology, but it's interesting to note that, empirically, focused funds performed better. So we wanted to be focused. But, you know, if you focus on a sector or trend, it's easy to be focused on something that's a bit more of a fad. So we sort of thought, well, what's a fundamental shift in computing that's going to affect the next couple of decades of technology investing?
And that shift in computing is the shift to intelligent systems: you know, moving from a world where your software is just a fast calculator, or something that executes a little workflow for you, to something that makes decisions for you. When you think about what technology is, it's something that gives you leverage as a human being. And if technology can go from just helping you do something quickly to helping you actually make a better decision, that's more leverage. And we thought that shift in computing was going to affect everything: every category of software will go from being a workflow to being an intelligent system. And so that's what we decided to focus on, and we officially started in 2013. That was around the time where, you know, it was pretty obvious to us, at least, that there was a resurgence in research in machine learning, that computing power was getting cheap enough, that the right sort of chips were available on the cloud to run these models, and that there was a lot of data. And, to sort of wrap it up, that's why we're called Zetta: because in 2013, a zettabyte of data went across the internet for the first time, and that is what's really enabling this era of intelligent systems.

9:21
Got it? So so how do you frame out AI? And sort of what are the categories that you’re looking at within it? Do you look at machine learning NLP? You know, under categories that way? Are you kind of taking a different frame?

9:35
Yeah, you know, now that we have that focus on intelligent systems, anything within that is fair game, both in terms of the types of technologies people use to build intelligent systems, and also the areas to which they're applied. So, on types of technologies: some problems, for example optimizing the use of supplies in a hospital, just need very good regression methods; figuring out certain supply chain problems just needs very good probabilistic techniques or probabilistic programming techniques. But some problems, like language translation, need sort of very cutting edge neural networks. So, different tools for the job. And, you know, it's very hard to stay across all the areas of intelligent systems and machine learning, but we have a shot at it because, you know, we don't have to, for example, know anything about consumer branding, or marketplaces, or social networks, or e-commerce. We don't profess to, and nor do we think we do, know anything about those areas. But we do know a fair bit about machine learning. So, different tools for the job; all different areas of machine learning are fair game for us. And also sectors: you know, we don't just focus on industrial IoT or industrial intelligence systems, and we don't just focus on applying this to e-commerce applications. We think it's going to affect every area of technology, every area of industry, and so we look at all of them. I guess how we broke it up is based on the type, or the quality, of problem the AI is trying to solve. So, is it trying to solve a problem that is, one, solvable by using these methods? Like, can the machine learning actually deliver a prediction that's of value to the customer? And is that technology available today, is it available cheaply enough, and can you get the data to feed it?
And so we sort of focus more on where the technology is on the risk curve and the adoption curve, and less on what particular tool they're using or what industry they're in. Got
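Ash's point that some problems "just need very good regression methods" can be made concrete with a minimal sketch: ordinary least squares fit in closed form, forecasting, say, weekly hospital supply usage from patient volume. The dataset and variable names here are hypothetical, invented for illustration; no ML framework is needed for this class of problem.

```python
def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x, computed in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope: extra supply units per extra patient
    a = mean_y - b * mean_x  # intercept: baseline supply usage
    return a, b

# Hypothetical weekly data: patient volume -> units of supplies consumed.
patients = [120, 150, 180, 200, 240]
supplies = [260, 320, 370, 410, 490]

a, b = fit_ols(patients, supplies)
forecast = a + b * 220  # predicted supply need for a 220-patient week
print(round(a, 2), round(b, 2), round(forecast, 1))
```

A two-line closed-form fit like this is the "right tool" for a low-dimensional, roughly linear operations problem; reaching for a neural network here would add cost without adding predictive value.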

11:43
it. I always find it funny. So my firm has a pre-seed focus, and we also have an IoT focus. But I always find it funny how I talk to experienced practitioners within venture, and their feedback to me is, ooh, are you too focused? You know, can you get enough deals? But I'll talk to people in other asset classes, and they're like, oh my gosh, you're not focused at all; shouldn't you focus within IoT, or, you know, to your point, within AI? It's all a matter of perspective. I think the insiders probably understand it a bit better. Yeah. And

12:15
look, it's been an education process for us too. You know, when we first started raising the fund in 2013-14, we absolutely got that feedback as well, everything from "why so focused?" to "we've heard this story before in the 80s." And now, people are asking the question, you know, are you focused enough? So we've certainly got that feedback. But, you know, we've just sort of stayed the course. I think it is very early days for the application of intelligent systems to lots of different problems of great consequence to society. And so we're just staying the course and staying focused on it and trying to keep up. You know, in the early days, it was really easy to keep up with all the research papers coming out of the big institutions and companies, but now it's a real fire hose. So we're doing our best. Awesome.

13:06
So Ash, I gotta ask you, you know, it seems like lately every other deck I get, especially from SaaS companies, is "powered by AI" or "built on blockchain." What do you think of the numerous companies that aren't really AI, but maybe they're AI enabled? It almost seems like every startup founder wants to use the AI buzzword, even if that's not the compelling differentiation.

13:35
Yeah, for sure. And look, most of them won't have any sort of intelligent system in play, or will just sort of be pulling something off the shelf. But that's pretty easy to prove, you know, for people like you and I to figure out, and that's fine. You just assess that company on a different basis. You just ask, well, do they have some other source of competitive advantage? Us, we're completely focused on valuing data and machine learning as a source of competitive advantage, but of course it's not the only one. You can have brands, networks, critical mass of a marketplace; they're all pretty compelling sources of competitive advantage. So you work out pretty quickly whether AI is the core competitive advantage or not. Then it gets a bit more nuanced. Is it just a sprinkling of AI, like a few predictive features on a SaaS product? Or is AI really at the core of it, where the whole value to be delivered to the customer depends on getting the prediction right, whether that's recognizing something, maybe in an image, or trying to extract some meaning from a paragraph of text? The former category I sort of call AI enhanced: sprinkling AI on SaaS. That can be everything from, you know, a nice feature that doesn't really differentiate you in the market, to something that's pretty valuable to customers. And you've just got to learn, by talking to customers and figuring out how hard that feature was to build, whether you're confident as an investor that it's a source of competitive advantage and differentiation in the market. So those sorts of products can be good. I will say, in general, if you're just sort of sprinkling AI on a SaaS product today, it's probably not enough to differentiate, because there are a lot of great tools available, and a lot of people with that data can build some sort of predictive feature pretty easily.

So it's probably not enough to be a source of sustainable competitive advantage today if you just sprinkle AI on SaaS, I would say. So you've just got to figure out whether it's a sprinkling, or whether it was actually really hard to do.

15:41
Sure, sure. Well, clearly you're somebody that likes, you know, core technology as a driving differentiator and value source. And you've written about SaaS before; I've read some of your articles, and you take an interesting viewpoint, maybe a contrarian one. So tell me more about what you think of SaaS as an investment category within venture capital.

16:05
Yeah, so the key words there, I guess, in that question are "within venture capital." I will say at the start that there are endless opportunities for entrepreneurial people, and for investors that want a non-venture-type return, to build SaaS products for a whole long tail of industries that need better software. That's still, and will be, a huge opportunity for many years to come. However, I think for venture capital funds, you know, given the duration of our funds, you have to be investing in something that, presumably, has a competitive advantage for decades. And I think it's pretty hard to build a competitive advantage by just building software. Software is not as hard to build as it used to be, and building a really nice workflow product that's well designed, that runs fast, that's in the cloud, you know, it's not that hard. A lot of people could copy that product in a year, for example, or six months. And so, you know, if you're a venture capital investor investing in that sort of stuff, and it's copied one or two years later, you haven't even exited at that point, and the company will probably not enjoy a competitive position in the market, and then won't be valued very well, and then you won't get your return. Whereas if you're an investor investing in other areas of technology that are very, very difficult to build with today, so for example machine learning, you're going to be a little bit further ahead of the market. And so by the time your company gets to the point where it might be able to exit and provide a return to you, and therefore to your limited partners, it will be valued more highly. So, to sum all of that up, I think that pure SaaS is not really a category for venture investors. It might be a category for debt investors, or very, very early stage angels that get in at great valuations.

But for venture, it's just hard to believe that you can have a company that's going to keep delivering good returns to you in 5-10 years' time.

18:14
Wow, that's controversial. So you think SaaS is dead, or SaaS is dying, at least for venture capital investors moving forward? Like,

18:22
seed and Series A venture capital investors, I think it's pretty hard. Of course, some products will be unique, but just building software is not going to keep you ahead of the game anymore.

18:35
Interesting. So, back to AI: where do you feel like we're at in the adoption of AI?

18:43
Yeah, good question, and we think about this a lot, right? My partner Mark likes to say, you know, as venture capital investors, we're just paid to time markets. And so we think a lot about where we are in the adoption of AI by real companies, in terms of buying products enabled by it. And where we've landed recently is breaking it up into sort of four phases of the adoption of AI. The first phase is AI applied to consumer applications. We saw this about 10 to 15 years ago, where, you know, Google uses AI to make search results better, where Amazon uses it to give you product recommendations. And, you know, the risk of adopting AI in those situations is very low. If Amazon gives you a product recommendation that's good, awesome, you'll buy it; but if it's not very good, you know, it might suggest a silly mask or costume or something, and you just laugh and move on. So it's pretty low risk. What's that?

19:43
So we've all been there, right? We have all been

19:46
there, with Google giving you a prediction that's completely bizarre, or Netflix recommending something inappropriate to your kids. So it's not the biggest cost for those companies to adopt AI early, and that's why they did it first, right? Google, Facebook, Amazon, Netflix, they did a great job of pioneering the use of AI in their products early on. And then we moved to a slightly higher risk application of AI, and this was around 2010 to 2015: these AI enhanced applications. So you have, like, a CRM or sales lead gen tool, and it would start using predictive algorithms to suggest a lead to you. Now, again, that's not zero cost if that lead is bad, because you've had to make the phone call and it turned out to be a bad one. But it's pretty low risk, and it has a pretty nice payoff, because if that lead is good, you know, the AI system has recommended something to you that's made you money. So sprinkling AI on SaaS was the next risk point, or the next phase of adoption. And I think we're sort of getting to the end of that phase, where consumers of enterprise software are pretty comfortable being suggested things by AI, if it's not going to cost them anything to have a look at that suggestion. The third phase of AI is this AI centric stuff, which is where you're starting to completely replace a workflow. So, you know, you're using image recognition technology to assess the damage on a crashed car and make a decision about whether to repair or replace that car: a significant financial decision. And there's no human really involved in that; maybe they're involved in the labeling and training of the system, but there's no human involved in the decision. And that's a pretty risky proposition for an enterprise buying that product, because if the AI fails, the whole product fails.
But, you know, computer vision and other areas are getting so good that enterprises are thinking about adopting products like that. So I think that's the phase we're in now, where AI is getting more central, and the risk of using it is pretty high, but the payoff is great, because it replaces an entire process in a company. The next and final stage of AI, which we're just starting to get into, I think, is applications that we didn't even think of before, that we can start thinking of because we have these AIs that understand really complex systems. So that is, you know, using AI to optimize data center power usage, or, even bigger than that, using AI to optimize the flow of energy across an entire electricity grid. They're things that humans just can't even think about solving, because we can't handle that degree of complexity in our heads. But they're obviously very risky situations; like, if you let an AI run wild on the power grid and it doesn't work, then we're all in trouble. And that's true of a lot of medical applications as well, right? You know, it's very different to make a suggestion to a doctor, as opposed to replacing a decision that the doctor would make. The popular example is analyzing images, so radiography and such. So I don't think we're quite at the point where we're ready to trust AI to do those things, but we're getting there. And so we're sort of in this third phase of AI adoption, getting into the fourth phase.

23:16
Interesting. Yeah, your doctor example: I was reading an article recently, I can't remember where it was, but it showed that if you take doctors and show them an x-ray, for instance, the diagnosis changes across an array of very intelligent doctors; you know, all 10 of them, looking at the same x-ray, have completely different diagnoses of what the issue is. And even if you give the same doctor the same x-ray, mixed in with others, over and over again, he will diagnose it differently every time. Yeah, so it didn't increase my confidence in our medical professionals. I don't think it's their fault, but I mean, it was an alarming study to read. It is,

24:03
but you know, it's also that, even when presented with that information, people, for many, many reasons, are still just not willing to let AIs that are objectively better at diagnosing completely replace a human being. Because, for all the obvious reasons, medicine is not just about the diagnosis; it's about the care. And so we're still trying to figure out how to use AI, or how to get people to adopt AIs to do those things, to completely replace humans. And so that's an adoption risk, right? And, you know, as investors, we're paid to price that risk. And we figure out, or price, that risk by talking to a lot of doctors, talking to a lot of hospital administrators, talking to a lot of patients, reading studies like the one you were reading, and figuring out, okay, will these things change in the next five years? Because if they do, this technology will absolutely be adopted, and we'll have the leading technology in the space, and that space will be worth it. That's our process. So it's very important to be cognizant of exactly what adoption risks you're pricing, and that's what we do specifically in our field with AI. And it's such an interesting question with AI, right? Because it's such a human question, really, with AI being good enough to replace humans in some tasks. It's very interesting to consider: okay, is it objectively better, but still won't be adopted, like the example you gave?

25:39
Yeah, for sure. So, to your earlier point about timing the market: you talked about these different phases, phase one through four, and how we're kind of in the midst of phase three. What are the implications for your investment strategy? You know, it's a long cycle business; does that mean that you're firmly embedded in investing in the phase three tech? Or, you know, will you go and invest in phase one or two? Or is it all about phase four at this stage?

26:05
Yeah, good question. Um, I can probably take it down to brass tacks pretty easily, which is that, you know, if something's in phase two, which is these AI enhanced applications, like SaaS with a sprinkling of AI, we tend to require a bit more traction and a bit more market proof, because they probably won't enjoy a competitive advantage just by virtue of their technology and their data for much longer. So if it's a phase two company, we'll absolutely look at it; as I said, there are so many ways to apply this to solve so many problems, but we'll require a bit more traction. Whereas these phase three companies, these sort of AI centric companies that are using AI to replace a complete workflow, we'll invest in absolutely well before traction, when it's just a prototype or a product, and they're collecting unique data, and they've got some experimental evidence that the predictions they're able to make are accurate and have some sort of commercial or industrial consequence. So these phase three companies, we'll look at when it's more of a product or a prototype collecting unique data. These phase four companies, which are using AI to solve problems that, you know, we don't even use software to solve today, are companies we're very, very excited about. And, you know, if it's a credible team, and they have access to unique data to run the experiments they need to run, then we'll start backing them at that stage. So again, it's just about figuring out exactly where they are in terms of adoption risk, and also how defensible the technology will be. If the technology is going to be really defensible, like, for example, building an AI that understands demand and supply on the power grid, then that's going to enjoy a really long period of competitive advantage in the market, so we can invest a little bit earlier there. So that's how we think about it. I

27:59
like it. So, you've referenced data a couple of times now. I was chatting with a friend and advisor, Leo Polovets at Susa Ventures; he's got a thesis and focus around data. And, you know, part of his position, or part of the challenge with data, is that the datasets themselves can be limited, right? If you don't have a broad and robust dataset, then whatever algorithms or other forms of intelligence you apply can be limited. You need a robust dataset. So, you know, how do you think this intelligence era, as you call it, and AI itself, may be constrained, or not, by limitations of data?

28:42
Yeah, it's a really good question. And funnily enough, I'm just getting to the tail end of writing a book right now about what competitive advantage means in this sort of fourth era of computing, this AI era. A big part of the book, and I didn't expect this, but once I started writing about it I figured it would be quite useful, so I wrote a lot about it, ended up being tactics for acquiring datasets. And I go through all sorts of tactics. The most obvious source of data is just getting it from your customers, and that involves all sorts of questions around the rights in your contracts, and figuring out how to build a customer data network to get customers comfortable with sharing data with each other, and things like that. There's building a workflow application that ostensibly does one thing but is actually collecting data for another thing, and has this interesting data resource, so to speak; building really interesting integration ecosystems to collect a bunch of data from a bunch of different sources, and a company we work with does that. There's this whole new area of business operations forming around how you build really efficient data labeling operations. A couple of companies we work with, and for which I sit on the board, have built these amazing teams that are really efficient at using non-expert humans to label datasets that previously were only understood by experts. And then they use all sorts of tools, like interactive machine learning and active learning tools, that help the humans get faster and faster at labeling the data. There's all sorts of cool stuff, everything from acquiring data from government sources to creating token-based incentive networks for people to contribute their own data, where if that data is bought, the value accrues back to the token holders, like a crypto token holder.
So there are so many interesting ways to build datasets, and we see a lot of weird and wonderful ways that people do that. Now, the elephant in the room in this question is that these massive companies like Google, Amazon, Facebook, and Netflix have huge datasets, and they're not giving them away anytime soon. So how does a startup compete? I really think there are lots of ways a startup can compete if they pick their niche accordingly. Sure, you're not going to beat those companies at acquiring certain datasets about consumer behavior. But if you're trying to acquire a dataset around, say, how a certain process works in a factory, then there's an opportunity to do that in a way that no other company can. You've just got to be really creative.
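The interactive labeling and active-learning tools Ash mentions typically work by having the model surface the examples it is least confident about, so human labelers spend their time where it helps most. Here is a minimal, hypothetical sketch of that uncertainty-sampling idea; the function names and the toy scorer are invented for illustration:

```python
# Uncertainty sampling: rank unlabeled items by how unsure the model is,
# and send only the most uncertain ones to human labelers.

def uncertainty(prob_positive):
    # 0.5 is maximally uncertain; 0.0 or 1.0 is fully confident.
    return 1.0 - abs(prob_positive - 0.5) * 2.0

def pick_items_to_label(unlabeled, score_fn, batch_size):
    # Return the batch_size items the model is least certain about.
    ranked = sorted(unlabeled,
                    key=lambda item: uncertainty(score_fn(item)),
                    reverse=True)
    return ranked[:batch_size]

# Toy scorer: pretend the model is only confident about multiples of ten.
def toy_score(x):
    return 0.9 if x % 10 == 0 else 0.55

items = list(range(1, 21))
to_label = pick_items_to_label(items, toy_score, batch_size=3)
print(to_label)  # the uncertain, non-multiple-of-ten items rank first
```

In a real labeling operation the toy scorer would be the current model's predicted probability, and the loop would repeat: label the chosen batch, retrain, and re-rank, which is how the humans "get faster and faster."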

31:24
So is it going to be the non-consumer opportunities where you think startups have the better opportunity?

31:32

Sorry, in a very general sense, yeah. We just don't work with any consumer companies, or look at any opportunities that are direct-to-consumer, sort of for that reason. You know, also because our experience is in enterprise, and in sales and marketing and that sort of thing. But yeah, we do have a belief that it's pretty tough to beat a lot of those big companies, or get a data advantage, if you need certain consumer behavior data, because those datasets are sort of locked up.

32:02
Have you ever passed on an investment because, let's say, the tech itself was incredibly compelling, but you had concerns about the lack of a robust enough dataset to leverage?

32:16

Yeah, that's a really interesting question, I think, for a lot of investors in this field, which is: is the algorithm enough? Do you need the data? At what point do you invest? When's too early? Like, how do you price the risk of that machine learning system actually working in practice? It could look fantastic in a research paper, but once you start feeding data through it, it might be completely unstable. Six months later, it might start spitting out completely ridiculous results. So we have passed on companies that may have had a really interesting theoretical approach to solving a problem, but hadn't actually put real data through it and seen whether the algorithm is stable over time, for example. And we have a bunch of different ways we analyze datasets and analyze the efficacy of the machine learning model, and it's relatively quantitative, the way in which we analyze that stuff.
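One way to make that stability check concrete is to score the model on successive time slices of held-out data and flag it if any later slice falls too far below the baseline. This is a hedged sketch, not Zetta's actual diligence method, and the 10% tolerance is an invented example:

```python
# Check whether a model's accuracy stays stable across time slices of data.

def accuracy(pairs):
    # pairs: list of (predicted, actual) labels.
    return sum(1 for pred, actual in pairs if pred == actual) / len(pairs)

def is_stable(slices, max_drop=0.10):
    # True if no later slice falls more than max_drop below the first slice.
    baseline = accuracy(slices[0])
    return all(accuracy(s) >= baseline - max_drop for s in slices[1:])

month1 = [(1, 1)] * 10                 # 100% accurate
month2 = [(1, 1)] * 9 + [(1, 0)]       # 90% accurate: within tolerance
month3 = [(1, 1)] * 5 + [(1, 0)] * 5   # 50% accurate: the model has drifted

print(is_stable([month1, month2]))          # True
print(is_stable([month1, month2, month3]))  # False
```

Running the same experiment "multiple times with different data," as Ash puts it later, is exactly this kind of check: the interesting failure is not a bad first result but a result that decays as new data arrives.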

33:12
Awesome. So, we talk a lot about moats on the program. We had Zavain Dar from Lux talk a lot about moats. We had Tim O'Reilly talk about moats, and James Hardiman from Data Collective. How do you think about moats with regard to these AI-first companies that you're investing in?

33:34
So I think the distinction is that data is the moat, and machine learning is just a way to compound the value of that moat, or increase its size at an increasing rate. So step one for us is always analyzing the dataset. Is the dataset something that was really hard to get? Is it fungible data? It's all well and good to have a dataset that was really hard to get, but if you can feed the same algorithm with data that is very similar but easier to get, then that dataset isn't very valuable as an asset. Does the data have high dimensionality? What does it tell you about the problem you're trying to solve? Does it have breadth? Does quantity provide a quality all of its own with that dataset? And is it perishable? Is it a point-in-time dataset? Is it a dataset that refreshes itself? Does it need to be refreshed? Will the perishability of the dataset affect the performance of the predictive algorithm over time? We run through all these questions when we're looking at the dataset. And then, once we're satisfied that the data is sufficiently unique as an asset, we figure out: okay, how do you use this to develop a competitive advantage that's going to grow over time using machine learning? By feeding this data into an intelligent system, you get this virtuous loop going: the data feeds an algorithm that predicts something for a customer, the customer really likes that, so they use the product more and more, and that adds more data to the system, and that makes the prediction better, and so on and so forth. That virtuous loop effect was one of the very first things we wrote about when we started the fund in 2014.
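That flywheel can be caricatured in a few lines of code: prediction quality grows with accumulated data, and better quality attracts more usage, which contributes more data. Every number here is invented purely to show the compounding shape of the loop, not any portfolio company's actual dynamics:

```python
# Toy simulation of the data flywheel: data -> quality -> usage -> more data.

data = 1000.0          # units of proprietary data collected so far
history = [data]
for month in range(12):
    quality = data / (data + 5000.0)   # toy diminishing-returns quality curve
    new_data = 2000.0 * quality        # better predictions drive more usage
    data += new_data
    history.append(data)

# Data grows every month, and growth accelerates while quality is improving.
print(round(history[1]), round(history[-1]))
```

The point of the sketch is the shape: each month's data haul is larger than the last, because the quality term feeding usage keeps rising, which is the compounding advantage Ash is describing.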
So how do you measure that at the seed stage? We're investing mostly, definitely, pre-traction, and sometimes before the product is all the way in the market. The only way you can do it is by looking at experiments: taking the dataset and trying to make a few predictions that you think will be of value to customers. We dig into those experiments with companies in a second or third meeting, and we try to understand: okay, what accuracy threshold are you trying to hit? Is this going to be useful to your customers if it's 70% accurate, or 80%? Or does it have to be 100% accurate? So we figure out the accuracy threshold, and then we figure out how close they are. When you ran your experiment, what predictive accuracy did you get, with what precision and recall? And given the delta between what your customers want and what you can do today, what are you going to change about the system to try and make that up? Then we ask a bunch of other questions: how much data do you need to get there? Do you need a bit more data, or do you have enough today? What's the critical mass of data? Did you run that experiment multiple times with different data, and did it stay the same, or did it start predicting weird things or generating weird sentences? And what's the payoff for your customer in the end; how much value are they getting? So step one is to analyze the data and figure out if it's actually a unique asset, and step two is to figure out whether they have some way to use an intelligent system to compound the value of that data.
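The experiment review Ash walks through reduces to computing a few standard classification metrics and comparing them to a customer-driven target. A self-contained sketch; the predictions and the 0.8 target are made up for illustration:

```python
# Accuracy, precision, and recall for a binary prediction experiment.

def metrics(preds, actuals):
    tp = sum(1 for p, a in zip(preds, actuals) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(preds, actuals) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(preds, actuals) if p == 0 and a == 1)
    tn = sum(1 for p, a in zip(preds, actuals) if p == 0 and a == 0)
    return {
        "accuracy": (tp + tn) / len(preds),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

preds   = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
actuals = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
m = metrics(preds, actuals)
target = 0.8  # hypothetical customer accuracy threshold
print(m["accuracy"] >= target, m)
```

Precision and recall matter alongside accuracy because a model can hit an accuracy target while still missing the cases a customer cares about, which is why all three come up in those second and third meetings.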

37:01
Love it. So, this is kind of an interesting point. We've got these standardized metrics in SaaS, for instance, right? SaaS is pretty metricized; it's pretty clear to entrepreneurs, all the entrepreneurs listening in the audience, what sort of milestones and thresholds they have to reach to get to various stages of fundraising. And you went through some of the main things you're looking for, like accuracy, precision, critical mass of data, the efficacy of different datasets, as well as the value for the customers. Are there standardized metrics and milestones in AI that you're looking for along these different dimensions? Or are there other dimensions that you're sizing up as companies move through these different stages?

37:46
Yeah, that's a really good question, and it's sort of our big work in progress: how do we standardize around some of these things so that we can improve our decision making as investors? The short answer to your question is no, I don't think there's a standard measure of, okay, this company's AI is really working, I guess because it's being applied to so many different fields. As we talked about before, in medicine you probably want to be close to 100% accuracy if you're making a life-critical decision, but in some other field, like maybe inventory management in a retail store where you're only 50% accurate today, getting to 70% is pretty good. So I think the reason there isn't a standard metric here is that the applications are idiosyncratic to the industry you're applying them to. However, I think the types of metrics we should be measuring are starting to get standardized, and they're some of the things I mentioned. I guess the global metric you would think about for these AI-centric applications, the third wave, referring back to something we were talking about earlier, is "is it better than a human?" That's probably the main metric. Now, I don't think it's fair to expect a seed-stage company to have developed an AI system that's better than a human, but if it looks like it might be better than a human in an experiment, then that's pretty promising. You just figure out all the ways in which that could change once it goes into production. Right.

39:28
So, the burning question that everyone always has: is there a reason to fear AI? I want your take on this. Part of the reason is, I was talking to Tim O'Reilly about this, and when I asked him he said there is no reason to fear it; it's a tool, just like every other type of technology. But then he qualified his answer by saying, well, if you did want to build some type of runaway AI, you'd do it on a blockchain foundation. So anyway, I just want your take: artificial superintelligence, when we get there, is that something to be feared or not?

40:04
Yeah, look, the short answer I'd give the family at Thanksgiving is: absolutely not, you've got nothing to worry about. Because when you're in the weeds, when you're looking at what these companies are actually doing day to day, it's still sort of basic computation. They're not really doing anything that's scary. These are systems that require a lot of hand-holding, and there's a lot of human involvement today, a hell of a lot: everything from labeling the data, to picking which features to even try, to trying different methods, making it work on different hardware, and then making it work under all these different operating conditions. There's so much hand-holding. So I really think there's nothing to worry about today in terms of whether AI could be used to do something we don't want it to do; it's just not going to get any runaway abilities. However, what AI is, or what machine learning more accurately is, is a way to let computers do more stuff, and it's being used by the big companies to develop runaway competitive advantages. So I think we need to reframe the debate to be completely about monopolies, and to really think about whether we should redefine our understanding of what a monopoly is. I did my JD and have looked into the origins of antitrust, and the definition we have of a monopoly is really just: if it's good for the consumer, then it's not a monopoly. That's a really one-dimensional way to think about a monopoly, because "good for the consumer" reduces to, if it's cheap, it's good. But we all know that if it's cheap, it could be bad. It could be bad for the environment, it could be bad for culture, it could be bad for people's emotional health, right?
It could be bad for a whole bunch of different reasons. The Chicago school of economists sort of managed to commandeer the definition of efficiency as determined by price, and that has informed our view of what a monopoly is. I think we need to go back and question that, because, to bring it back to your question, some of these tech companies are building new runaway advantages, and there are massive monopolies that are subsuming entire industries, like retail. So I don't know; I think we don't have to be worried about an AI getting a runaway advantage. I think we do have to be worried about a group of people running a company getting a lot of power in society, because they're using AI as a lever to dominate entire industries and get a lot of money for themselves.

42:54

Interesting. So I'm kind of glad you brought up this point on monopolies, because I was reading an article the other day that Chris Dixon wrote. It's about the centralization of power with Google, Apple, Facebook, Amazon. He's writing about centralization versus decentralization, how the web and the internet started as this decentralized group of contributors creating standard protocols, and how it's shifted into power centralized with these tech companies. And he thinks Web 3.0 is going to shift back to decentralized environments. He cites drivers like crypto networks, and also the breadth of developer talent that's outside of these tech companies. I'd just love to get your quick take on whether you agree or disagree with Chris, and how you think AI might play a role.

43:45
Yeah. So I think the first thing to say is that these tech monopolies have such a runaway advantage at this point that there's no real turning point, no countervailing force in the technology ecosystem, that's really going to stop them. These algorithms are so good, and they're just getting better and better; there are no real breakpoints in the success of the products and technologies. So breaking these monopolies up is only really possible through some massive exogenous threat, something that comes from outside the technology ecosystem. It's going to come from a societal backlash, or a regulatory action, or a fundamental change in the internet, which gets to Chris's point about a decentralized web, Web 3.0. So I would say it has to be something of that magnitude, like a full decentralization of the web, to break down the monopolies. Now, is that going to happen? Look, I'm all for decentralization, or the technologies that decentralize; they're really interesting technically, and what they may allow for society could be really great. My frustration with these sorts of arguments so far is that we're not really grounding them in computational capability and complexity. Decentralization is all well and good, but it's just really, really expensive to run these decentralized networks today. I think we need to start having more rational conversations around what we can and can't decentralize based on the computational capabilities we have today. Should we be decentralizing the entire payments ecosystem? Probably not, because that's just not going to work for us; we're used to payments of a certain convenience and speed, and we just can't run computers on decentralized networks that fast. But is it worth decentralizing access to healthcare data, or decentralizing stock exchanges?

Probably, because we can run computers fast enough to do things at the speed we expect and require in those areas. So I'm all for it, but I think we need to start grounding some of these arguments in computational capability and complexity.

46:03
Ash, if we could cover any topic here on the program, what topic do you think should be addressed, and who would you like to hear speak about it?

46:08
Yeah, I think something that isn't spoken about enough outside the doors and offices of investors is really nuts-and-bolts questions about process. What questions do you ask? What models do you build? How do you do valuation? It goes back to the Picasso quotation: when art critics get together, they talk about form and function and whatever, and when artists get together, they talk about where to buy cheap turpentine. I sort of want to know the cheap-turpentine tricks of investing. You know, everyone asks: what CRM do you use? Where do you get this market data? It's sort of funny being an investor, because you don't really have a water cooler, right? Everyone's always traveling around, these firms are pretty small, and you talk to your partners, but getting best practices from lots of different places in the industry is actually a pretty hard thing to do. So I would like it if, at the end of these interviews you do with investors, you asked them something like that: something very specific about their process that they think they do that no one else does.

47:13
I like it a lot. Is there anyone in particular that you admire from a process standpoint?

47:17
Yeah, it's funny, because it's hard to tell; you don't really hear too much about the nuts and bolts of people's process. But I'll give you someone on my wish list who is not in venture but is probably, in my mind, one of the most fantastic investors of all time, which is Howard Marks. So good luck getting Howard Marks on the show.

47:39

There it is, all right. What investor has inspired and influenced you most, and why?

47:45

It's funny, I would say Howard Marks, but the reason I like him is because he resonates with me: basically, he just doesn't care what other people think, and he thinks that's the most important thing. So that's more someone who resonates with me. Someone who's inspired and influenced me the most, I mean, obviously my partner Mark, but also, you know, working side by side with Naval at AngelList for so many years. We were doing everything together, and just the collection of heuristics he has; he has hundreds and hundreds of heuristics in his head, like if-this-then-that, and they're these amazing rules you can apply to an investment opportunity to give you a really high degree of clarity in a really short period of time. You know, if the company has this, just don't invest; or if they're doing this, just don't invest. Working with him and writing down those hundreds of heuristics over time was really worthwhile, and something I've absolutely carried through to our process today.

48:46
Awesome. And then finally, Ash, what's the best way for listeners to connect with you?

48:52
They can just email me; it's just ash@zettavp.com. We read absolutely everything that's sent to us.

49:01
Well, if you haven't read Ash's writing, you should get into it. His blog is on Medium; he writes very focused, specific content on artificial intelligence, some of the best I've ever come across. Ash, thank you so much for coming on the program. This has been a real pleasure, and I look forward to meeting up next time I'm in SF.

49:21
Yeah, please do come by. I can promise you that I will make you some good coffee, being an Australian Italian. Perfect. All right. Thanks. Bye. Thanks, Nick. Take it easy.

49:36
That will wrap up today's episode. Thanks for joining us here on the show. And if you'd like to get involved further, you can join our investment group for free on AngelList. Head over to angel.co and search for New Stack Ventures. There you can back the syndicate to see our deal flow, see how we choose startups to invest in, and read our thesis on investment in each startup we choose. As always, show notes and links for the interview are at fullratchet.net. And until next time: remember to over-prepare, choose carefully, and invest confidently. Thanks for joining us.