220. Crisis Coverage w/ Ash Fontana – The AI Investing Playbook; Why “Dev Tools” for Data Scientists is the Next Great Opportunity; When Fund Returns Don’t Fit the Power Law; and Ash’s Favorite Investment Heuristic from Naval Ravikant


Ash Fontana of Zetta Venture Partners joins Nick on a special Crisis Coverage installment to discuss The AI Investing Playbook; Why “Dev Tools” for Data Scientists is the Next Great Opportunity; When Fund Returns Don’t Fit the Power Law; and Ash’s Favorite Investment Heuristic from Naval Ravikant. In this episode, we cover:

  • Zetta III was announced recently, a $180M fund for founders building AI-first companies.
  • You went from $60M -> $125M -> $180M… how was the fundraise different this time around?
  • Quickly can you give us your definition of an AI-first company?
  • What will you be doing differently with the new fund and how does the pandemic affect your approach?
  • Tom Tunguz just mentioned that in the data they’re analyzing they are seeing a drop in spend on Machine Learning Infrastructure.  How much of a concern is this to you and your portfolio companies?
  • With the launch of the new fund, you outline focus areas in both applications as well as infrastructure and tools… Is the application layer ready to leverage AI in a significant way or is there still a lot of headway that needs to be made at the infrastructure level first?
  • Carlota Perez has written about technology cycles and how new technologies typically go through an installation phase, w/ rapid development and heavy investment, followed by a crash and subsequent recovery leading to the deployment phase… In your estimation, where are we in the tech life cycle of AI, and is it really ready (or will it be ready over the next 3-7 years) for mass deployment?
  • How effective are the AI models today when much of the input data, generally speaking, is flawed?
  • Talk about the next 3-5 years for Data Science… we’ve seen significant advances in developer tools and systems for software but I still feel like we’re at very early stages in evolution, efficiency and scalability of data science tools/fundamentals.
  • Do your fund returns follow the power law?
  • Part of the advantage to AI-first startups is the supreme data moat that they can build, preventing others from gaining traction w/ competitive solutions. While this is an advantage for the startups that get a head start (and their investors) is there an adverse impact on other startups that are founded later and don’t have the extensive data sets?
  • Many of the startups you invest in are “deep-tech” and will not monetize and grow ARR the same way many familiar SaaS or transactional businesses will.  What are the major gating factors to raise each of a Seed Round, a Series A and a Series B, in these longer cycle tech-first approaches?
  • You’ve created a Playbook on how to build an AI-first company. It’s evergreen, with plans to update it regularly as you work w/ companies… I wonder if you might give us the basics: What do AI-first companies have in their DNA, and when building a company, what’s the sequence and major building blocks required at the early stages?
  • Last time you were on the show you mentioned you learned a lot of great heuristics and mental models from Naval Ravikant. Can you give us a couple of these that have been really valuable in helping you quickly frame startup investment potential?


Transcribed with AI:

Intro 0:03
Welcome to the podcast about investing in startups, where existing investors can learn how to get the best deal possible, and those that have never before invested in startups can learn the keys to success from the venture experts. Your host is Nick Moran, and this is The Full Ratchet.

Nick Moran 0:23
Ash Fontana is back with us, joining us today from San Francisco. He is Managing Director of Zetta Venture Partners, the first fund to focus exclusively on AI-first startups. Ash has worked in PE and investment banking, he founded the company Topguest, and he led the launch of AngelList's syndicates program. Ash, welcome back.

Unknown Speaker 0:42
Thank you very much, Nick.

Nick Moran 0:44
So, a $180 million fund just announced — first of all, congratulations.

Unknown Speaker 0:49
Thank you.

Nick Moran 0:50
That must have been an intense effort. And right at the finish line, of course, the pandemic broke. How did that all come together?

Speaker 1 1:00
Yeah, look, we came right up against it — we closed in March. We could have closed earlier, we could have done this or that. But you know, I think it's always good, as an investor, to realize that process is really important, and you've got to respect people's process. You want to work with people that fully understand what you're doing and your strategy, so that you're all on the same page and aligned from day one. So I think it's always a balance: you've got to give people time to go through that process, respect it, and get to that point of complete alignment. But, you know, we also had to close the fund so that we could keep investing in these great companies. So I think we struck that balance well, but it became a closer call towards the end.

Nick Moran 1:47
So was there a shift in the LP base? More institutional this time, or fairly consistent?

Speaker 1 1:54
I think every fund, over time, has the goal of working with a base of partners that are stable and with them for the long haul, and we're no different in that regard. But, you know, our last fund was 95% institutional, effectively, and this fund is around the same. So it wasn't so much a shift in the LP base; in that sense it was more of the same. A few more partners were added that we think are really great long-term partners. And over time we'd certainly like to work with more charities and endowments — and we're starting to — because effectively that's who we're working for. So yeah, I think it's just sort of more of the same.

Nick Moran 2:43
You know, you've been kind enough to coach and provide some input to me on fundraising and managing a fund. Was there anything you learned this time around, with this $180 million third fund, that was different than what you'd experienced previously?

Speaker 1 3:02
So there were certainly differences in how the fund was raised. It was much more straightforward this time, for a few reasons, and I'll get into a lesson as well — so commentary first and lessons second. Much more straightforward. I mean, one: the more you do what you say you're going to do, the more trust you build with your partners. We set a strategy from fund one — AI, seed, enterprise — and we've stuck to it throughout funds one and two. Every single investment is in a company that is building AI, at the seed stage, with an enterprise or B2B business model. Just doing what you say you're going to do really builds trust with people, and obviously for good reason. The second thing is, by the time you get to fund three, you either have the numbers or you don't — you're either performing and making money or you're not. And we are performing, so it was a much more straightforward fundraise for that reason. The third reason — and I think this one is idiosyncratic to us; it's never easy, and I'm using the word straightforward for a good reason — is that our thesis, to invest in companies building a competitive advantage through data and compounding that with self-learning systems, was a little bit esoteric when we started in 2013. Research into these new methods like deep neural networks was resurgent, even nascent in some fields, and we'd just entered the zettabyte era: 2013 was the first year in which a zettabyte of data went across the internet, which is why we're called Zetta. Now it's more obvious than ever that intelligent systems are an important lever for us to get computers to make good decisions and more efficiently use the resources we have on the planet, to enjoy a higher quality of life. So I think our thesis has just become much more obvious over the last seven years, and it was definitely more straightforward this time. In terms of lessons about fundraising, it was really just a very obvious one: you've got to put effort into the relationships really early — constantly, two, three years before you plan to fundraise. You should be telling people what you're thinking, how you're thinking through what's going on in the market, what companies you're investing in and why. Get people acquainted with your process and your mental models, tell them what you think you're going to do, and then come back to them six months later and tell them what you did — and they'll build trust in you. I just did a lot more of that, and I think we as a firm did a lot more of that this time: much more relationship building, from the very moment we closed fund two to when we wanted to raise fund three. It's such an obvious point, but having the discipline to do it really, really matters — and to do it every single week.

Nick Moran 6:02
So I recall from our discussions, you're doing B2B business models, typically investing $1-3 million, pre-traction, post-data. Is that still the thesis? And how do you deploy $180 million at seed?

Speaker 1 6:16
Yep, it's absolutely still the thesis. But I just want to catch you on this jump-in-fund-size point: it's actually the same number of dollars per partner, given that we're expanding the team and all that sort of stuff. I do think there's a limited amount of money that a partner can deploy in seed-stage venture capital these days, and I think we're at about that number. We were at it in fund two, and it's exactly the same in fund three. So on a dollars-per-partner basis it's the same, and therefore on a companies-per-partner basis it's the same. All we're really doing is having slightly more companies in the portfolio, because we have more people to select and manage those companies. I think, over time as well, you build a reputation with founders for providing value, and a brand in the market, that allows you to make a better case for being the investor of record, or the main investor, in their seed round. And I think we've proven ourselves in a few ways — I'll let founders speak to why we're useful and why they would prefer us to be their main investor — but we've found that over time we've been able to take more of that lead position, rather than a co-lead position. That means you tend to be purchasing a little bit more of a company up front, which means you have to be writing a slightly bigger check. So: slightly larger investments for slightly more ownership of the company, and leading rather than co-leading a bit more. But this is a marginal consideration. Again, it's the same dollars per partner now.

Nick Moran 8:08
Did you develop your entire partnership from within, or did you source partners from the outside? And if it's the latter, how did you think about doing that — bringing somebody external into your system and your culture, and making sure they integrate well?

Speaker 1 8:27
So I'll start by saying I think it's really great if you can build a team from the bottom up. It's really important in investing to be really disciplined, and it's really important in venture capital to have a good culture, so that you work really well with founders and build a great reputation. Different firms approach this in different ways, and lots of firms have fallen by the wayside because one of those two things went awry. Either their disciplined process around making investments changed — they changed their focus, they changed the way they make investments, the way they allocate capital to companies over time, or their portfolio construction — and it didn't work; or their culture changed in such a way that it just didn't fit the founders of the era. So I think it's really important to develop that from the inside, so that you can be really consistent in how you make investments, how you manage investments, and how you interact with founders — both when they're pitching you and once you're on their team, so to speak. Now, it's really hard to develop investment talent. There are so many things you need to learn to be a great partner, and it just takes a really long time, so it requires a real commitment on both sides. If you meet someone at the associate level, at the start of their investment career, it requires a big commitment from them to say, "I want to be an investing partner one day," and it takes a big commitment from the firm as well. So it's really hard to do, but it's great — and you don't need to hear me say this; a lot of the best firms in the world are built from within. I think you just need to have a platform to do that, and we certainly want to do more of it and are on the path to doing that with some people now. That said, to date we have done the latter. My partner Jocelyn was an engineering manager for 20 years — she was at VMware and Facebook, as an SVP of engineering and a director of engineering, and she was named one of the 20 most influential female engineers in the world. She came in having done some angel investing, but really predominantly operating for 20 years. So that can be a good thing to do as well. You just have to be really sure that that person wants to give investing, as a career, a real go, has had some experience and exposure to it, and that you know them really well.

Nick Moran 11:03
So you just got done telling us why people get off track by changing things too much. How are you guys doing things differently with the new fund? And does the pandemic affect your approach?

Speaker 1 11:16
Yeah, so firstly, it's exactly the same: continuing to invest in pre-traction, post-data companies — you've got my broken record. Pre-traction means before they have repeatable revenue, customers, or even a product; we're happy to invest at that stage, and we help companies figure out product-market fit by getting feedback from enterprise buyers, hiring their first salespeople, spending their first dollars on marketing, getting their first big customer contracts. Post-data means we need to see some tractable path to making a valuable prediction from, like, an initial machine learning experiment. So that's exactly the same — that's what we do: pre-traction, post-data, AI-first startups, B2B business models. There's all the jargon. The only thing that's changing — and it's more of a doubling down than a change — is that we're doubling down on the five major research centers for AI. Previously we were very open-minded about finding companies wherever they may be — West Coast, Midwest, East Coast, wherever. We've really found that it's very hard to build the quality of machine learning research team you need to come up with something genuinely novel and valuable outside of one of the five centers of AI research, and those five centers are London, Toronto, Boston, New York, and San Francisco. So what we're doing in this fund — again, it's not different, it's a doubling down — is focusing on those five cities. To do that we have offices in San Francisco, New York, and London, and we're just going to spend all of our time there. Of course, we're still completely open-minded to companies elsewhere — like, we're investing in a company now in the Midwest, and we've invested in companies in Austria and around Zurich — but we're at least going to base ourselves there. Does that even matter in this era where you can't meet people in person? Sort of, sort of not — not really. But we think it matters in the long run to at least have coverage there. The second part of your question was: how are this recession and the coronavirus changing how we invest? The short answer is: not much. We're actually investing at a higher rate than at any time, really, in the last year or two. We are absolutely very, very active in making new investments — that's the first thing to say. Of course, we have different views on the spaces that are maybe not going to fare that well over the next year or two, but those views are not controversial, because what we're finding is that this is just accelerating the inevitable: the inevitable adoption of AI in a manufacturing setting to reduce the dependency on humans handing things to each other in person; or — to bring it down to a very real-world consumer example — reducing our need to go to a shop to buy something, because AI can predict what we're doing next and just ship it to us; or remote collaboration, all the obvious things. The final thing I'll say about how it's changing our investing thesis is that it's giving us even more of an impetus to focus on companies that are just getting started. You know, it's a very good time to work with a firm like us, get a million dollars, and sit in a room and build great technology to solve massive problems.
And what I'm really saying there is: it's not a great time to try to go out and do customer development and have a big event to market your product. That go-to-market phase right now is a little bit tough. But the technology-building phase can still happen. And at the end of the day, when all of this is hopefully over, there will still be a market for really amazing solutions to really difficult problems, like in human health or helping our climate. So it's just as good a time as ever to build those solutions.

Nick Moran 15:39
Yeah, you've talked about both the applications — healthcare and medicine, food, ag, energy, materials, etc. — as well as infrastructure and tools: data quality, developer tools, infrastructure, security, etc. Ash, do you think the application layer is ready to leverage AI in a significant way, or is there still a lot of headway that needs to be made on the infrastructure side first?

Speaker 1 16:07
Yeah, I mean, the answer is both — it's nuanced; it depends on the area. I think the world is eminently ready — the world meaning companies that do real things in the world, like run a grocery store, or manufacture a blood vial, or put together furniture. These companies are very much ready and are adopting — this is not conjecture; it's what I see every day — things like computer vision technology, because we do have cameras, we do have computer vision models that are really accurate, and we do have ways to deploy those models to the edge. Now, there are some gaps there — deploying some of this stuff to the edge is sometimes expensive and consumptive of a lot of power — but for some technologies, like computer vision, or even just really solid data science — whether it's the evolution of actuarial modeling in insurance, or whatever else — yes, enterprises, real businesses, are absolutely adopting that today. Some of the other stuff, like language modeling — using a model to understand complete paragraphs of what a human is saying and then do something with that — is just not really there. We've made some amazing strides in the last year or two, but it's not there. And then some really computationally intensive stuff — stuff that requires deployment right to the edge, the edge meaning running on really small devices without much battery or computing power — sure, there are lots of gaps there. So for some applications, yes, ready to go; for others, not so much. We frame this, in one sense, in terms of risk: if the risk of adopting it is super high — as in, if the AI model gets it wrong, something really bad happens — then no, we're not ready, because these models are not reliable enough yet. Or if the cost of adoption is really high — and cost is a type of risk, because you're putting capital at risk — if that's really high because the computation or something is really expensive, then no, real businesses won't be ready to adopt it. But that's not true for all types of machine learning.

Nick Moran 18:33
So I'm going to put you on the spot here. I just came across this article from Andreessen Horowitz — a pretty critical article, actually, of AI-first companies — criticizing startups in this category for having lower gross margins due to heavy cloud infrastructure and a lot of ongoing human support. They talk about scaling challenges due to thorny problems and edge cases, and they make an argument that there are weaker defensive moats due to commoditization of AI models and some challenges with network effects. I didn't really prep you for this, but what's your take on Andreessen sort of slamming this category?

Speaker 1 19:18
I'd say it's just super defeatist. Like, you read that article, and then you ask: okay, and then what? I think the nuance is — or there are many nuances to be found in — the real-world application of this stuff. As in: yeah, I've met companies, and backed companies, when they're at a lower gross margin because they have a lot of human labor costs. And what did we do? We built machine learning models to automate a lot of that, and now they're 95% gross margin businesses that get multi-year upfront contracts, are at tens of millions of dollars, are valued very highly in the markets, and have a path to profitability — all sorts of objective measures of success. In a sense, that's what my job is: to find a tractable path from one margin level, you know, 30% gross margins, to 80%. The valuation of the company improves commensurately, and we as investors, in a very cold sense, make the delta — and the founders obviously do, too. So, okay, yes, these things are hard, but if you solve them, it's valuable. I don't want to give short shrift to that article — it raises very good points: this stuff is really expensive to build, and companies do need more capital upfront. And yes, a lot of the layers in this value chain — the steps in the value chain of delivering a prediction — are getting completely commoditized. But not all of them. And once you do spend that initial money to label a bunch of data and build a model, sometimes you do get to a degree of automation, and of self-generation of data, that others can't catch up to. So I think it's a good starting point for a discussion, but it's not a complete argument — it's not something founders can really do much with. To that end, I've actually spent a long time writing an article that's a follow-up to that and answers this. Hopefully that comes out soon — I'm just putting the finishing touches on it now. There we go — good timing.

Nick Moran 21:41
All right. So Ash, you know, Carlota Perez has written about technology cycles, and how new technologies typically go through different phases: there's this installation phase, with rapid development and lots of heavy investment, often followed by a crash and subsequent recovery; next is often this deployment phase, which allows a lot of progress to be made in a space. In your estimation, where are we in this tech life cycle for AI? And is it really ready for mass deployment?

Speaker 1 22:17
Yeah — so, thank you. Something implied in that question, which is really important, is that it's inevitable: we will inevitably adopt AI, because it is the thing that gives us the leverage we need from computers. Today, I often say, computers are sort of just fast calculators for us — a calculator quickly calculates the stuff in a shopping cart and the cost of delivering it to us, whatever. They don't really help us make decisions. And so we will inevitably adopt AI to help us make better decisions. The question is when — and yeah, Carlota Perez's model, understanding that installation-to-deployment cycle, is a very good way to figure out a tractable path to adoption, and to figure out when. So I think the nuanced answer to your question is something like: look, it depends on the market and the underlying technology, as I was saying before. If you want to use AI to understand complete paragraphs — like, for conversations like the one we're having — it's just too early. If you want to use it to identify something, to perceive something in an image, that's about right: it's still a little bit expensive in some cases, a little bit unreliable in others, some upfront cost, some ongoing cost, but for a lot of applications it's about the right time. If you want to deploy AI, to use your words, to make a basic statistical inference on a repeated basis very reliably — like price something, you know, price risk for an insurance product for which you have 50 years of data — then you can do that today. So look, it just depends on the underlying technology, the end application, the risk, the upfront cost, the ongoing cost, existing —

Nick Moran 24:27
datasets.

Speaker 1 24:28
Yeah — is there an existing dataset? There's just no one broad answer here. And frankly, that's why it takes us time to figure out, when we meet a company, if the market is ready for the product — if there will be product-market fit, so to speak, sometime in the period after us making the investment — because we have to figure all these things out. I'm not saying we figure these things out from first principles all the time; we talk to customers: if it costs this much, if it's this accurate, if it sometimes gets it wrong but not all the time, is it worth it to you? A lot of the time the answer is no, and a lot of the time the answer is yes. If it's yes, we invest; if it's not, we don't. So it really depends. But I think the broad point is: it inevitably will be adopted in every industry — that's at least what we think — and it's absolutely being adopted in industries in very real ways to make decisions today.

Nick Moran 25:27
So I assume you guys have a market roadmap, probably an application roadmap, probably different tech roadmaps as well. Have you ever been in a situation with a founder where the technology they're working on is really compelling, but maybe, you know, the pool they're swimming in is the wrong place — and you've suggested or guided them to maybe an application, a dataset, a market that's ready for exactly what they're working on?

Speaker 1 25:54
Oh, absolutely — this happens a lot. And, you know, it's often a very collaborative process. It's not like we come down from on high with the answer; we sort of have a hint, or have heard, that what they're working on might not be ready for adoption today — you know, we talked to a similar company, and then called one of their customers, and they said, "No, not really," and we passed on that intel. So we've got a hint here or there, and then we'll brainstorm new ideas with the founder. This is a lot of fun when it happens — you brainstorm new applications, or new markets, or something like that. It happens all the time, and when it happens it often leads to an investment. You know, maybe they come in with one idea, and we're like, "Well, we heard this — what do you think about that?" And they're like, "Yeah, you're right, maybe we shouldn't focus on this." Then we brainstorm for two hours, and then we introduce them to a potential customer, and the customer loves that new idea — the second idea. We hear back from the customer that they loved it, and we're like, "Okay, well, let's do it." And then we invest. So we sometimes invest on the second or third iteration. Yeah, it absolutely happens all the time, and it's really fun when it does.

Nick Moran 27:09
You know, an issue I tended to run into when I was in corporate America for a number of years was just bad data. Right? You're pulling in data from many different sources; there are a lot of different on-prem versus cloud systems that enterprises are using; data is just classified differently, stored differently, sometimes captured differently. A simple example would be allowing alpha characters in a zip code — and then you've got issues. So I guess the question is: how effective are AI models today when, broadly speaking, much of the input data can be flawed, and can be highly variable?

Speaker 1 27:59
Yeah. So I'll start the same way I've started a lot of the answers to these questions: it's nuanced, it depends. In this case: not all data is bad. So yes, where there is bad data, no, this stuff doesn't work — but there's not always bad data. A more constructive way to talk about this is by really admitting that, yes, ETL and data management is still a huge problem. It is still 80% of the job of a lot of people that are paid a lot of money to do data science and machine learning: they come in hired to build these models, but actually they spend 80% of their time cleaning up data. It is a huge problem. And so, for this reason, we still spend half of our time on infrastructure — and that's not just new ways to deploy, manage, and collaborate on machine learning models, but also data infrastructure. You know, we've invested in databases, like a time series database called Crate; we've invested in data catalogs, which help you with this — one company called Promethium; and we've invested in companies that we say are "data enabling," as in, helping you with every part of your data pipeline — the process of building a predictive system — even the very early part, which is getting the data in the first place. And then, interestingly, in sort of a meta sense, a lot of the solutions we're seeing to data cleanup and ETL are actually probabilistic in nature — as in, we've seen a lot of promising approaches to using probabilistic models and machine learning to clean up data and to figure out: where is my data, and what is it? For example, one problem a lot of companies have today is complying with newer privacy regulations, like GDPR and the California regulations. Often step one is: well, where is my PII — personally identifiable information, the data that's subject to this regulation? I don't even know where it is. And sometimes you can find that by running a language model, or a bunch of machine-learning rules, over column names. So in a sense, it's a big problem, and machine learning can actually help solve the problem that enables more machine learning. But I don't want to fall down that rabbit hole here.
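To make that column-name idea concrete, here is a minimal sketch of rule-based PII flagging over a table schema. Everything in it is hypothetical and illustrative — these are not Promethium's rules or API — and a production data catalog would combine keyword rules like these with learned, probabilistic classifiers and sampling of the column contents themselves.

```python
import re

# Hypothetical keyword rules mapping tokens in a column name to PII categories.
PII_RULES = {
    "email":     {"email", "mail"},
    "phone":     {"phone", "mobile", "cell"},
    "ssn":       {"ssn", "social"},
    "name":      {"name", "firstname", "lastname", "fullname"},
    "address":   {"address", "addr", "zip", "postal", "street"},
    "birthdate": {"dob", "birth", "birthdate"},
}

def tokens(column_name):
    """Split a column name like 'Email_Addr' or 'homeZip' into lowercase tokens."""
    # Break camelCase, then split on non-letter characters.
    spaced = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", column_name)
    return set(re.split(r"[^a-zA-Z]+", spaced.lower())) - {""}

def flag_pii_columns(column_names):
    """Return {column: [PII categories]} for columns whose names look like PII."""
    flags = {}
    for col in column_names:
        toks = tokens(col)
        hits = [cat for cat, words in PII_RULES.items() if toks & words]
        if hits:
            flags[col] = hits
    return flags

if __name__ == "__main__":
    schema = ["user_id", "Email_Addr", "homePhone", "zip_code", "order_total"]
    print(flag_pii_columns(schema))
    # -> {'Email_Addr': ['email', 'address'], 'homePhone': ['phone'], 'zip_code': ['address']}
```

Even this toy version shows why the approach scales: the rules run over metadata, not rows, so flagging a warehouse with millions of records costs almost nothing.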
Nick Moran 30:41
Well, I want to hear a bit more about data science. You know, we've seen significant advances in developer tools and systems for software, but I feel like we're still at the very early stages in the evolution, efficiency, and scalability of data science tools. Can you talk to us about what, in your estimation, the next two to five years hold for data science?

Speaker 1 31:04
There's just so much to be built here — so many great tools to be built for data scientists and machine learning engineers to make their jobs better, faster, cheaper, more fun. There's so much to do, and of course that will have so many flow-on effects. You know, I often say tools for data scientists today are like what they were for software engineers 30 years ago. And actually, just going back to that Andreessen article for a minute: building a software company 30 years ago was really, really expensive. You could take that article, put it in a time machine back 30 years, replace "machine learning" with "software," and it would sort of read okay — not totally right, but okay — in that you had to code at pretty low levels to even deploy stuff, you had to put together your own computers, you had to do all this manual work to even get a piece of software out, let alone, in the era after that, put something on the web. We're at the same phase today — there's so much to do. So, to be more specific: we see so many opportunities to build products. At the lowest level, the hardware level: interacting with new types of chips — interacting better with FPGAs, or quantum accelerators — running clusters of these things, orchestration, sending models to different computers at the edge, a multitude of chip types, different environments. The level above that is just managing data, as you said: data cleaning, unification, enriching data with third-party data, synthetically generating data — often a data scientist hits a wall when developing a model because they've run out of data to play with and need a little bit more to get the model more accurate, so synthetically generating that data can sometimes be helpful, particularly for vision data. Security: running models in a secure environment where the data isn't shared between customers is a really difficult thing to do, but thanks to innovations like federated learning you can probably do it — we just haven't necessarily figured out an enterprise-grade product for it yet. And then there's the actual development of these models: making data labeling cheaper, making the training process more automated and consistent. So you train a model, it works, you put it out in the real world, and then it breaks, and it comes back to you. When it comes back to you, does it give you some information about why it broke? Like, what's the equivalent of a stack trace, so to speak, in data science and machine learning? For those who haven't developed software before: when a piece of software breaks, some tools will give you a stack trace, which tells you what part of the code broke while the program was executing. Finding the equivalent of that for a machine learning model is sort of exciting. And then there's putting all this stuff into production and monitoring it on an ongoing basis. I could go on and on — you can go to our website — but the point is, the reason I've had so much to say in response to your question is that there's just so much to be built, and it's so exciting.
And anyone building a tool for data scientists or machine learning engineers — we want to help a lot.
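Ash's "stack trace for machine learning" is an open design question, but a minimal sketch can suggest what such a trace might contain. The wrapper below — class and method names invented for illustration, built on scikit-learn only so the example runs — remembers the range of each feature seen during training, so a bad production prediction can at least be traced back to out-of-distribution inputs. Real model-monitoring tools go much further (drift statistics, feature attributions, data lineage), but the shape of the answer is similar: attach enough context to each prediction that a failure points somewhere.

```python
import numpy as np

class TracedModel:
    """Illustrative sketch: record training-time feature ranges so that a bad
    prediction in production can be 'traced' to out-of-distribution inputs,
    loosely analogous to a stack trace for software."""

    def __init__(self, model):
        self.model = model
        self.feature_min = None
        self.feature_max = None

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        # Remember the envelope of the training data, per feature.
        self.feature_min = X.min(axis=0)
        self.feature_max = X.max(axis=0)
        self.model.fit(X, y)
        return self

    def predict_with_trace(self, x):
        x = np.asarray(x, dtype=float)
        pred = self.model.predict(x.reshape(1, -1))[0]
        # The "trace": which features fall outside anything seen in training?
        out_of_range = [
            (i, float(v))
            for i, v in enumerate(x)
            if v < self.feature_min[i] or v > self.feature_max[i]
        ]
        return pred, out_of_range

if __name__ == "__main__":
    from sklearn.linear_model import LinearRegression

    X_train = [[1.0, 10.0], [2.0, 12.0], [3.0, 11.0]]
    y_train = [1.0, 2.0, 3.0]
    m = TracedModel(LinearRegression()).fit(X_train, y_train)

    pred, trace = m.predict_with_trace([2.5, 50.0])  # feature 1 is far out of range
    print(pred, trace)  # trace -> [(1, 50.0)]: a hint at why the model misbehaved
```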

Nick Moran 34:39
I feel like we could have a whole podcast on data science. And, you know, as an interested tech investor, I can't wait to see the tool set for data scientists get to the place where it is for software developers.

Speaker 1 34:54
Yeah, that'd be really cool. We'd get a lot more models out there and be able to use computers for a lot more things.

Nick Moran 35:01
You know, a bit of a random curveball question here, but do you think your fund returns at Zetta follow the power law?

Speaker 1 35:12
Well, in a strict sense — where we're placed on the distribution of returns in the venture capital industry today — yeah, we're out on that distribution. That's the industry sense. But within our portfolio? Interestingly, no: we don't actually have that dynamic where a very small number of companies are doing really well and the rest of them are not. We actually haven't lost any companies in our first fund, or our second fund, at all.

Nick Moran 35:46
What was the vintage on those?

Speaker 1 35:50
'14 and '17. So we haven't — and so they don't, so far. And there are many, many reasons for that. One, I like to think we have a very disciplined process where we think about all the different scenarios for a company — how it can survive in lots of different economies, but also lots of different degrees of market acceptance. And two, we invest in a certain type of company that, frankly, is just really valuable today. They have unique data. They have really great people, often with very high-level education in machine learning, or mathematics, or physics. And they have models that are doing something — producing a prediction — that's really strategic for an industry. So they're valuable to strategic players, but they're also valuable to big tech companies. There's a lot of embedded downside protection, I guess, in the companies in which we invest, because they're doing something so differentiated and so valuable — they're on the cutting edge of machine learning — that even if the timing's not quite right, we'll be able to get a good outcome for everyone.

Nick Moran 37:09
Well, so you guys are investing in what a lot of people would refer to as deep tech — there are different definitions of that floating around; certainly we have our own here at New Stack. But many of these companies are not going to monetize and grow ARR the way a SaaS business or a marketplace or transactional business would. What do you see as the major gating factors to raise each of a seed round, a Series A, and a Series B in these much longer-cycle, tech-heavy development approaches?

Speaker 1 37:46
Yeah, so I think the first thing to say is that ultimately all companies are assessed on the same basis: whether there will be a sustainable, profitable business. It's a bit annoying to use jargon, or something anachronistic, but the rubber hits the road at some point. I don't actually think that by the Series B stage deep tech companies are valued that differently from a lot of traditional software companies — and it depends on how deep the tech is, right? And look, if you're smart as a founder, you'll always be focused on the sort of investors that have the right expectations for the type of technology you're building and the stage you're at. Like, if you're developing a drug — a therapeutic — you should only be talking to investors that have backed a lot of companies developing drugs before, or developed drugs themselves, so that they know the stages. And I think for deep tech companies — specifically companies building real machine learning models that work, and want to work, in the real world, backed by very novel machine learning research — if you're talking to investors that really get it, they'll understand the milestones. So anyway, your question is: what are those milestones? We published something on this in 2015, called "Growing Up in the Intelligence Era" — I'd just refer people to that. I think at the Series A stage there's only a slight difference between how AI-first companies and non-AI-first companies are assessed. The difference is that investors might ask a few extra questions, like: is there real-world evidence of ROI from using these predictions that you're spitting out? So when they get on customer calls, they'll ask all the usual stuff — is the team good to work with, what's the pricing, are you getting some ROI — but they'll ask a couple of extra questions, like: are the models performing for you? What's really the value of the predicting part of the software, rather than just the getting-your-data-in-one-place or workflow parts of the software? I think there's a big difference in how these AI-first companies are assessed at the seed stage — and that's what we do. So instead of trying to figure out if there's potential product-market fit by matching features to what we hear customers want — you know, "we want a piece of software that does this, or has these features" — which is what you would do in traditional seed-stage software investing, we ask customers if this prediction is going to be valuable to them, and figure out what the potential ROI is. And then at the Series A stage: is it actually there?

Nick Moran 40:35
If it works, how much value would that provide? Exactly — yep, love it. And then you can diligence and vet the team to see if they've got the capability to make it work. Love it. So, you've also talked about this playbook on how to build an AI-first company. I've been on the website, of course, and it feels like it's almost an evergreen process that you're going to update regularly as you work with companies. I wonder if you might give us the basics: what do AI-first companies have in their DNA, and when building a company, what's the sequence and major building blocks required at the early stages?

Speaker 1 41:14
So the reason we say AI-first companies is precisely because it's all about what they do first — you know, what they have in their DNA. And to continue the DNA metaphor, what they express out of that first is a need, a strategic imperative, to build a competitive advantage through data. This affects every decision from the first day of the company onwards. How are we going to allocate capital? As in: we've got a little bit of money, and we're probably going to allocate a bit of that to acquiring valuable data — whether that's paying people to label data, buying a dataset, or spending money on our own team to go and generate some data — rather than spending it on something else, like marketing, or hiring a bunch of software developers to build out a bunch of features in the product, or building a website. A lot of companies we meet are just having someone go and build the website — and they even outsource that. It's so funny: 15 or 20 years ago, when I was investing in technology, the idea that you'd outsource your website was so crazy; now it's just, well, that's probably one of the easier things to do, so why not? So it affects how you allocate capital. It affects who you hire: do you hire a machine learning researcher first, or a machine learning engineer, or a software developer, or a salesperson? An AI-first company is probably going to hire a machine learning researcher first. It affects how you develop your product — which features you build first. AI-first companies prioritize features that collect more data from the user of the product. For example: ask them, "Hey, we said you should contact this lead — was it a good lead?" Or, "We gave you this recommendation to get up and walk around — did you do it, and how did it affect your activity score and your health?" — like a health app or something; not a great example, but they prioritize features that get feedback data from customers. And when AI-first companies do their first marketing campaign to customers, they're really conscious of promoting the differentiated value of their product that comes from these predictive models: "we can help you make this prediction that you can't otherwise make reliably." So AI-first companies just take different first steps to other companies.

Nick Moran 44:02
You know, last time you were on the show, you mentioned you learned a lot of great heuristics and mental models from Naval Ravikant — just to tie it back to the original episode with you. Can you give us a couple of these that have been really valuable in helping you quickly frame up startup investment potential?

Speaker 1 44:20
Yeah, I mean, you know, being joined at the hip and getting coffee every day with Naval — he generates a lot of ideas. So I could go through lots of little heuristics and ideas and random thought experiments we had — it was a really fun time, and there's a lot there. But in a sense, what I learned about investing goes back to all the basics of early-stage investing, which is: access, tech, product, team. And that's it. Tech: was it really, really hard to build? Product: is it really, surprisingly pleasurable to use? And team: are they just brilliant — like, are they some of the smartest people you've met in the field? So it's very simple heuristics, but very extreme manifestations of those things. Like, Steve Jurvetson actually once said: if I'm surprised by something in a pitch — if I've never seen it before — I'll probably invest in it. Because if you think about it, at this stage he's seen 20 or 30 years of pitches; if he hasn't seen it before, it probably is novel — his sample size is so big. What I would always zero in on was the simple stuff, just extreme manifestations of it. Again, to emphasize it: was the tech really hard to build? Was the product surprisingly pleasurable to use — everything works so smoothly that it's on another level; people use the term "surprise and delight"? And the team — do they just blow your mind when you meet them? So it was sort of simple stuff, I think. And — less about Naval, more about AngelList — the thing you learn working there, and I certainly learned working there once you see the numbers and the sheer volume of opportunities that come through that platform, is access. You have to see a lot to have a chance, at the early stage, of seeing something really, really good. You really have to be seeing thousands and thousands of companies a year to find the good stuff — to find people that are working on really novel ideas and that have the timing right. So yeah, there are a few things.

Nick Moran 46:46
Well, I love it. Yeah, you know, at New Stack one of the things we ask our team is: in what way is this team world class — better than anyone else? One of many questions, but if it can't be answered, then it's usually not a fit. Ash, what resource — could be a book, blog, video, article — have you found really valuable that you'd recommend to listeners?

Speaker 1 47:15
Yeah, so I think I'll stick to my field here, which is, you know, understanding intelligent systems. What I find incredibly useful is a couple of people who really specialize in this field of neurophilosophy — reading their books. Patricia Churchland wrote sort of the first complete textbook on this, called Neurophilosophy. That's really good. Going back further, there's Francis Crick. Daniel Dennett has written some great books — looking at my bookshelf here, the one of his I liked last was From Bacteria to Bach and Back, about the evolution of minds. Ramachandran's work is fascinating — he does a lot of work on what you'd call phantom limb syndrome, and mirror neurons, and things like that. Jeff Hawkins is a more modern mind; Richard Dawkins, and Minsky, and Chomsky — all of these people have done so much work on understanding the mind, and their work gives me a lot of clues into what sort of intelligent systems we can develop next. To be clear, it's not about building a system in software that emulates the human mind; it's about figuring out new forms of intelligence that do things we can't do. So I have just gotten so much out of that body of work. The other body of work — the periodical I really like reading — is Nature Methods. This is not Nature magazine; it's a magazine by the same publishing group, called Nature Methods, and it's actually about tools. And tools really matter: without good tools we can't do things. Tools are levers; they help craftspeople do their work. Reading about all sorts of new tools being developed, in all sorts of fields from physics to biology, is really fascinating when that periodical comes out.

Nick Moran 49:23
Well, Ash, what's one thing, you know, you need to get better at?

Speaker 1 49:28
So I find myself writing up the same advice for founders over and over again, and I think I should just put some of that into, like, a playbook — I need to consolidate some of it. One, so I can get it to them more quickly, and there's not a bit of delay every time while I spend, you know, that night writing it up; but two, so that I can improve on it, get their feedback, and iterate. And these are things like: what questions do I ask in, say, an interview? How do I assess if this marketing campaign is working? What should I be measuring here? What's the best solution for this? I think I just need to get better at codifying, improving, and categorizing all that stuff.

Nick Moran 50:17
Do you have a prediction on the over-under for the quarantine period?

Speaker 1 50:23
Oh, I really don't like to prognosticate, so I don't have a prediction per se. But what I will say is that even the quarantine period to date — you know, whether you think about it on a per-city basis or, as I do, more on a national and global basis — has already been so long that we have catalyzed a structural shift in demand for products in various industries, and in employment. And it's going to take a really long time, perhaps years, to find a new normal. So I'm not going to predict how long it's going to go on for, but I will say it's gone on for long enough that I think there have been structural shifts that require a new normal, so to speak, and it'll take a long time to find it.

Nick Moran 51:11
And finally, Ash, where should listeners go for more info on you and Zetta?

Speaker 1 51:18
Oh, sure. I'm Ash Fontana — A-S-H F-O-N-T-A-N-A — on Twitter and LinkedIn, so you can find me there. And then zettavp.com — we have the playbook there, and we publish a bunch of articles on how to build an AI-first company.

Nick Moran 51:33
Love it. Ash, it's always a pleasure to connect. I appreciate you joining us here on short notice and, you know, giving us the detail on fund three.

Unknown Speaker 51:41
Thank you, Nick. Looking forward to the next one. All right, take care.

Nick Moran 51:50
That'll wrap up today's episode. Thanks for joining us here on the show. And if you'd like to get involved further, you can join our investment group for free on AngelList: head over to angel.co and search for New Stack Ventures. There you can back the syndicate to see our deal flow, see how we choose startups to invest in, and read our thesis on each startup we choose. As always, show notes and links for the interview are at fullratchet.net. And until next time, remember: over-prepare, choose carefully, and invest confidently. Thanks for joining us.