Rodrigo Liang of SambaNova Systems joins Nate to discuss Why AI is Still Being Underhyped, Innovating at the Hardware Layer, and Why the Future of AI is Open Source. In this episode, we cover:
- The rise of neural nets and the demand for computing power
- Transformer models vs. legacy models
- Why ChatGPT's introduction was game-changing
- Predictions for the future of generative AI
- AI's role in the digital age
- Which professions are at risk?
Guest Links:
The host of The Full Ratchet is Nick Moran, General Partner of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area.
To learn more about New Stack Ventures, visit our Website and LinkedIn, and be sure to follow us on Twitter.
Want to keep up to date with The Full Ratchet? Subscribe to our podcast and follow us on LinkedIn and Twitter.
Are you a founder looking for your next investor? Visit our free tool VC-Rank and we’ll send a list of potential investors right to your inbox!

0:18
Rodrigo Liang joins us today from Palo Alto, California. Rodrigo is the co-founder and CEO of SambaNova Systems, a hardware and cloud-hosting platform for AI models. Collectively, SambaNova has raised over $1B from SoftBank, Google, Intel, and BlackRock, and was most recently valued at $5B. Prior to founding SambaNova, Rodrigo spent nearly two decades across various engineering roles at Oracle and Sun Microsystems.
Rodrigo, welcome to the show!
0:46
Thank you so much. Thanks for having me, Nate.
0:48
You bet. Well, there's been a lot happening in AI of late, so I'm excited to go deep with you on a number of topics as well as SambaNova. But first, I want to start with your own individual journey. Can you give us the condensed story of your upbringing and your path to becoming an entrepreneur?
1:06
Sure. I was born in Taiwan but grew up in Brazil, and that's probably one of the things people ask me about a lot with the company name: "SambaNova, what is that all about?" It's a shout-out to where I grew up. I went to Stanford for undergraduate and graduate work in electrical engineering, then graduated and started working in the early '90s at Hewlett-Packard, building high-performance chips and systems for enterprise applications. I spent a number of years doing that before joining a couple of startups, the last of which was started by one of my current co-founders, Kunle Olukotun, a longtime professor at Stanford. That was back in '99, when he was building the first multi-core chips in the world. I ended up at Sun Microsystems, then Sun and Oracle, for 15 years, building all of the enterprise-class servers that Sun shipped. Then 2017 rolls around, and there was just immense opportunity to come out, look at the next wave of technologies, and figure out what it would take to power them. By then, you could see the advancements happening in artificial intelligence and the workloads that were showing up. So Kunle and I, along with one other professor at Stanford, Chris Ré, who is just a brilliant, brilliant man and a brilliant professor, started thinking about how to create a new platform that would allow us to tackle the workloads of the future. And that's how SambaNova got created.
2:42
Got it. And for those hearing about SambaNova for the first time, can you describe more about what the business does and how your approach differs from that of some of the companies we're all well acquainted with, such as OpenAI or Cohere or other large language model companies? Specifically, where do you fit relative to the LLMs, and where are you positioned in the stack?
3:08
Right. So we are a full-stack, hardware-software integrated platform. I was building high-performance processors and servers with multi-core technology back in the late '90s and early 2000s with one of my current co-founders, Kunle Olukotun, who's a professor at Stanford, and then spent a number of years building these high-end systems for Sun, and eventually Oracle, which acquired Sun, before we decided in 2017 to start this company. So I got together with Kunle and one other professor at Stanford, Chris Ré, a brilliant, brilliant person, and the three of us started thinking about what the next generation of computing was going to look like. And that's the genesis of SambaNova.
3:52
And I guess why, at the time, was current hardware not suited for the workloads of AI, and specifically for the models of the future, back in 2017-2018? I'd love to hear more about the gap that you recognized in the market, specifically at the computation layer, the hardware layer, and ultimately how that laid the foundation for what SambaNova is today.
4:18
Right. If you think about when the internet showed up, the world of computing had to go to this very wide, scaled-out type of model: a lot of cores, a lot of CPUs to handle the internet traffic. That's really when you start to see these multi-core architectures from Intel and AMD, the CPUs, really take off, and ultimately it was in service of a new workload, the internet, that drove those hardware architectures. Over the last few years, what we started seeing is the rise of these neural nets, and the really interesting part about neural nets is that they actually don't run on traditional CPUs very well; the hardware architectures just don't lend themselves to them. But for some of the neural nets, what was interesting was that graphics processors were actually quite nice for a subset of the neural net exploration at the time. So this is where you see NVIDIA GPUs start to become a player in the early AI days, because their graphics hardware was actually a nice substitute for some of the things that were not present in traditional CPUs. And as we've now seen, with the market continuing to evolve, and certainly in the most recent days with these large GPTs, even more so, there are a few trends you start to see with these models: the models are getting bigger, the datasets were getting bigger, the amount of computing required to train a good model was getting more and more demanding, and the cost of doing it was skyrocketing. So right around 2016-2017, you saw this thrust of investment going into chip companies to try to figure out how we actually go and create new technologies to power what the world was seeing as the workload for the next 10, 15, 20 years. And that's ultimately the genesis of the excitement around building silicon that powered artificial intelligence workloads, of which we certainly were a part in that period.
6:43
And I feel like most of the attention is
6:47
currently dedicated towards the large language models themselves, whether they're closed source or open source, and perhaps later in the conversation we'll talk about which one you ultimately think will win out. But very little attention is dedicated towards the hardware layer of generative AI. You mentioned the GPUs with NVIDIA, and they've been doing incredibly well. But how does the actual hardware architecture itself differ in terms of what SambaNova is building relative to the GPU and the CPU?
7:18
One of the real advantages we have, and this is research that started at Stanford and has continued since we started the company, at some level for the last five, six years, is the ability to start from scratch and think about the problem from first principles. That's what we were able to do starting in 2016-17: really think about the problem, but start with the workloads, these artificial intelligence workloads, these neural nets, and think about what they want. Instead of going back to architectures that are 20, 25, 30 years old (CPUs and GPUs have been around for a long time) and trying to figure out how to evolve those legacy architectures to support the new workloads, we asked: if I could start from scratch and create a hardware platform, chips, systems, and so on, to power these workloads for the next 20 years, what would it look like? That's really how we approached this problem. At SambaNova, we started by looking at a broad range of neural nets, big, small, vision, language, science, all sorts of different things, and thinking about the trajectories these artificial intelligence models want to take, not just for today but for the future. Then we started breaking those down into first principles of what's happening at the computational level and what's required at the semiconductor level to support that. So what you see in the SambaNova chip is a combination of all that research, condensed into the most efficient way for us to create a custom piece of silicon that's focused on artificial intelligence and can scale and be flexible across a broad range of neural networks. That's ultimately why we created our chip, called the Reconfigurable Dataflow Unit. There are two pieces to that term.
Dataflow is basically the architecture for how you get the most efficiency in computing AI. And reconfigurable gives us the flexibility of actually changing the hardware to map itself to a broad range of models, including vision and language and science and time series, and so on. That's at the core of one of our technology innovations, and it's why we're able to deliver such a significant improvement in time, in cost, and in performance.
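The dataflow idea Rodrigo describes can be sketched in a few lines of ordinary Python. This is purely a conceptual illustration, not SambaNova's software or API: the point is that operators form a pipeline through which data streams from one stage to the next, rather than separate kernels that each round-trip results through a host.

```python
# Conceptual sketch of dataflow execution: operators are stages in a
# graph, and data "flows" straight through them. All names here are
# invented for illustration; they are not SambaNova interfaces.

def matmul(w):
    """Return an operator that multiplies its input by weight matrix w."""
    return lambda x: [[sum(a * b for a, b in zip(row, col))
                       for col in zip(*w)] for row in x]

def relu():
    """Return an elementwise ReLU operator."""
    return lambda x: [[max(0.0, v) for v in row] for row in x]

def pipeline(*stages):
    """Chain stages so each intermediate result streams directly into
    the next operator, as in a spatial/dataflow architecture."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# A tiny one-layer network expressed as a dataflow graph.
w1 = [[1.0, -1.0], [0.5, 2.0]]
net = pipeline(matmul(w1), relu())
print(net([[2.0, 1.0]]))  # one pass, no per-kernel host round trips
```

The contrast with the GPU-style model he mentions is that here the composition of operators, not a sequence of individually launched kernels, is the unit of execution.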
9:56
So how does that manifest in terms of cost savings? For example, we've seen figures of $100 million to train the new models of today, like the recent GPT models. If you were to compare and contrast, a model that would take $100 million to train on the standard NVIDIA GPUs, what would it cost to train on SambaNova's chips?
10:22
Well, there are several things tied to that question which I think are really important for your audience to understand. The first part is what you said: pre-training these models is just extremely expensive, $10, $50, $100 million to do by yourself. What you need is, one, the expertise to build a model; two, the datasets to be able to figure out how to train it; and three, the time and money to actually compute this thing, which will take 6, 9, 12 months to train properly. And after that, you have to continuously do it to keep up with the industry. So that investment you see people having to make is just not realistic for everybody in the world. Maybe the top few companies in the world can afford to do it, and they can generate vast enough value to do it all by themselves. But even for some of the largest companies in the world, it just doesn't make sense; it's too expensive for what they need it for. So at SambaNova, we do several things. Yes, we have the hardware that allows you to take these very large models and collapse the cost structure by a significant amount. But probably even more important is that, as a full-stack company, we provide not just the hardware but the actual models themselves, because we've pre-trained them on your behalf. If you think about these GPT models, say a 176-billion-parameter model, it may take you hundreds of GPUs and hundreds and hundreds of hours to train one, dozens and dozens of machine learning experts that you have to hire to manage that model and its data correctly, and I don't know how many terabytes of data you need to aggregate to get the right result.
All of that, we will give you as part of the package, because we pre-train these models on open-source data (we'll talk about why we do that), so that a customer of ours comes in and gets a hot start on their AI journey. So here's the hardware, the computation, the resulting speed and performance, and the cost benefits of having something that's significantly more efficient. But more importantly, SambaNova, as a full stack, will show up with a pre-trained model for you, state of the art for the needs you have, so you have what's ultimately best in class out there, ready for you to start piping in your data. That model then becomes a custom model for your business, for your people, for your organization, for your customers.
13:11
And the cost structure democratizes it, not just for the Fortune 5 or Fortune 10, what have you, but for mid-market companies, even startups in some cases? Yeah, exactly.
13:22
That’s exactly right, that we we view, we view AI as parallel to the internet. And maybe this transition might be even bigger than the internet. And so it’s technologies that is going to touch every industry is a technology that’s going to touch every company in every industry is one that’s going to touch every department within every company, right. So if you think about the long term potential of this technology, the way we see it is every knowledge worker on this planet is going to get it 10 Next productivity boost, because they all have their own personal assistant next to them. Whether in the legal department in the finance department, marketing department in the software coding department, in the sales department, it does not matter what you are working on. Every single person on this planet that’s doing this type of work will have their own personal assistant, doing most of this assembly work for you, analysis work for you generative work for you. And if you don’t like you just ask the machine to do it again for you. And three seconds later, you got another version of it, right? So the this is the this is the potential the technology that you are going to enable your workforce to be 10x more productive in every department in your company. And for that you need a technology that you can actually own and actually create value over the next 510 or 15 years so that you can actually use it as a competitive advantage in your industry.
14:53
Do you feel like any areas in AI right now are overhyped? You made an incredibly bold statement that AI may even surpass the impact of the internet, which I think many believe and can make the case for. But there seems to be so much entropy in the market, capital flooding in, and people trying to make heads or tails of: is it overhyped? Is it not? And if it is overhyped, which areas are overhyped? So for someone who's been so deep into it, long before the craze around the applications of ChatGPT and so on, with this sudden surge of interest, are there any areas that you feel have over-rotated in terms of the hype cycle?
15:45
Yeah, a few years ago I used to say that we're witnessing the fastest industrial revolution the world has ever seen. I used to say we're just witnessing it, because things are changing so fast, and people used to say that's overhyped. And then ChatGPT shows up, and 100 million people logged on to it within a couple of months. You look back in history and you just have not seen things with that level of engagement in such a short period of time. So you scratch your head and ask why. Because if you think of the technologies across our history, only so many things in the world touch every human on the planet. The internet is one. Cell phones get pretty close. But here's the thing with artificial intelligence and the way it's been deployed. When cell phones started, adoption was limited by how quickly you could actually deploy cell towers; there's a physical limitation to it. When the internet was showing up, if you recall, it was limited by how quickly you could get those DSL lines to the homes, and people needed those data lines because dial-up was just not fast enough. So if you look at the major adoptions of technology over history, cars again, the building of roads and all sorts of things, there was infrastructure that took time to deploy before the user could actually get on and get the benefit. Artificial intelligence is sitting on infrastructure that already exists, on datasets that have already been accumulated, on applications you're using every day. And out of the blue, you have this thing that suddenly super-boosted everything you're doing by 10x. The data is already sitting there; now I have it at my fingertips.
The application I'm already using, I can just generate more reports and more insights out of that same application, 10x faster, because something is actually helping me generate content. All of that runs on existing cloud infrastructure, existing hardware infrastructure, existing software infrastructure, existing networks. It all already exists. So if you put that lens on and think about the benefits of such a transformative technology, without the barriers of having to lay down wires, build towers, build roads,
18:40
right, all that distribution is already there. Yeah.
18:43
Right. And so then we start thinking with that lens, you say, Well, is it overhyped? Or is it underhyped?
18:51
So would you take the standpoint, then, that it's actually underhyped relative to what we're seeing today?
18:58
I think we're grossly under. I think there are some areas where we're grossly understating the potential of this technology.
19:06
What are a few of those? Like, what are some of the areas that you think are not being talked about enough?
19:13
So let's talk about things they're talking about, but that aren't talked about enough, and probably aren't advancing at as fast a pace as I think they could. Let's talk about healthcare. People always ask: will the machines replace the humans? That's not my view of it at all. My view is: can the machines increase every human's productivity by 10x? So let's talk about medical imaging; it's pervasive. Right now, we all know there's a shortage of healthcare professionals in every discipline: people to help with primary care, people to handle your specialty care, people looking at radiology imagery, people working on drug discovery, the entire pipeline of healthcare. There's a shortage in every single one of those places. Now, take a very simple, very narrow view of a particular discipline: let's use AI to go and identify cancer cells in a particular image. That requires high-resolution imaging; it requires machine learning models that can interpret those high-resolution images at a high enough accuracy that you can actually detect the cells; and it requires you to be able to do it consistently with all the data that exists across the world. Those are the things required. Well, guess what: we can actually do that today, and we can do it at the quality level of some of the best professionals, human clinicians, who exist today. Again, it's not to replace the human. But think about it: what if I made this technology available across the world to thousands of clinics where you don't have the best access to those professionals, but you still need that care? How can you elevate things at a societal level?
How can you elevate all of this by getting that technology deployed worldwide, so that you can get ahead of it instead of having to live with the fact that we just can't hire those people in those regions? These are things I look at and say: we're just at the beginning. You can take this technology and democratize not just artificial intelligence for corporations; you can democratize access to healthcare, access to all these different things, worldwide. Because now you can use these machines to replicate knowledge that already exists in some part of the world and deploy it in places where it's just hard to find a human to cover it.
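As a toy illustration of the screening workflow Rodrigo describes: flag suspicious regions of a scan for clinician review. The scoring function below is a stand-in for a trained high-resolution imaging model, and all names and numbers are invented for this sketch; it is in no way a diagnostic tool.

```python
# Toy sketch of AI-assisted screening: a "model" scores image patches,
# and only high-confidence findings are escalated to a human clinician.
# The scoring function is a placeholder for a real trained network.

def score_patch(patch):
    """Placeholder model: mean intensity stands in for a malignancy
    probability from a trained classifier."""
    flat = [px for row in patch for px in row]
    return sum(flat) / len(flat)

def screen(image, patch_size=2, threshold=0.8):
    """Slide a window over the image and flag patches whose score
    exceeds the confidence threshold, for clinician review."""
    flagged = []
    for i in range(0, len(image) - patch_size + 1, patch_size):
        for j in range(0, len(image[0]) - patch_size + 1, patch_size):
            patch = [row[j:j + patch_size]
                     for row in image[i:i + patch_size]]
            s = score_patch(patch)
            if s >= threshold:
                flagged.append(((i, j), round(s, 2)))
    return flagged

# 4x4 "scan" with one suspicious bright region in the top-left corner.
scan = [
    [0.9, 0.9, 0.1, 0.0],
    [0.8, 0.9, 0.0, 0.1],
    [0.1, 0.0, 0.2, 0.1],
    [0.0, 0.1, 0.1, 0.0],
]
print(screen(scan))  # → [((0, 0), 0.88)]
```

The design mirrors the point in the interview: the model does the exhaustive first pass over every scan, and the scarce human expert only reviews the flagged regions.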
22:03
I want to rewind to something you just mentioned. Back in 2018, when some would posit that AI was overhyped, that it was another part of a hype cycle like AR, VR, and so on, were you just waiting for a moment like ChatGPT to come along and onboard 100 million users in a couple of months? What was your reaction when an application launched and people finally realized this is not hype, this is very real? It seems like you've had this belief for a long time, but the broad market maybe didn't believe it until people actually had an application they could interface with and realize the power of the technology. So I'm curious, back in 2018, how did you think about the adoption of this technology and that inflection point in the market? Did you know that something like ChatGPT would come? What gave you conviction that the market would finally realize the power of this technology, and that your conviction would be proven true?
23:16
You know, 2018 started with convictions around three major trend lines that were undeniable, and I think they continue to be true even today. One, data is growing every single day; that trend line has been there for many years, so you get more and more data every single day. Two, these models have grown. In order to train a high-accuracy model, the model sizes have grown; you see these parameter counts continue to grow, and you see places like Anthropic continue to push the sequence lengths. So you've got parameter counts growing, sequence lengths growing, token counts growing; in multiple dimensions, these models just grow, and you can see what we're seeing today, which is that you can't keep up with it. You just cannot keep up with the current infrastructure. So that was one trend line. But the reality in 2018 was, we had just started the company; we were a couple of years in. It was, from the beginning, an ambitious program, which, thankfully, we powered through, and we'll be shipping for a while. But if you think about when we started, we went after three of the hardest things you could possibly do. One we already talked about here: we build our own custom silicon processor, in the style and manner that microprocessors are done for high-end servers; we built that high-end AI processor custom for these types of problems. That's one, just building these high-performance chips, and it's hard. Two, we decided to build the compiler, to eliminate the need to do CUDA at all.
Because at scale, one of the things we realized is that if you're thinking about democratizing AI, it's too hard for companies to hire enough CUDA experts to go program this stuff by hand. You really need a full end-to-end compiler that can do the mapping of the models to whatever hardware you want, in particular our hardware, at scale. Once you get to hundreds, if not thousands, of sockets, the work to actually do that mapping is incredibly difficult, and it's incredibly difficult to find people to do it. The third one is training these giant foundation models. Not only are they expensive; getting them right is also difficult. Look at today: we can talk about the BLOOMChat announcement that we did last Friday, but if you look at the world of 176-billion-parameter models and how many groups have trained one properly, there are just not that many. Yet that's the type of interface and interaction people now expect: you should be able to interact with these models and get proper results back, and only a handful of us can do it. So we embarked on this journey where any one of those layers is a startup on its own. OpenAI is focused on models; you've got people doing compilers; you've got people doing chips. We embarked on this journey to tackle all three and put them together in a very integrated platform, so that you didn't have to worry about putting it together. And in 2018, remember, we were nowhere close to shipping, so we were heads down, just working on the tech, getting it all right, getting it optimized. It actually was fine; we were out there talking to folks, and the US government has been incredibly supportive of us, so they engaged with us early. But for the most part, we were just not ready to have the conversations that we're having today.
Those conversations, as you referenced, have been accelerating, and frankly, we've been flooded by people who have now seen what ChatGPT can do. That's one of hundreds of things AI could do for you, but it was so powerful that it opened people's eyes to what AI could do for them. Except now they want their own, and they want it private. And that's where we come in.
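The compiler challenge Rodrigo outlines, mapping a model graph onto hundreds or thousands of sockets automatically instead of hand-writing CUDA kernels, can be hinted at with a toy placement pass. This is an invented, greatly simplified sketch (real dataflow compilers also handle tiling, routing, and pipelining, and none of these names are SambaNova's), but it shows the flavor of the mapping problem:

```python
# Toy sketch of a compiler's placement step: assign each operator in a
# model graph to one of N hardware sockets, greedily balancing the
# estimated compute cost. Largest ops are placed first onto the
# least-loaded socket.

import heapq

def place(ops, n_sockets):
    """ops: list of (name, cost) pairs. Returns {socket: [op names]}."""
    heap = [(0, s) for s in range(n_sockets)]  # (current load, socket)
    heapq.heapify(heap)
    assignment = {s: [] for s in range(n_sockets)}
    for name, cost in sorted(ops, key=lambda o: -o[1]):
        load, s = heapq.heappop(heap)       # least-loaded socket
        assignment[s].append(name)
        heapq.heappush(heap, (load + cost, s))
    return assignment

# A tiny transformer-flavored graph with made-up relative costs.
model = [("embed", 2), ("attn", 8), ("mlp", 6), ("norm", 1)]
print(place(model, 2))
```

Even this greedy heuristic hints at why the problem gets hard at scale: with thousands of sockets and real communication costs between them, the search space explodes, which is the work he says the end-to-end compiler has to absorb so users don't program it by hand.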
27:21
You talked about all these challenges. Was there ever a moment you thought SambaNova would fail? Do you remember any really dark times over the past six years of building the business when something felt extremely existential?
27:36
Yeah. As an entrepreneur and as a startup, you have to have equal amounts of bullishness and realism. On any given day, you're out there, you look around, and you realize there's an elephant stampede and you're in the middle of it. We are playing in a space that the biggest players in the tech industry are all fighting over; think about the largest players, the NVIDIAs, the Metas, some of the biggest companies in the world. And here's a startup trying to build a technology that could potentially carve out space for itself. So any given day, it's just a really interesting market. We've got some really cool technology that people have now adopted and used, and that's great validation. But before you get there, you always have to have a healthy dose of: will this work? And what conviction do you need to actually push it through? Because for any startup in spaces like this, where you're brand new on the frontier of tech, you just don't know. It doesn't matter what you say, you don't know what's coming. There are startups that are replacing existing things: I'm just going to do that widget cheaper, better, faster. And then there are startups saying: I'm going to push the frontier to something the world hasn't seen. We had to have conviction that this is what the world wants. If you're in the latter case, there is no way you can convince anybody that you knew for certain all this was going to happen, but you have to have the conviction to say it's worth exploring, it's worth trying, it's worth the journey. And five or six years later, we're seeing that this is exactly what it was.
29:25
Over the past six years, what would you say is the moment that was the most impactful or positive for you? As you reflect on your journey and think about some of the biggest wins you've had, what stands out as most salient?
29:42
Well, deal one. Deal one is huge. When somebody comes in, looks at your tech, runs it through its paces, and comes back and writes you a big check for it, there's nothing more concrete to a company, and certainly to entrepreneurs and founders, than being able to tie together the innovation, the hard work, everything you did, to a problem you can solve for somebody, enough that they write you the check. And for us that was Argonne National Laboratory and the US government. Since then we're throughout the national labs, Lawrence Livermore, Pacific Northwest, Los Alamos, Sandia, and other organizations these days. But that first deal: somebody puts you through your paces, looks at everything, turns over every rock, and comes in and says, yep, that technology is worth it, and here's a check. It's an incredibly validating moment, because until then, you don't know. You don't know if what you're building is right.
30:59
Yeah, I was a founder in another life, in robotics, and a lot of people ask me the same question I just asked you, and I actually give the same answer, because there's no more visceral feeling than creating something that was up here in your mind, manifesting it into the real world, and having someone see enough value to exchange a large sum of money for it. It's a very difficult feeling to try and describe, because I think the only people who are truly empathetic with it are those who have actually done it before.
31:29
That’s right. That’s right. And you can do everything perfectly right? Or you can, you can imagine the product correctly, you can execute the product perfectly. You can market it perfectly. And your timing was just wrong. Right, there’s so many things that can go wrong, right? Like your timing was wrong. The market shifted. Also, I mean, all sorts of different things that could happen in the world upon an entrepreneur that you just don’t know, right? You don’t know. And especially when you have hardware involved with the cycle of development is not in months, and it’s in years, right? When we start building chips, it takes years to actually get to a tape out, they take years to go through the FAB, and then you got to build it into system, then you got to ramp the systems in the factory, then you got to ship those systems. I mean, that whole window of time, if you got it wrong, the time to correct is another several years, right. So that’s the challenge with technologies like this that you don’t know. And if you’re wrong, it could very well be the life of the company, because you don’t have enough time to correct. And so for us, we just will, the first moment when a very interview your organization’s more credible than the US government, by the way, when the US government comes and puts you through. And these are incredibly, incredibly experienced, very knowledgeable folks in the area of machine learning and artificial intelligence. They’ve been doing it for years, when they actually put you through their paces. And they are doing some most leading edge research out there. And they say, Yep, this is good. And there’s nothing I mean, it’s incredibly satisfying to know that, you know, the thing that you work hard for working out and using using it to solve some real problems.
33:27
Yeah, I mean, you guys have come a long way, and you recently announced BLOOMChat. Can you share a little bit about BLOOMChat, what it is, and also why you chose to utilize open-source technology?
33:42
Yeah, BloomChat. So last Friday, in collaboration with Together.xyz, we announced that we had trained a version of the 176-billion-parameter BLOOM model, called BloomChat, that now sets new records in multilingual capabilities. And so we’re really proud of that. We do think that, at least in the enterprise space where we are, most of our clients are multinational. Their businesses are multinational, they operate in many different countries, and they want artificial intelligence for their businesses that enables them to operate those services in many different countries. And so multilingual is really, really important, and unfortunately, many of the current technologies are not that good at multilingual yet; it’s getting better. So we put one out there into the open source community for a couple of different reasons. One, we do think multilingual models will be important, and we want to continue to promote the advancement of many different languages across the world, because we think AI is going to be pervasive, not just for English-speaking countries, but for every language on the planet. So we have to promote those languages. But then two, the second thing that I think is really important for your audience to understand is that as these technologies evolve, as artificial intelligence evolves, one of the most important decisions companies have to make, a strategic decision, is: how do I see artificial intelligence? How do I see AI over the next 10, 15, 20 years? Do I see it as a tool, or do I see it as an asset? When I say it’s an asset, it means I see AI as a vehicle that allows me to invest in it and create value in that model, year after year after year. A tool is something I use and get some benefit out of; you know, I pay a little bit of money to save lots of money. That’s a tool.
But if it’s an asset, am I creating value in that asset, because it will become a valuable piece of my business over the next 10 to 15 years? And at SambaNova we’ve been thinking about that latter part. We think that most companies will use AI as an asset, an asset they will want to own. The way you make it an asset is you take these models and pipe your data in. Your data is your customers, your products, your services, your people, your pricing; all your secrets are in that data. And you’re piping it into a model so that it becomes your model. And as soon as it’s your model, you’re going to want to own it. You don’t want to put it back into the open domain, you don’t want to give it to another vendor to be their IP; you want to own it. And so this is where we arrived at the open source model. Our belief is that people are going to want to own the model that is derived from their data, and for them to own it, we need to start with something they can know. So we lean into the open community, which is where SambaNova has decided to lean, and we train those open models on a public data set to a quality level that’s acceptable for businesses as a starting point. We give these companies a hot start on learning from their own data, but from a very high quality model from the beginning. BloomChat is an example: we released it into the open, we trained it to incredibly high quality across multiple languages, and now we’re using it as a hot starting point that businesses can take and pipe their data into, and SambaNova will give the client full ownership of the resulting model in perpetuity. Even if they terminate working with us, they get to keep that model.
And this is how you’re able to accrue value year after year, whereas with other players, if you end the relationship, you start back at zero. So we think this is an incredibly important thing for your listeners to understand: we believe that model ownership for businesses will be incredibly important. And because of that, we have to start with an open model, an open model that your customers can feel comfortable with, verify, contest, and own, with all the things that come with ownership. And that’s why we contribute to the open community: we want to see that open community be vibrant and continue to drive innovation, and that’s going to help everybody across the world.
38:30
So what do you think is the future then for closed source models such as OpenAI’s?
38:38
Well, I think there’s always going to be a place for that. If I look at the parallels, there are incredible businesses created during the internet age whose secret recipe was their secret recipe. I think of Google and how they do search. It powers so much of what the world does, but that technology and how they do it is owned by one player. So you definitely have that for certain use cases, more public use cases, and maybe generally more consumer-facing use cases. We just think that what’s different about the world of AI is that the capabilities, the services you’ll be able to provide, the knowledge and insights you will gain, the products you will be able to build and offer to your clients are so tightly integrated with the data you have that it’s going to define your company. And once it defines your company, you’re going to want to take ownership of it. That’s the difference here. The data that is yours, the data about your products, your customers, your services, your people, all of those things that constitute the essence of your business, you’re going to pipe all of that into a single model, which is basically the condensed version of everything you know in the company. I don’t think you’re going to let that go out of your sight and give it to somebody else to make their models better. I think every business is going to look at that and say, nope, I will own that, and I will own it in perpetuity. So for that, I need somebody that’s going to give me ownership of the model: as soon as it’s trained on my data, it’s mine. And so that’s what we say: your data, your model. We start with a model so you get a huge hot start.
So you don’t have to incur all the costs of pre-training these things, like I talked about earlier, and in the end you have a model that’s yours, with all your data piped in. You can do all the things that you’ve now started to get exposed to with ChatGPT and those services, everything you can do with these AI models, but it’s yours, and you can keep it forever.
41:08
So, to be clear, you think OpenAI in the long run will struggle to generate meaningful revenue, at least from the enterprise segment, because enterprises will want to own their own model and keep it proprietary?
41:20
I don’t know; it’s hard for me to say. OpenAI has done such a great job actually getting that technology out, and there are so many different ways to monetize it. And I do think the future is going to have multi-tier use cases: some things are really private, some things are very public, just like you see with cloud, right? It’s hybrid; people use private cloud and public cloud. I think the world is going to be hybrid, and there will be different use cases for different things. So I can’t say that they will struggle. They’ve done a tremendous job actually building the technology and starting to gain customer traction, so I can’t speak to that. What I can speak to is our value proposition: we think there will be many people in the world who want to create a model that they own, where their data is in the model, giving them full control of those insights and capabilities within their own secure environment. And that’s where we come in. We train those models, we deliver those models, we manage those models for people, and we allow those companies to actually get those services without having to, one, hire hundreds of machine learning experts, which are hard to find anyway; two, buy all these GPUs, which are incredibly expensive, and train from scratch; and three, take months and months. I think somebody had a report that it could take as much as 18 months before you get the first model trained correctly and ready to deploy, whereas we can show up ready to go on day one, saving a significant amount of time. Those are the barriers to doing it yourself, and doing it yourself is really the only credible alternative if you want to own the model that results from training on your data.
43:12
Rodrigo, if we could feature anyone on the show, who should we interview and what topic would you like them to speak about?
43:19
You know, look, there are so many people in this space doing really amazing things. If I really want to talk about open models, the CEO of Together.xyz, one of our partners, is a tremendous guy. They’re advancing the state of the art in open models, and you’re going to find that organizations like that, squarely in the center of the open community building high quality models, are going to be key players in our industry for years to come. So that’s somebody really driving the state of the art in open models. Alex Ratner is another: co-founder and CEO of Snorkel AI, a really interesting company that is automating a lot of what it takes to create high quality data so that you can get the best models possible. Again, this is all part of this ecosystem of how you leverage the data you have to create the most impactful models that can drive your business, and on the data management and labeling side, that’s a great company and somebody worth talking to. So just off the top of my head, a couple of players doing some really exciting things.
44:27
Rodrigo, do you have any habits, tactics, or techniques that you would constitute as a secret weapon?
44:34
Ah, yeah. Tactics, techniques... one thing I’ve done for many, many years is stay low key; you’ve got to clear your mind. And we’ve seen this in spades: on any given day there’s so much noise, so much stuff happening, that it feels like we can’t keep up. So many things are changing, and that’s just in our industry, let alone all the macro-level changes. And what I can do is take some time and bring everything back to the local level, back to: what are we trying to do here, today, right now? I think that actually provides a sense of focus around what we can do today, where we can have impact, what we can change. It’s so easy to let these expansive forces pull us into all these things that can basically render us paralyzed. So I always try to be aware of what’s happening around us. I was talking to somebody recently about how mind-blowing the six years that I’ve run this company have been. Of course we had the surge of all the excitement around AI, but then a global pandemic that shut us down for a couple of years. Then we had the macro-level issues between the US and China and all the geopolitical tensions there. Then we had massive supply chain problems, global supply chain problems where you couldn’t get cars and all sorts of things. Then you have the Russia-Ukraine war, the first war in continental Europe in who knows how long. And then recently, not too long ago, we had bank runs; for a startup, it’s incredibly bad to see Silicon Valley Bank, one of the most prolific and long-standing banks in Silicon Valley, go through what it did. I just think about all these things that are happening.
These are all part of the hurricane that we’re all in. Sometimes it’s nice to be able to bring it all back and center: this is what we can do today, this is what we can do in our little region, in our little thing. Let’s make progress on that.
46:57
Do you meditate to gain clarity? Or do you have anything you do specifically to prevent the mind from wandering?
47:05
You know, I exercise a lot. In a way it doesn’t matter what the exercise is, but I do think getting out and actually making time every day matters. There are so many people trying to fill your mind with more knowledge, more news, more of everything; there’s so much stuff to keep up with. But I make time every single day, whether it’s for meditation, or for running, or playing golf or tennis, whatever; it doesn’t matter, but I make time every day to do something that allows you to take care of your body, take care of your mind, and get yourself away a little bit from all that is happening. I think it gives us a lot of focus on what we can do, what we can impact, what we can change. We’re aware of what’s around us, but look, you’re not going to change the whole picture all at once, right? So you take the time to figure out what you can contribute, one step at a time, one day at a time. And you’ll get there.
48:02
Yeah, and lastly, Rodrigo, what is the best way for listeners to connect with you and with SambaNova?
48:09
Yeah, well, most of my updates show up on LinkedIn, so people are more than welcome to connect with me or with SambaNova on LinkedIn, and we’re also on Twitter as well. We try to keep our engagement very live and active on social media. And despite the fact that we’ve raised a lot of money and gotten a lot of visibility, I still view us as a small startup that’s engaging with folks, engaging with technologists, engaging with people who want to act and bring new technology into the world. That’s at the core of the company.
48:53
Well, congrats on all the success thus far; six years in, a $5 billion valuation and a billion raised isn’t a bad start. So as humble as you’re being, you guys have achieved a lot in a short period of time. Thanks again for doing this; this was incredibly insightful and a lot of fun.
49:09
Thanks so much, Nate. Yeah, it’s been a pleasure. And I look forward to having this conversation again soon.
49:15
I hope so. Thanks again.
49:22
All right, that’ll wrap up today’s interview. If you enjoyed this episode or a previous one, let the guests know about it: share your thoughts on social or shoot them an email. Let them know what particularly resonated with you. I can’t tell you how much I appreciate that some of the smartest folks in venture are willing to take the time and share their insights with us. If you feel the same, a compliment goes a long way. Okay, that’s a wrap for today. Until next time, remember to over-prepare, choose carefully, and invest confidently. Thanks so much for listening.
Transcribed by https://otter.ai