Summary: From Microsoft to MLOps: Entrepreneur Diego Oppenheimer on Building & Investing in the Future of AI (YouTube)
5,103 words - YouTube video
Speaker 0 Diego, welcome to the inspired execution mini series.
Speaker 1 Thanks, Chad, for having me.
Speaker 0 Now, you've had a phenomenal career, right? Started as a developer, you know, you were at Microsoft, you founded Algorithmia, and then that got acquired by DataRobot, and they're doing some really interesting things in this space. And now you are part of a VC firm that is only focused on AI investments. Right? Through all this, through your illustrious career.
Speaker 0 Right? What has been, like, one defining moment or memory, you know, where you were like, yeah, that was the inflection point for my career?
Speaker 1 Yeah. Well, first of all, thank you for the kind words. That's greatly exaggerated, but I appreciate it. Actually, I can tell you exactly when I got into data. So, you know, I moved from Uruguay to the US to go to college.
Speaker 1 And I actually started out as a mechanical engineer. And after a semester, I switched into information systems and data. And it was specifically because, well, actually after a year, because I did an internship with a BI company. And I remember, I think it was, like, my second or third day at the job. You know, my job was to just go with one of the consultants and serve coffee.
Speaker 1 Right? Like, I mean, I knew nothing at that age. Right? I mean, I know that. Yeah.
Speaker 1 Like, yeah. Whatever you're supposed to do. Yeah. Go look at what's going on and learn.
Speaker 1 And I remember, and this is gonna age me a little bit. Sadly, you'll probably know what I'm talking about. So they were pitching Crystal Reports. Or, I don't know, like, Cognos.
Speaker 0 I know Crystal Reports really well.
Speaker 1 Yeah. Exactly. And so, you know, I remember the guy I was going with, the consultant, he goes in to see this senior executive, and he's like, give me, like, your... I can't remember if it was, like, an order spreadsheet. It was, like, some data source.
Speaker 1 And he was like, I'm gonna tell you something about your business that you don't know. And I was like, wow. That is some confidence. Right? Like imagine going into somebody's office and just saying I looking them straight in the face and saying, I'm gonna tell you something about your business that you don't know.
Speaker 1 And he grabbed some data, and he kinda put it together in a report and kind of, like, mushed it around, and was like, you know, did you know that your sales in this area are not doing... I can't remember exactly what the output was. But I remember just sitting there being like, I wanna do this. This is what I wanna do. Like, this idea that I can affect business decisions, understand what people are working on based on the data that they're collecting, and interpret it. I was just, like, mind blown.
Speaker 1 And it was so intense for me that not only was the internship great, but, you know, I went back, changed my career, you know, did my undergrad, grad school, and all that. And my entire career has been in data since that day. And so that was the actual, like, defining moment of it. It was just somebody who was extremely cocky, but knew what they were talking about. And it was amazing. I was like, why would anybody wanna work on anything else?
Speaker 0 Wow. That is an awesome story. My god, you've dated both of us by talking about Crystal Reports. Definitely. So, let's talk about the evolution of ML.
Speaker 0 You started the MLOps company Algorithmia in, you know, 2014, long before the AI boom. Right? I remember at DataStax, I joined in 2019. In 2020, we were talking about how we wanted to be a data company, and we were looking at wanting to be an AI company. And we started looking at this thing called MLOps.
Speaker 0 Right? Where did you get the idea of of doing MLOps, and why did you think it was so important so early?
Speaker 1 So, you know, I get to blame, or give credit to, my cofounder, whichever one you wanna use. Oh, right. We'll get along great. But it's the same coin. Yeah.
Speaker 1 Yeah. Exactly. It's, like, a different side.
Speaker 1 You know, one of the things you learn is that being the absolute first to something has its challenges. You know? But to be a little bit more concrete: when I was at Microsoft, I was lucky enough that I actually worked on the v1 team that created Power BI and the analytics engine behind it. And one of the things that was popping up, I was a product program manager on Excel, was, you know, we were building a lot of descriptive analytics tooling. And for the first time, predictive analytics were starting to kind of, like, pop up.
Speaker 1 Like, you know, something very basic. Right? Like, we're gonna go build a trend line. We're gonna go figure out, like, you know, that next step. And one of the features that I ended up creating and owning and developing for Microsoft was something called automatic pivot tables.
Speaker 1 And, automatic pivot tables, if you go into Excel and kind of, like, click on it, so like
Speaker 0 It's like a big freaking feature just
Speaker 1 Yeah. And what was exciting about that point was that it was the first time I got exposed to grabbing essentially academic work. Right? Because this came out of Microsoft Research. Right?
Speaker 1 So people at Microsoft Research had been working on these kind of, like, data pilot, automation, stuff like that. And I was like, hey, this is amazing. Like, we want this in Excel. This is exactly what we wanna ship. And we have a billion users, and, like, who wouldn't wanna ship that?
Speaker 1 And they're like, oh, yeah. Here you go. Here's some, like, you know, code, and you can just put it in. And that was the first time where I was like, well, this is written in, like, MATLAB, and I'm supposed to ship this to, like, a billion users? Like, that's insane.
Speaker 1 Like, that's never gonna happen. Like, that just won't work. Right? And I remember at that time, my cofounder, who I'd been friends with for years, he'd been constantly saying machine learning is the future, like, people don't understand. You know, he was doing his PhD in AI, and he's like, and it's so hard. He's like, I'm so frustrated because it's so hard to put any of it out there. It's all academic code, and nobody's hardening it, which means nobody will be able to use it.
Speaker 1 Right? Nobody will be able to. And, you know, it was the first time that you were essentially dealing with, you know, moving from deterministic to probabilistic code, and what breaks?
Speaker 0 Yeah.
Speaker 1 Right? Like, that was it. And so the thesis behind Algorithmia was: the future of code is probabilistic, a bunch of stuff breaks, and, like, we probably should go fix those things that break. And so some things still work. Right?
Speaker 1 You can still check your code into Git, and you can still do, you know, connectivity into different data sources. But a bunch of things, everything from how do you deploy it, how do you run inference, to how do you monitor for errors, all this stuff. And so essentially the idea really was: the future is gonna be machine learning, and we probably should go fix some of the traditional software engineering components that just break down when the code turns probabilistic.
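The deterministic-to-probabilistic break Diego describes is easy to make concrete. Deterministic code either works or fails loudly; a model can keep returning answers while silently degrading, which is why inference monitoring becomes its own ops problem. A minimal sketch (all names here are hypothetical, not any particular vendor's API):

```python
# Sketch: wrap a model's predict function and watch a rolling confidence
# window -- a crude stand-in for real production drift detection.
from collections import deque

class InferenceMonitor:
    """Tracks recent prediction confidences and flags suspicious drops."""

    def __init__(self, predict_fn, window=100, min_avg_confidence=0.7):
        self.predict_fn = predict_fn
        self.confidences = deque(maxlen=window)
        self.min_avg_confidence = min_avg_confidence

    def predict(self, features):
        # predict_fn is assumed to return (label, confidence)
        label, confidence = self.predict_fn(features)
        self.confidences.append(confidence)
        return label

    def drifting(self):
        # Flag when the rolling average confidence falls below threshold.
        if not self.confidences:
            return False
        avg = sum(self.confidences) / len(self.confidences)
        return avg < self.min_avg_confidence


# Usage with a stubbed model that has become unsure of itself
monitor = InferenceMonitor(lambda x: ("spam", 0.55), min_avg_confidence=0.7)
for _ in range(10):
    monitor.predict({"subject": "win a prize"})
print(monitor.drifting())  # True: average confidence 0.55 < 0.7
```

Nothing here errors in the traditional sense, which is exactly the point: the pipeline "works" while the quality quietly breaks, so the check has to be built in deliberately.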
Speaker 0 And so basically your take was: there is an SDLC, a software development life cycle. What does that look like in the ML world? And let's go and actually create that.
Speaker 1 Yeah. Exactly. That was really kind of, like, the... you know, and you know this well, it's a startup. Right? So you twist and turn and, like, take a wrong corner here and there.
Speaker 1 But, like, yes. If you abstract, you know, you look at 8 years from the outside, like, that's what it looks like.
Speaker 0 Yeah. You know, it's funny. We'll talk about this in a second. Tell me, if you had to do this again now, with generative AI being as big as it is, would you do it? Or would you say, no, the game has changed so significantly that I will not do GenAI ops.
Speaker 0 Yeah. Or, you know, I heard about a company that said they're doing vector ops, and I'm like,
Speaker 1 it's a little early
Speaker 0 for that. Right? I mean, let's at least get a few apps out there for the next 2 years. Right? But would you do that now with your own experience?
Speaker 0 So so
Speaker 1 so I think, so so the answer is probably yes, but I don't know if there is a, like, you know, I'm still deep in the, like, is it that different? Like, there is stuff that's different. Right? But, like, you know, like, I I was kinda joking around where it's, like, you know, you know, to me, like, people start calling it LLM ops, and I'm like, you know Yeah. Yeah.
Speaker 1 It's still, you know, to me, this is still MLOps. You know? If you need a new market map, that's fine. But, like, let's start actually looking at it. And, you know, some things did change. Right?
Speaker 1 Like, you need different hardware, or more hardware. So, like, some components change, but not, like, that much from the workflow and process perspective. And we're still using a lot of the same tooling. And so, to answer your question about would I go into it: like, I'm actively working with companies in that space. So I'm, like, a huge believer.
Speaker 1 I think the future of software is, you know, AI.
Speaker 0 In January.
Speaker 1 Maybe, you know? And, you know, like, I think now it's like, well, duh. But, like, I've been saying that for 10 years, so I get a little bit of credit for that. For sure. For sure.
Speaker 1 And, you know, to me, it's always about, like, what is stopping us from getting x y z into production? So, the one thing that I always really enjoyed about the kind of, like, BI sector was that where the value got created was really very clear. Right? Which was between the chair and the screen. Right?
Speaker 1 They looked at something. They made an analysis, and they made a business decision. That's where the value happens. Right? So...
Speaker 0 Correct.
Speaker 1 All the data sources, the databases, all the displays and dashboards, all of that is in service of somebody interpreting data and, you know, making that into a decision that would affect the business, you know, hopefully in whatever direction you wanted to go. In the world of machine learning, it's always about, like, hey, where do we get to that prediction, that inference point? Right? Where we are going to either make a prediction, create something, or, you know, automate a workflow.
Speaker 1 So, like, those value points, to get to them, there's a lot of complexity in the ops. And so anything in service of that, I think, is a worthy mission to attack.
Speaker 0 I will take a little contrarian view, just for our listeners. Of course. I really liked the web. And I really liked mobile. And I really liked the way cloud developed.
Speaker 0 Right? It was all focused on the kinds of apps that people would build. They were not very focused on DevOps and data engineering and MLOps and things like that. I somehow feel that we really need to create space for people to do experimentation
Speaker 1 Mhmm.
Speaker 0 Now. And I think you would not disagree with me there. But what you are working on now, let's call it, you know, GenAI ops. I'm just gonna put it as a big thing, not LLM ops, because, you know, GenAI apps are a lot bigger than LLMs. And as you do this new ops piece, you're building companies that are going to ship features next year that people will use in '25 and '26 and '27.
Speaker 0 And I think a lot of people get confused in the market, where they start thinking they need the ops stuff now. No. Use OpenAI. Use Gemini if you want. Go and use LangChain. Use LlamaIndex. Slap some stuff together.
Speaker 0 You know, use our database. Right? That's my point. Go and experiment like crazy, because the industry is robust enough that the technology is coming. Is that fair?
Speaker 1 Yeah. I think it's fair. And actually, I don't think it's even that contrarian to what I said. So, like, you know, one of the things I kind of talked around is, like, if you had pitched a startup 20 years ago, you'd be like, oh, I need money for servers, and I need to run this thing, and so on. And then, you know, what happened with the cloud providers was, like, anybody with a credit card, right, could get up and running over a weekend.
Speaker 1 Right? And so, like, the cost of getting started, prototyping, getting an MVP running, that has not existed in AI until now. One could argue ChatGPT was the catalyst. Right? Like, that really opened the mind.
Speaker 1 But, like, ultimately, you know, ML models were single task completers. Right? Up until these kind of foundational models came out and got popular. And access to AI and machine learning was fairly complex. And so you did need the ops.
Speaker 1 You did need, like, you know, to be able to get up and running. You needed your things. Now... yeah, that's exactly it. Now I would argue, like, just slapping, you know, an OpenAI API into your application will not get you... it will get you an experiment, and it will get you a great prototype and a great demo.
Speaker 1 It will not get you a product. And so there is still kind of, like, a pretty deep level of, okay, what are the new problems? Right? And, like, I'm...
Speaker 0 What are the problems coming up after your experiment?
Speaker 1 Exactly. And, yeah, you know, I'm very biased, because, you know, I cofounded a company recently in kind of, like, the risk and safety space. But, you know, one of the things that I talk about a lot is that, you know, these models, because they are generic task completers,
Speaker 0 Yeah.
Speaker 1 That is their power and their risk. Right? They can do a bunch of stuff. And maybe I don't want it to do a bunch of stuff, so I need to put guardrails around them. And so, like, that's kind of an ops operation.
Speaker 1 It's an application ops operation, where it's like, hey, I can't let this thing go off the rails and do what I don't want it to do, because then it won't complete my product workflow. And so I think the ops is still necessary, because that's what gets us software that's shippable.
Speaker 0 Yes.
Speaker 1 But where that ops sits has shifted.
Speaker 0 Yes. For sure. And my take is, a lot of people think about the ops before they actually do the experiment. And my take is, let the experiments happen while people are thinking about the ops. And I absolutely agree.
Speaker 0 So let's talk about Guardrails AI. You just founded this company. You and I just entered an elevator. You punch 4, I punch 7. We've got 20 seconds.
Speaker 0 What's the pitch?
Speaker 1 So, one of the great things about these models is they can do a lot of things. One of the problems with these models is they can do a lot of things. And so how do you actually make sure that the output of these machine learning models is what you want, right, from a safety perspective, from a risk perspective, that they're operating against your product principles, and that you can have some level of guarantees, right, that it's going to behave in a certain way? So guardrailsai.com is an open source library that allows you to build railing, literal railing and guards around, you know, the outputs of LLMs, to keep them on track in terms of, like, the behaviors that you want. So that's the pitch.
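As a rough illustration of the idea in the pitch, not the actual Guardrails AI API: a guard is essentially a set of checks, structural, schema, and content, that an LLM's raw output must pass before it is allowed to continue through the product workflow. A minimal, self-contained sketch:

```python
import json

def validate_llm_output(raw_output, required_keys, banned_phrases):
    """Return (ok, result): parsed output if every guard passes, else the failure reason."""
    try:
        data = json.loads(raw_output)           # structural guard: must be valid JSON
    except json.JSONDecodeError:
        return False, "output is not valid JSON"
    missing = [k for k in required_keys if k not in data]
    if missing:                                 # schema guard: required fields present
        return False, f"missing keys: {missing}"
    text = json.dumps(data).lower()
    for phrase in banned_phrases:               # content guard: no disallowed content
        if phrase in text:
            return False, f"banned phrase: {phrase!r}"
    return True, data

# A well-formed model response passes all three guards
ok, result = validate_llm_output('{"answer": "42", "source": "docs"}',
                                 required_keys=["answer", "source"],
                                 banned_phrases=["ssn", "password"])
print(ok)  # True
```

A failing output would instead trigger a retry, a repair prompt, or a refusal, which is exactly the "keep it on the rails" behavior the pitch describes.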
Speaker 0 That is awesome. I'm sure a bunch of our listeners are gonna go and definitely check it out. So you are now an investor. We were talking about this before we started the podcast, you know, but you continue to feel like you're a builder. Right?
Speaker 0 You're incubating companies. You're helping them grow, things like that. So, tell us a little bit about how Factory is doing it differently. And then, what is the most exciting trend... what are you most excited about in 2024?
Speaker 1 So, yeah. So Factory was founded by, you know, two operators. One of them, Christopher Ré, who's a professor over at Stanford and has founded multiple AI startups. And the idea was to create a kind of, like, studio where we could, based on AI research, kind of build out what the future of AI should look like. Or, you know, maybe mold what we want it to look like.
Speaker 1 And so the idea was, we can look at what's coming around the corner from a research perspective and be very, very academically based. Understand, you know, what's a technology? What's a feature? What's a product?
Speaker 1 What's a company? And try to make that determination. So my day to day is very much focused on going through that journey of understanding with our portfolio companies, or even pre-portfolio. So, Guardrails is a good example, where Shreya, the CEO, she's, you know, this insanely capable ML engineer. And she came up with this problem.
Speaker 1 Right? Well, not came up with it. Like, she got obsessed with this problem. And so even previous to the launch of the open source package, you know, we spent weeks talking about what that might look like. And then, you know, built the package, and then founded the company and started the company and stuff like that.
Speaker 1 And so that kind of workflow is how we think about building, you know, the future of companies. Another, at this point, pretty famous company is Together.ai, which was also incubated inside Factory, and which is, you know, right now probably a big deal in the world of open source machine learning and AI and GenAI. And so our goal, you know, from the partners' perspective, is exactly that. Like, how do we... So that was the first part of the question. Sorry.
Speaker 1 I went long. You asked about, like, what excites me. So, you know, I think the exciting thing to me now is that foundational models have really opened, in my opinion, a positive Pandora's box of imagination. And it's one of these things where, to me, we will look back in time and say this was a kind of massive shift in the world of technology. Right?
Speaker 1 In the same way that the first OS's came out, the Internet came out. Like, what we're seeing right now, in my opinion, is pretty much, let's fit this new thing to what we already knew. And the next stage is the next level of interfaces. And I don't mean this in a demeaning way, because I actually think there are, like, a million businesses that are fantastic, where you literally can slap an AI API onto a business, and it's gonna be spectacular. But, like, there's a whole new world of interfaces, of abstractions, of human computer interactions that are about to open up, based on the idea that now we can express ourselves to computers in our own natural language.
Speaker 1 Yeah. And, you know, so we're moving from, like, punch cards, to code, to moving mice and keyboards. And now, you know, we're interacting in our human language with computers, and they're interacting with us. And this is opening a whole new level of interfaces and workflows. And what excites me is that I don't know what's on the other side of this, but it's a whole new world, and it's exciting. And so I think, like, those new experiences are the ones that I think are close.
Speaker 1 Right? Where we're seeing them happen in real life. And I think, for me personally, this is the greatest technological advance I'll see in my lifetime. And so it's cool to be part of it.
Speaker 0 For sure. By the way, there's no shame in beautiful haikus. Right? There's absolutely no shame there. But I read something the other day, which is, somebody said that they brainstorm with ChatGPT now.
Speaker 0 And this was actually a Google employee, so they actually do it with Bard. But, you know, you keep it on, you keep talking like you're brainstorming, and it's recording everything. And then you say, summarize for me. And you can go deeper and say, do some research on this. So it's kind of a cool thing.
Speaker 0 You're changing the way you work, but you're also changing the way you think, right? And a lot of people don't spend a lot of time on that. It's not just changing the way you work, it's also changing the way you think. And that's your point, right? Because when you went from punch cards to a graphical user interface, from the green screen to this beautiful thing, you actually changed the way you work. Right?
Speaker 0 And so it made a difference. So I'm super... by the way, I would agree 100%. This is gonna be the most excitement that anybody will feel in this lifetime, right? This is gonna be the biggest industrial revolution that mankind has ever gone through, right? And given our history, you know, as a human race, right?
Speaker 0 We will emerge really victorious, right? It'll be ups and downs, but I loved what Jensen from Nvidia said. Right? Which is: when was the last time a company that became more productive did layoffs? Yeah.
Speaker 0 Never. Whenever they get more productive, they hire more people. Right? So let's increase productivity. Let's come up with amazing new revenue models, amazing new ways of doing business and interacting with people, and everything will start taking care of itself.
Speaker 0 Right? So I have a very optimistic view of this, because I really do think that, you know, humans with AI will actually be the ones that'll drive the change. And I don't think it's that far away, because unlike every other wave, this is gonna accelerate. Yeah.
Speaker 1 Look, I'm very much... like, I said this at a talk I did the other day. And, like, I have no stock in OpenAI, so I'm not shilling it, but I'm just saying, if you have any readers, listeners here who have not interacted with some of these foundational models, like, this should be the first thing you do. Like, you should drop everything you do after this. I think we have a
Speaker 0 requirement here. Yeah.
Speaker 1 And just...
Speaker 0 If you have not played with ChatGPT, you should not listen to the podcast.
Speaker 1 Yeah. And it's amazing. I use it every day. I have multiple LLMs in my workflows on a daily basis. Everything from helping me write product specs, to questioning my product specs, to, you know, helping do research.
Speaker 1 Like, I do feel 10x productivity because of, you know, the kind of, like, assistant. It's, like, the promise of AI, or, you know, some people have called it augmented intelligence instead of artificial. And it's really true. Like, I do feel augmented. I feel I can do in 2 hours what I used to do in 6.
Speaker 1 I can, I can be more thorough? I can have better insights and, like, why wouldn't you wanna do that for yourself? Like, right? Just from a self growth perspective, like, I think it's, you know, it's kind of a you're you're tricking yourself by not working with this stuff.
Speaker 0 Oh, this is awesome. Man, we're definitely gonna meet offline, but it seems like we could do a podcast and talk about this for, like, another hour.
Speaker 1 Yeah. I love it.
Speaker 0 It's been awesome. So, Diego, I'm gonna go to the rapid fire part. And so quick questions, quick answers. What's a problem humanity is facing that you want AI to solve first?
Speaker 1 I think quicker ways of doing retraining of humans is really something that AI can be extremely good at, and it's very necessary, because we're about to enter into a pretty aggressive creative destruction period thanks to this technology.
Speaker 0 That's awesome. What's one thing in your day to day life that you want AI to automate?
Speaker 1 Anything that has to do with scheduling. I'm so bad at it.
Speaker 0 Something unrelated to AI that you're passionate about?
Speaker 1 Cooking. I love cooking, and I don't think AI is ever gonna get in the way of that. It's art.
Speaker 0 Yeah, it is art. But it does actually put a really good recipe together and make you order it from DoorDash or Instacart.
Speaker 1 Sure. But the process of you experimenting... the suggestions might come from them, but, like, the taste and the kind of, like, workflow, it's, you know, it's closer to the art than the science.
Speaker 0 What's the go to dish?
Speaker 1 My go to? Okay. So I grew up in South America. So, like, anything on a grill is, like, you know, kind of like religion to me.
Speaker 0 Yeah. That's awesome. One word you would use to describe the best tech leaders. One word.
Speaker 1 Persistent.
Speaker 0 One word to describe how you feel about the future of AI.
Speaker 1 Amazed?
Speaker 0 Yeah. This is awesome. Diego, it has been an absolute blast. I think our listeners are going to love this episode. I really appreciate you making the time.
Speaker 0 And, I look forward to doing this again, but also spending some time and hanging out with you. Thank you again.
Speaker 1 Thanks for having me. Appreciate it.