Speaker 0 What is maybe the most unexpected way that you use AI in your regular life?
Speaker 1 Yeah. I don't know if I should say this, but writing speeches for weddings.
Speaker 2 We both are answering machines to our voice bot.
Speaker 3 Hey. How's it going?
Speaker 4 To heart this strong pad of 1.
Speaker 3 Nice. This was, I think, programmed to talk about.
Speaker 5 Now we have it open every day to help us code.
Speaker 6 Helps us code. It's great at coding. It's just making me code infinitely faster.
Speaker 1 You can describe to the AI what change you want to make to your UI, like "build dark mode," and it'll edit all the code to implement that feature.
Speaker 7 Eventually, these tools will make humans more and more like narrators, like people who are describing what they want, and then some models will actually create something that's even better than what humans would do if they were doing it themselves.
Speaker 6 And I think the fundamental problem-solving skills are always gonna be really needed. And understanding the technology and its constraints and how to leverage it is gonna be so important.
Speaker 8 What are current AI tools good at?
Speaker 3 In a general sense? Yeah, really, really good question. I think one of the things that's really counterintuitive about generative AI tools is that they're really good at what we thought they would be really bad at, which is sort of creative, storytelling work: AI for creativity.
Speaker 0 The goal is to make software so anyone can make South Park from their bedroom. For example, you can make a photo of me, Eric, and Rihanna playing volleyball on
Speaker 3 the beach. I like this one.
Speaker 9 That one's, that one's my favorite. Or I like this one.
Speaker 0 I like this one.
Speaker 7 We are building truly human-like AI voices, something that's very conversational, like humans.
Speaker 1 Without AI, the voices sounded horrible. Now it's just indistinguishable from a human voice.
Speaker 6 For us personally, what it's amazing at is semantic search. That's something that didn't really work before: just taking a random piece of text and finding relevant things.
Speaker 0 All LLMs have a great ability
Speaker 6 to read. They're pretty good at being able to take, like, arbitrary data and be able to answer questions about it.
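The semantic search the speakers describe can be sketched as: embed the documents, embed the query, rank by similarity. A real system would use a learned embedding model; here a toy bag-of-words vector stands in for the embedding so the example is self-contained, and all names are illustrative.

```python
# Minimal semantic-search sketch. The embed() function is a stand-in:
# in practice you would call an embedding model instead of counting words.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. Swap in a real model in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str]) -> list[str]:
    # Rank documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "how to bake sourdough bread",
    "fine tuning language models on new data",
    "volleyball rules for beach games",
]
print(semantic_search("training models on fresh data", docs)[0])
# → "fine tuning language models on new data"
```

With real embeddings, the ranking also works when the query and document share no exact words, which is the "didn't really work before" part.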
Speaker 9 Because there's so much new data, like, it's really hard to adapt. And so we are currently fine-tuning all our models on these new types of data to make it as accurate as possible.
Speaker 10 For fashion very specifically, there are new terms popping up all the time. You have to keep updating these models to know, oh, for this month the trend is mermaidcore, but for next month maybe it's balletcore.
Speaker 11 Yeah. AI tools are sort of pretty good at giving you around, like, an 85 to 90 percent solution. But there's a lot more fine-tuning or a lot more hacks that you need to put in place on top of them to ensure that you can deliver genuine value.
Speaker 12 You can use a bunch of simple operations to actually do something really complicated.
Speaker 1 You need to really give them structure about how it should look, and give them one particular task to do, and then they do it very well.
Speaker 12 If you're able to think through the process that you go through, then you can actually engineer a prompt, or engineer a sequence of steps, so that you can have that entire process be even more reliable than you would be.
Speaker 1 It's important to just be very iterative in your process and just debug and tune and iterate on your prompts as you go.
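The "sequence of steps" idea the speakers describe can be sketched as a chain of single-task stages rather than one monolithic prompt. The `call_llm` function here is a hypothetical stand-in, faked with deterministic string transforms so the structure is runnable; the prompts and behavior are illustrative assumptions, not any real API.

```python
# Sketch of a prompt pipeline: each stage gets one well-defined task,
# which is easier to debug, tune, and iterate on independently.

def call_llm(prompt: str, text: str) -> str:
    # Hypothetical model call, faked deterministically for illustration.
    if prompt == "extract the key fact":
        return text.split(".")[0]
    if prompt == "rewrite as a headline":
        return text.strip().title()
    return text

def pipeline(text: str, steps: list[str]) -> str:
    # Chain the single-task stages in order.
    for prompt in steps:
        text = call_llm(prompt, text)
    return text

steps = ["extract the key fact", "rewrite as a headline"]
print(pipeline("the model shipped today. other news follows.", steps))
# → "The Model Shipped Today"
```

Because each stage is small and inspectable, you can iterate on one prompt at a time, which is the debugging loop described above.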
Speaker 8 If you think you have a solution, it may not be the same solution over time. Your data can change, and the actual underlying model quality can change with it. And so the biggest difference is just that there's this sort of iteration required.
Speaker 13 I think the hardest part is you're trying to marry deterministic software with probabilistic models, and we sit right at the middle of that.
Speaker 0 It is kind of quite an exciting thing to work with, because in the past, with programming, the computer really just followed your instructions to a T, and you could expect the same results given the same inputs. Now you put in the same inputs, you might get some very different results.
Speaker 5 If we can actually introduce some randomness into our outputs, then we can explore our space a bit better, and our models will get better from learning from all of these other choices that we can make.
Speaker 0 It's not reliable in the way that you expect it to be reliable, which is great for us, because we're doing entertainment. As long as it's funny, it doesn't matter. But I guess if you're operating a car, that seems more complicated. So it is a double-edged sword sometimes. It can hallucinate and make up something that wasn't intended.
Speaker 1 I would define a hallucination as AI generating something that doesn't exist, but looks like it might or should exist.
Speaker 3 They're still really bad at distinguishing fact and fiction. So they're great storytellers, but they're surprisingly bad at knowing the difference between what's true and false.
Speaker 2 If you're a doctor, finding out why GPT decided on the diagnosis for this patient probably takes a ton of time to verify. And if there's any mistake, then you're in a lot of trouble.
Speaker 14 At what point do you trust the AI over the doctor? There's been a lot
Speaker 3 of effort in the industry recently to sort of prevent these hallucinations, but that's created this opposite problem, which is now it will often think things aren't real, or pretend that it doesn't know things that it really should know. It will tell you it's, like, never heard of that article, even though it's definitely been in the training set.
Speaker 3 Right? Like, it's gotta be there. A bit like
Speaker 4 a human, you know: when you read things, you take something away from that and internalize it. You can't necessarily remember exactly where you read it. So when you're using these models with real-world data, it's actually even harder to distinguish what's a hallucination versus something that was a nuance in a piece of data. You can't ask it for citations consistently.
Speaker 4 That's still a challenge. And so the trustworthiness there has some way to go. It's not enough just
Speaker 14 to say, like, hey, the accuracy metrics are better. You have to understand that there's more at play, especially for, you know, human trust, and that is a key component if you're gonna develop a technology that people are gonna use at the end of the day.
Speaker 0 There's still a lot of nuance where we have to actively steer them. And that's why you'll hear a lot about this "human in the loop."
Speaker 5 It's really important to still have a human in the loop.
Speaker 11 Having humans in the loop to initially assess whether the corrections that needed to be made were accurate or not.
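The human-in-the-loop pattern the speakers describe can be sketched as a simple router: outputs above a confidence threshold pass through automatically, and the rest are queued for a person to approve or correct. The confidence scores and the threshold value here are illustrative assumptions.

```python
# Minimal human-in-the-loop sketch: route each (output, confidence) pair
# either to automatic acceptance or to a human review queue.

def route(outputs: list[tuple[str, float]], threshold: float = 0.9):
    auto, review = [], []
    for text, confidence in outputs:
        # Low-confidence outputs wait for a human supervisor.
        (auto if confidence >= threshold else review).append(text)
    return auto, review

auto, review = route([("diagnosis A", 0.97), ("diagnosis B", 0.62)])
print(auto)    # high-confidence outputs pass through automatically
print(review)  # low-confidence outputs go to the review queue
```

The human corrections collected from the review queue can then feed back into fine-tuning, which is how the loop tightens over time.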
Speaker 5 There needs to be someone supervising for now, making sure that there are no hallucinations. So there are a lot
Speaker 0 of pros and cons, really, about figuring out the right way to steer it. That's the challenge that I think all the YC companies working on AI are facing. We're given this new tool to work with, and we're all really just trying to figure it out.
Speaker 13 I never wanna lose sight of the fact that ultimately, this is tech in service of humans, and humans get to keep the final say. That's what matters.