Dec. 12, 2023

Chris Smith: How to think about adding AI to your product

Make Things That Matter

Chris Smith is a longtime engineering leader who has been in the trenches of building with AI & machine learning for years. He's led the development of data systems & strategies at tech giants like early Google, Yahoo, and Sun; S&P 500 companies like Live Nation; and a wide variety of startups.

Topics discussed:

(00:00) AI industry at inflection point, causing chaos

(09:05) Machine learning, neural nets, and generative AI

(14:03) Generative AI: LLMs + broad understanding

(21:56) Open source models improve specialized problem solving

(25:06) Access to data leads to competitive advantage

(32:53) AI training improves productivity and learning speed

(42:51) GPT models reduce the investment needed to see results

(48:47) Expectation mismatch leads to brand perception risks

(53:54) Non-technical work is crucial for AI product success

(57:30) Building a computer vision product from scratch

(01:03:14) A strategic approach to refining and testing prototypes

(01:08:04) Closing learning loops

Links & resources mentioned

Find the full transcript at: https://podcast.makethingsthatmatter.com/chris-smith-how-to-add-ai-to-product/#transcript

Send episode feedback on Twitter @askotzko, or via email

Chris Smith:

LinkedIn

X / Twitter: @xcbsmith

Bluesky: @xcbsmith

People & orgs:

Dr. Marily Nika - AI Lead, Meta Reality Labs

Travis Corrigan - Head of Product, Smith.AI

Books:

Evidence Guided - Itamar Gilad

Other resources:

GPT = “generative pre-trained transformer”

Wizard of Oz experiment

Tom Chi - learning loop

Joel Spolsky: The Iceberg Secret, Revealed

ML Ops

Computer vision

Precision-Recall curves

Leaked Google memo: “There is no moat”

Universal basic income (UBI)

Stop-loss order



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit blog.makethingsthatmatter.com

Transcript

Andrew Skotzko [00:01:57]:

Chris, welcome to the show. It's so good to have you here. I have been looking forward to having you on the show since before I even started the damn show. It's been four years now, and for years I've been going, damn it, I need to get Chris on the show. So here we are. Thank you. How you doing, man?

Chris Smith [00:02:10]:

I'm doing great. Thank you so much, Andrew. It's been a while. It's been a while. So it's good to see you.

Andrew Skotzko [00:02:17]:

We were very fortunate to run into each other at that AI conference, whatever, a week or two ago.

Chris Smith [00:02:22]:

Yes.

Andrew Skotzko [00:02:22]:

And for the listeners who don't know, you'll figure this out very quickly: Chris and I used to work together. He was one of the people who was incredibly, incredibly patient with me when I was switching into engineering and somehow did not kill me, which he would have probably been well justified in doing. So thanks for not killing me, Chris, and for answering all of my questions with infinite patience.

Chris Smith [00:02:44]:

Oh, it was easy to do because you were brilliant. You picked up the skills faster than anyone I've ever seen in that situation before. It was really a sight to behold, and it was a thrill to be part of it.

Andrew Skotzko [00:02:55]:

Well, for the listener that doesn't know this backstory, this goes back to a startup Chris and I worked on together. Was it like, back in 2011 ish?

Chris Smith [00:03:04]:

Please tell me it wasn't that long ago. Yeah, no, it's something like that.

Andrew Skotzko [00:03:08]:

So the backstory here is, up until that part of my life, I'd been working in marketing, actually doing what we would now call very growth-oriented marketing, very quantitatively driven marketing. And then I realized at one point that I actually needed to build things. And so, given we were a startup and hiring engineers is hard, we decided, hey, what if Andrew tries to become an engineer, and threw me to the wolves. And by the wolves, we mean the very supportive senior engineers such as Chris. And I've been looking forward to having you on the show, Chris, for such a long time, not only because I enjoy talking to you, but also because, you know, I've worked with a lot of great engineers in my life. But of all the great engineers I've worked with, you've always stood out in my mind as one of the best teachers, and that is a skill.

Chris Smith [00:03:49]:

Thank you. That's really nice.

Andrew Skotzko [00:03:50]:

Thank you for doing it. Because I've worked with a lot of great engineers who can't really teach it. They can't explain it. And that's something that I think you're uniquely gifted at. And so given that, and given all the hype and the insanity going on with all things AI, I was like, you know what? I think it's the time. Let's do this. So anyways, that's a long wind up, but I'm really excited to have you here, man.

Chris Smith [00:04:17]:

Absolutely. I'm delighted to be here.

Andrew Skotzko [00:04:21]:

I think we can't not get into an AI conversation right now. Just to timestamp this, we're doing this on November 22, 2023. And so it's been a bit of an interesting four or five days in the AI world. It was last Friday that probably the most visible person on the planet related to AI, Sam Altman, was unceremoniously fired from his own company. There's a lot of speculation running around all over the place, lots of different theories. We'll leave that to the tech TMZ. But my question for you is, as a practitioner, as someone deeply embedded in this space and a builder, what do you make of all this and how does this actually shift things and the way you're thinking about it, as someone who builds this?

Chris Smith [00:05:05]:

Well, I do think that what we're seeing is we're hitting an inflection point in the development of the AI industry. And it's not just OpenAI that's being hit by this. We were talking before about how Cruise, where I used to work, has also seen a lot of tumult in the last week or so. And I think it's a reflection of essentially a maturing process that we're going through as the rubber starts hitting the road. And it's causing a lot of people to ask a lot of questions. It's causing a lot of assumptions to be checked, and there are a lot of unknowns still. So there's a lot of scariness, I guess I would say for lack of a better term, for a lot of people, where I imagine even Sam and the people at OpenAI on the board and the employees are all sort of not 100% sure where things are going from here, and that's creating a lot of chaos would be the best way to describe it. I can only imagine that for a lot of people who are outside of our industry, it looks even worse than it looks to me.

Chris Smith [00:06:21]:

But I just sort of think of this as a necessary step as we start figuring out how to really use this stuff.

Andrew Skotzko [00:06:30]:

Absolutely. So I think that's exactly where I'm excited to spend our time today in this conversation is there's so much happening in the world of AI, and there's so many great blog posts and podcasts and conferences. There's tons of stuff happening here. But I was trying to think about what would be the most useful thing we could provide listeners. And when I think about that and some of my own past experiences learning with you, you've always really helped me to learn how to think about things. Right. It's like how to make sense of things and how to have more flexible mental models that are useful and adaptive as we move forward, because things are going to keep changing. So that's kind of where I think we're going to go in this conversation, or at least that's my aspiration for it.

Andrew Skotzko [00:07:14]:

Okay, why don't we just start here. I want to start by laying a little bit of a foundation conceptually. I don't want to assume a lot of knowledge on behalf of the listener, because I don't know where the listener is coming into this. So let's start as broad as this: we've all heard AI, we've heard machine learning, and now we're hearing generative. Could you just untangle these for us? Just parse these out. What is the difference between AI versus machine learning versus generative versus fill-in-the-blank here? Can you just start to lay that context out?

Chris Smith [00:07:44]:

So I think a lot of this is semantics, and different people will attach different semantics to different words. I think what I'll do is describe sort of the history of how we got here in terms of this whole space. And I'll do the high notes and you tell me where you think we should dig in deeper. But fundamentally, all of the work that's being done in this space is all about essentially applied statistics. And there was a time when that's literally what this was called: just applied statistics, right. And then terms like AI came out, and then, after there was a bit of reflection and a realization that we were a bit distant from reaching something people would think of as intelligence, people backed away, and machine learning and data science came out as terms. And during that era, a lot of what the statistical models were being targeted on were what I would describe as essentially optimization problems, right? If you think about it, the ideal problems for them were situations where, even if you had a perfect person doing the job for you instead of a computer, they were never going to be able to always get it right.

Chris Smith [00:09:05]:

They were going to fail a certain amount of the time. And if there's a statistical certainty of failure, then, hey, that's an optimization problem. You could try to get the smallest statistical probability of failure, and that's something you could apply a lot of data and a lot of statistics to. And so that was kind of the machine learning and data science era, when those terms were very big; that was kind of the era we got to. And then this other track of this whole applied statistics space was the neural net world, right? Where instead of trying to just optimize a problem, it was trying to simulate how we think, at least the mental model we have for how someone's brain functions, right? How a brain behaves. And there was a lot of work that went in there, and that was where deep learning started to come up, when they got to the sort of next generation of neural nets, where they were like, wait, if we just really throw a ton of compute and a ton of data at this, this neural net thing actually starts to work really well, because the earlier versions of it weren't working very well. And that started opening up a lot of possibilities with things like object recognition in images and even video processing, and just sort of another level of sophistication and intelligence. And then the most exciting phase that we're in right now is this whole space of what they call generative AI, where the idea was, hey, we've got these nice little neural nets that sort of mimic how humans think.

Chris Smith [00:10:34]:

Can we build essentially a statistical model that can guess what the next word is that a human would use in a sentence? And then if you do that recursively, you can build out a whole sentence, you can build out a whole paragraph. In theory, you could build out an entire essay or a book even, although most of the efforts to do that to date have been humorous at best. But nonetheless, it created this situation where we were mimicking something that is generally considered the mark of intelligence, as distinct from all other intelligences that are out there: the ability to construct language, to construct sentences in something that sounds meaningful and correct to the listener. That's kind of where the exciting space of generative AI has been most recently. And as we mentioned at the conference, that's sort of in transition now to a next-level experience, where historically, the generative AI work has been in a single modality, which is text, right? Like, the input is text, the output is text. Now we're starting to get into multimodal situations where you have video or images or sound and text, and all of those pieces are being mixed together and then generating some kind of output that is, again, essentially built on a prediction of what a human would do in that situation. Right.
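To make the "guess the next word, then recurse" idea concrete, here is a toy sketch in Python. The probability table is invented purely for illustration; a real large language model learns billions of such statistics from text instead of a hand-written dictionary, but the generation loop has the same shape.

```python
import random

# Toy "language model": for each word, a made-up distribution over likely
# next words. A real LLM learns these statistics from enormous text corpora.
NEXT_WORD_PROBS = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "a":        {"cat": 0.4, "sentence": 0.6},
    "cat":      {"sat": 0.7, "<end>": 0.3},
    "dog":      {"sat": 0.5, "<end>": 0.5},
    "model":    {"predicts": 1.0},
    "predicts": {"words": 1.0},
    "sentence": {"<end>": 1.0},
    "sat":      {"<end>": 1.0},
    "words":    {"<end>": 1.0},
}

def sample_next(word: str) -> str:
    """Pick the next word according to the toy probability table."""
    choices = NEXT_WORD_PROBS.get(word, {"<end>": 1.0})
    words, weights = zip(*choices.items())
    return random.choices(words, weights=weights)[0]

def generate(max_words: int = 10) -> str:
    """Build a 'sentence' one predicted word at a time, recursively."""
    word, output = "<start>", []
    for _ in range(max_words):
        word = sample_next(word)
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat"
```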

Andrew Skotzko [00:12:00]:

No, I love that. Thank you. So I'm curious if you can offer a rule of thumb for, almost, helping our audience become better classifiers of the things that they look at. Because everybody listening to this is going to have a million ideas, either of their own or from their boards or from their customers or wherever. Where's the line where something goes from applied statistics to machine learning to AI, and then to generative AI? Are there any rules of thumb in there that are useful? Yeah.

Chris Smith [00:12:36]:

I think a lot of it depends on your audience. And there used to be an old joke, a slide I would put up in presentations on machine learning, where I would have a picture of a statistician on one slide, and then a data scientist on the next doing the same thing and literally calling it machine learning. And the question was, what's the difference between these two things? And the next slide would be just a dollar sign. Right? That was the difference. Right. And it is a little bit like that. But in reality, what we tend to think about is that data science is kind of the academic pursuit of evaluating how we use data, building algorithms around data, and exploiting it, particularly large volumes of data. Right.

Chris Smith [00:13:22]:

Analyzing large volumes of data, or building algorithms or solutions from large amounts of data. That's kind of the place that usually is tied to the data science moniker. Machine learning has traditionally been from the era before we were thinking that we were getting close to mimicking human intelligence. Right? That's usually what that phrase is tied to, but it's where it is doing things, usually like recommenders or classifiers, where it's doing tasks that a human could do. But by themselves, those tasks are very specialized. The model is built for a very specific purpose.

Chris Smith [00:14:03]:

And while you might think that there's an intelligent human doing that one specific thing, because it can't do anything else, you would never confuse it for anything resembling an intelligence. And now, when people talk about generative AI, or the other term people use, large language models, one of the things that's big with this is this whole foundation model space, where you can have a generalized data set that you're training your model on. That is, like, just literally everything: the Internet, like all of the Internet, Encyclopedia Britannica, all of those pieces, just sort of condensed into a model that is foundational in the sense that, yes, it may not be particularly good at any particular problem, but it has behaviors that indicate an understanding of the world in general, of topics in general. And then you can potentially work from those to fine-tune them for a particular problem that you're in. And so it's that whole space of really being able to have conversations on just about anything, or at least seemingly just about anything, and getting an experience that seems intelligent.

Andrew Skotzko [00:15:15]:

Perfect. Yeah, and I appreciate that, and you laying that out. So, just to play that back, it sounds like some ways we might encapsulate that is that we've had data science and statistics for a while, and machine learning is sort of a more narrow application of these technologies to a specific use case. Like, look at this image and classify what's in it. And then where we start to get into something that we might call AI, where it's starting to resemble human intelligence and be much more adaptable and flexible across situations, rather than being so brittle and narrow. And then a step further is not just AI, but generative AI, where we're giving it a prompt, and then it's going and doing something and creating an output that resembles what a human might create. Is that a fair way to think about it?

Chris Smith [00:15:57]:

Yes. And the addendum I should put on that, and I should have put on that at the beginning, is there is this notion of AGI, artificial general intelligence, which is something that we haven't created yet. People talk about it, and historically it's sort of a thing of science fiction, which is a true general intelligence. Like, when I use the word general right now, what I mean is you can hit it with questions about random topics, and it gives answers that at least seem somewhat intelligent. They may not be correct, but they seem intelligent. AGI is literally getting to the point where something can really think for itself, and we're definitely not there. Cool.

Andrew Skotzko [00:16:37]:

Thank you for clarifying that. Before we start to shift into application mode, which is, I think, where we're going to spend most of this conversation (like, okay, we've got it, we have a mental model, what do we do with it?), I would love it if you could just break down in layman's terms what a GPT is. So that acronym stands for, I believe, generative pre-trained transformer. So what does that actually mean?

Chris Smith [00:16:58]:

Yes. So there's a couple of different ways of explaining that. Probably one of the bigger pieces would be breaking down each of those words. Right? So generative. That part is pretty straightforward. It's something that generates its output from seemingly nowhere, right? And then there's the transformer bit. The transformer is really the sort of breakthrough that opened up this space, which was this notion that you could have a transformer, which is a piece of code that would take almost any input and turn it into a sequence of bytes, right? And that sounds like, wait, isn't that what every computer already does? And yes, that's true, but it provided a general mathematical model for working with that. That opened up the ability to apply both training data and outputs.

Chris Smith [00:17:53]:

You could literally train a transformer around any particular data set. You can think of it as essentially serving a similar function as, say, the optic nerve that's taking a visual image and stimulus to our eye and then transforming it into an image in our mind, right? That's what that big sort of add-on and adjunct innovation is. And then the pre-trained part, that's pretty straightforward, right? The idea here is you trained it already. Training is the process in machine learning where, initially, all you're starting out with is some math, right, at best. And you feed it a whole bunch of data, and the idea is it learns from that data; and "learn" is definitely a case where we tend to anthropomorphize what the process is. But essentially, the statistical model gets built based on the data that you feed it, right? And so the idea here is you build a pre-trained model that has been pre-trained on such a massive data set that it saves you time on whatever problem you're working on, almost no matter what it is. The older model, back in the early days, was that for whatever problem you were working on, all you would start with is an algorithm, essentially, and then you would, from scratch, feed it data. And literally, when the first byte of data arrives, it knows nothing, right? All the weights are effectively zero.

Chris Smith [00:19:16]:

Nothing it does has any intelligence in it. You feed it a huge amount of data and then it knows. In the pre-trained case, you don't have to start there. You can start talking to ChatGPT, for example. And sure, it doesn't really know anything about your problem, but it sure seems to be able to speak to almost any topic, at least as much as a child might be able to.
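A minimal sketch of what "pre-trained" means in practice: load weights someone else already trained and generate text immediately, with no training of your own. This assumes the Hugging Face transformers library; "gpt2" is just a small, freely downloadable stand-in for the much larger models discussed here, not the specific models Chris mentions.

```python
# Load a model that has already been trained on a large corpus and use it
# out of the box. No data collection, no training loop, no GPU cluster.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A data moat is valuable because"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```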

Andrew Skotzko [00:19:39]:

Okay, so now that we have a bit of a mental model for what all this stuff is, let's start to shift gears here and put this mental model to work and think about how we can use this. I want to come into this from a little bit more of a top down approach in terms of thinking about this strategically as an executive might. I know you have executives you collaborate with, and you talk to many executives. I'm pretty sure most executives are at this point feeling plenty of pressure and tired of hearing about AI, aren't sure what to do with this. But before we get into how should they respond to that pressure, I actually want to ask a different question, which is strategic advantage, right? Everyone knows about AI now. We all know it's a big deal. No one has to be convinced of that anymore. But if everyone's moving this way, where can an actual strategic advantage come from?

Chris Smith [00:20:29]:

And that's a very good point, particularly because of the sort of generalized model theory. In theory, the stuff that OpenAI and Google and Facebook are building is good at everything. It's been trained on everything. And in reality, that's not true, right. But if it were true, then, well, that means all of my competitors have access to the same intelligence that I have access to. How do I differentiate? And I think that is the salient question. That actually is the thing that everyone's struggling with right now: like, wait, what can I do with this that someone else can't do with this, right?

Chris Smith [00:21:10]:

First of all, I think that as a data scientist, the way I always tend to think about this is: is there a data moat out there somewhere, i.e., a collection of data that I own, that is mine, that nobody else has access to, that is potentially informative and useful in a way that no one else's data set is. Right. Proprietary data advantage is the way to think about it. And you can use that data to essentially fine-tune one of these models or to build your own large language model specifically for that data set. There's a leaked memo from Google that was called "There is no moat."

Andrew Skotzko [00:21:56]:

Right.

Chris Smith [00:21:56]:

That was the title for it. And part of what came from that was, after Facebook's models, the LLaMA models, were leaked and there started to be this open-source community working on LLMs, there was this realization that actually, for a lot of problems, if you just take one of these open-source models and train it on a data set that's specialized for the particular problem that you want to work on, they could perform as well as, if not better than, some of these generalized models for that specific problem. Now, what's really hard to build is a better OpenAI than what OpenAI is building. There's a handful of companies that are doing that, that are working in that space, and they very well may succeed. But it's not like every business has an opportunity to build something that is of that scale and capability. But certainly if you've got your own data set, as long as it's large enough and meaningful enough and distinctive enough, you can probably build a significant advantage in terms of what your model can do. And then of course, that's just talking about it from the engineering standpoint of what it's capable of. Right? There's a whole other separate problem of figuring out how to apply that in the business domain, which, as you can tell from all of the mad dashing going on right now in the business world, nobody really knows the right way to apply this in any particular business domain or for any particular problem.
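A rough sketch of the "fine-tune an open model on your own data" idea Chris describes, assuming the Hugging Face transformers and datasets libraries. The base model name and the data file path are placeholders, and a real run needs a GPU plus far more care (train/test splits, evaluation, hyperparameters) than shown here.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # placeholder for whichever open-source model you start from
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Your proprietary "data moat": one text example per line (hypothetical file).
data = load_dataset("text", data_files={"train": "my_private_corpus.txt"})
tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting weights now reflect your specialized data
```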

Chris Smith [00:23:29]:

And that's, I think, the biggest area where everybody's got an opportunity to differentiate themselves: if you can understand the possibilities and understand your problem domain well enough that you can see an opportunity that everybody else doesn't see, that can be a huge differentiator. That makes a lot of sense.

Andrew Skotzko [00:23:50]:

And it resonates a lot with some of the viewpoints that I've been hearing in the previous few months and also in consuming a bunch of things getting ready for this conversation. One way I heard it put, well, I think her name is Marily Nika. She was on stage at the same conference that we ran into each other at. And one way she put it was, in the past, product was about building the right thing, and now it's really about solving the right problem.

Andrew Skotzko [00:24:17]:

Not, not going down shiny object syndrome, actually applying the technology to the place where it can actually make a real difference. Which, as a quick aside, I think that was always the point of product. It was not to build silly things. But I guess my question though is, let's come back to the business application because that's going to also be very domain specific. But I want to go back to something you said a minute ago.

Chris Smith [00:24:39]:

Right?

Andrew Skotzko [00:24:39]:

You said this idea of: if I'm in my domain, if I have enough essentially proprietary data that I can do something unique with, do I have enough of it, and can that make enough of a difference? Can you start to actually make that a little more concrete? Imagine that I'm a CFO who doesn't speak engineering. What is enough, right? How much data are we talking about, and where is it going to make a difference? What do we need, and what could it actually do?

Chris Smith [00:25:06]:

I think in this aspect of it, you can, to a certain degree, use your imagination. Just think of it as: if I had two humans, one who had access to this data and one who didn't, would the human who has access to this data have an advantage? Right? And I mean that because, particularly thanks to these GPT models that are already pre-trained on a huge amount of data from the world, it doesn't take very much additional data to have something that is both useful and can harness that additional knowledge and do something useful with it. So, for example, I used to work at Ticketmaster, right? They have a huge amount of data. But compared to my ad tech days, back when I was working in ad tech, it's tiny by comparison. It's almost nothing. But what they have is information about what tickets people have bought in the past, right? And for what shows and when and where. And obviously not as many tickets have been issued as people have seen ads, for sure, right? But each of those transactions carries with it a huge amount of information, because that is purchase intent, right? That is a not insignificant amount of money that's spent, and it is money that is not necessary to spend. People are only doing it because they really want to.

Chris Smith [00:26:41]:

So it really provides insight about what they want and what they're hoping to get from something. And it's also very transient in time, right? The people who want to see band X in 2020 are not the same as the people who wanted to see it in 2010. And that's a big challenge for people in that industry. But nonetheless, it increases the value of that data, because it does show how things change over time, and you can do a lot of analysis from that. So although there were far fewer transactions, each individual transaction has so much more signal in it, is the way that I would describe it. That's a huge advantage.

Chris Smith [00:27:18]:

No one else has access to that much data about ticketing transactions, right? That's something you can build a huge advantage on. I've seen models these days, thanks to the GPTs of the world, being trained with much smaller data sets, even sometimes as few as, like, 1,000 examples, right? If those thousand examples have enough uniqueness, enough signal, enough value, there's an opportunity there, right? But again, think about it as, well, if I had a person who was an expert on those thousand things, would they be better than someone else at doing something, right? And if you can't convince yourself that they would be any better at it, then that's probably not a real data moat. That's not really giving you an advantage. Right. For example, if I was to feed my diary into a machine learning model, okay, that would maybe make it particularly good at predicting how I might be thinking about things or things that I might experience in a day. Right. Other than that, it's not going to be very advantageous for any particular problem, including problems that I deal with every day. It's still not going to be advantageous because everybody else deals with, or at least a large number of people deal with, that same problem.

Chris Smith [00:28:38]:

And so there are other data sets out there that would be at least as informative. It's not just about quantity. It's also about the quality of the data, in particular to what extent it is informative in a way that is proprietary. Yeah.

Andrew Skotzko [00:28:57]:

No, that makes a lot of sense. One way I heard this put recently, by a guy we both know, Travis Corrigan, the head of product at Smith.AI, that I thought was really concise was: look, not every product needs to have AI in it. Not every product should be a quote-unquote AI product, but probably every company can take advantage of AI in their back office, in their operations, to be more efficient and so on and so forth. So I guess my first question to you would be: imagine I'm an executive or a leader in a company. We're all feeling this pressure from the market, from the world, from our board of, like, you've got to do AI. Don't show me anything that doesn't have AI in it. How should I respond to that? How should I think through that? And this imperative that I'm feeling to quote-unquote do AI or AI-ify my product, how do we respond to this?

Chris Smith [00:29:51]:

So some of this is just textbook what you do with any technology kind of stuff, right? Like the biggest one I always think of is, what are the big challenges for your business right now? And let's make a list of them and think about which ones do we need to tackle now, which ones do we need to tackle later. Right. And similarly, opportunities. Right. Where are the biggest opportunities that lie there? That gives you an application to focus on. Right. And then the second piece of it is, okay, how can a degree of automation, a degree of scale, a degree of data, essentially data, provide me with an ability to maybe solve this problem in a way that I previously never would have considered doing it. Right.

Chris Smith [00:30:49]:

I would start there. And then the other thing that I would think about is start tactically and then move to a strategic strategy. Right. So the way that one of my colleagues describes it is start with what you. Right. So pick a particular, fairly specific problem, right. And probably an internal one that is not even customer facing. Right.

Andrew Skotzko [00:31:16]:

I'm curious as you're talking about this, could you frame this in an example you've seen?

Chris Smith [00:31:20]:

Sure. A common one is if you're in a software business, right. You have this challenge of getting your junior developers trained up to a higher level of competence, right. And you have this other challenge that they need a great deal of supervision because they're still learning, right. So this is you when you started off in this space, right. So the thing that helped you the most, right, was time that you got with me and other senior folks, helping you learn the ropes and figure out how to do the problem. And the thing that prevented you from being limited by your lack of experience was working with them together.

Chris Smith [00:32:02]:

Right. Working with us. So we would sit down with you, and whenever we were working together, it didn't matter that you didn't have experience, because you could leverage ours. And your fresh eyes on things were actually really helpful. Right? So there's this opportunity there. But here's the problem. You have a limited number of very senior engineers, right? You can't actually give Andrew a buddy 24/7; darn it, it's just not realistic, right? And also, even if you did, then that senior engineer has no chance to provide additional value, right, because you're using up all of their time anyway, right? So now imagine if you had trained a model on all of the software that you already have, all of the work that has been done by all of your highly skilled folks, or even your less skilled folks but gone through a review process where the collective wisdom of the team has been brought to bear on all of that code.

Chris Smith [00:32:53]:

You train a model on that and pretty soon you've got a little buddy that's maybe not as good as a senior human, but does have all of the data that informed that senior human at its disposal. That buddy is available 24/7. You could wake up at 2:00 in the morning and go, wait, I've got an idea. And you're like, I need to talk to somebody, though, to see if it's at all feasible or if I'm crazy. Well, now you've got this bot that's available to you to help you get there. And we've actually seen in studies, I think Facebook in particular wrote a paper about this, talking about how when they applied artificial intelligence and GPT models specifically to software development, it helped their most junior engineers not only to be productive immediately, but to learn faster and to develop their skills much quicker. And so I think that is a simple application. It's very specific, it's internal. So the great part about that is if it's a bad experience, you don't lose any customers. Right.
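One common lightweight way to approximate the "buddy trained on your own codebase" without training anything is retrieval: index the internal code, pull the snippets most relevant to a question, and hand them to a general model as context. A minimal sketch follows; scikit-learn's TF-IDF stands in for a proper embedding model, and the ask_llm function is a hypothetical placeholder for whatever model you actually host, not a real API.

```python
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def load_snippets(repo_root: str) -> list[str]:
    """Collect source files from the internal repo (placeholder path)."""
    return [p.read_text(errors="ignore") for p in Path(repo_root).rglob("*.py")]

def top_matches(question: str, snippets: list[str], k: int = 3) -> list[str]:
    """Rank snippets by similarity to the question and return the top k."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(snippets + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [snippets[i] for i in scores.argsort()[::-1][:k]]

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this to whatever hosted or local model you use."""
    raise NotImplementedError

def ask_buddy(question: str, repo_root: str) -> str:
    """The 2 a.m. sanity check: answer using only retrieved internal code."""
    context = "\n\n".join(top_matches(question, load_snippets(repo_root)))
    prompt = f"Using only this internal code:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)

# Example: ask_buddy("Is our billing retry logic idempotent?", "./src")
```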

Chris Smith [00:33:57]:

And you can count on your employees to give you feedback, and also you're paying them. So anything bad about it, they can stomach that because you're giving them a nice check at the end of the week.

Andrew Skotzko [00:34:06]:

Right.

Chris Smith [00:34:07]:

And so it's a very good use case that's very tactical, very specific. Right. And very limited exposure to customers. So that's the easy kind of starting place. Right. And that helps to give you a model for not so much the possibilities of what you can do with machine learning, but more how you're going to use machine learning going forward. You start to learn, like, oh, wait, I have to stick a model here, I have to deploy it, I have to make it available to people. I have to think about what data I expose it to, how I secure that data, make sure it doesn't fall into the wrong hands. All of those kinds of problems, the very practical, very tactical problems, they all get solved when you work on that first problem, and because it's so small and contained, it's very tractable.

Chris Smith [00:34:52]:

Right. Like, it's a relatively low-risk way to do it. Yeah, exactly. And then step two is you start looking at where there are opportunities. That's essentially an efficiency improvement type of situation. Right. So the other space that you go into, after you've looked at those possible efficiency improvement strategies, is: where are there whole new opportunities of functionality that we don't currently tackle because we don't have the means at our disposal to do it. Right. And then you have to start thinking about what extra means come from having this artificial intelligence, this model, available to me.

Chris Smith [00:35:31]:

What are the new capabilities that that unlocks that I didn't have? Yeah.

Andrew Skotzko [00:35:34]:

Which is the much more creative, but also ultimately hugely valuable, challenge. I think that my read on it today, and I'm curious if this tracks with what you see, is that it's that latter challenge that not everyone may find. Right. Not every company will actually have something that is opened up and enabled by these technologies, but everyone probably is going to get operational efficiencies. The words that were running through my mind as you were describing the efficiency play and the sort of internal play that everyone can take advantage of are: automate, augment, and accelerate.

Chris Smith [00:36:09]:

Right.

Andrew Skotzko [00:36:09]:

Like, okay, I have all this data. I can use it for these three types of functions. This also speaks to the fear of, like, oh, are we going to replace all our people? Well, some stuff is going to be really low value and they shouldn't be doing it anyway. So we can just automate that and have them do much more interesting, valuable things.

Chris Smith [00:36:23]:

Yeah. The phrase that another colleague of mine uses that I love is: you're not going to lose your job to artificial intelligence, but you might lose your job description. Right. Your job description is going to change. And think of it, really, this is not unlike a lot of other technologies that came before. Right. They didn't really get rid of people's jobs. What they did do is make certain parts of those jobs unnecessary, and that then opened up the possibility for people to tackle potentially something far more valuable.

Chris Smith [00:36:57]:

Yeah.

Andrew Skotzko [00:36:57]:

I mean, this is sort of the individualized version of creative destruction.

Chris Smith [00:37:01]:

Exactly. And it happens on a small scale and it's happening on a grand scale at the same time. Right. That's the space that I think when people worry about that particular aspect of it, I'm kind of like, okay, I think the thing that we need to prepare people for is not everybody being out of work, but maybe the fact that everybody's job is going to be different. So we need to make sure that we've got a workforce that can adapt 100%. Right. And as long as we have a workforce that can adapt, you're going to be fine. Everybody's going to have something to do.

Chris Smith [00:37:29]:

They're going to have too many things to do is probably the likely outcome, because every time we improve the efficiency of what a person can contribute, which is what tools essentially do, right, any new tool helps you with that, that increases the demand for that capacity, because, wow, there's so much more value you can unlock. Right. That's the reality that we're going to face.

Andrew Skotzko [00:37:51]:

That makes complete sense to me and tracks with my own thinking. I was on a hike recently, and we got into a whole conversation about these sort of bigger questions, and one person was asserting, oh, it's going to replace everybody, everyone's going to be out of a job. The other person was taking a much more utopian view and saying, oh, no, suddenly we're going to have universal basic income, and then everybody can have this utopian existence where they get to pursue the things that they truly enjoy, and we'll all just do art or whatever. And I was like, well, yeah, maybe. But I think the real question is, why would this time be any different? Because we have a long history of technology making us more efficient.

Andrew Skotzko [00:38:27]:

It's increasing our leverage. And I can't think of the article right now, but we were all supposed to be working 4 hours a week by now or whatever it was. I'm not talking about Tim Ferriss. I'm talking about a much older thing. And yet what we do is we just cram in more. So I don't see why this time is going to be any different.

Chris Smith [00:38:45]:

Yeah, it seems to be just the nature of things, you know? And essentially what it is, is you move from a model of trying to keep a roof over your head, to trying to afford all the benefits that you want to provide your family, to then affording luxuries that your family doesn't even really need, but it's just fun. We'll keep moving the needle on what your job might make accessible to you, but it's not really going to replace people. But I understand the fear. I don't think the fear is entirely without substance. And I think the reason why this is perceived as different is that this is the first time we're really seeing machines behave like humans in a way that is close enough that we're hitting that uncanny valley, right? That spot where it's spooky. It's a little spooky. Like it looks like it's kind of doing what I do, and it's doing the same things I do.

Chris Smith [00:39:39]:

And does that mean it can do anything that I can do? Right. And that whole fear, I think it's coming from a very real place. And I wouldn't want to take that away from it. I wouldn't want to pooh-pooh people's fears on that. That's a legitimate concern. And I think that is the part that's different. The reality, though, is that it comes from interpreting what you're observing when you see an AI in action using a mental model of intelligence that's based on human intelligence. Totally right.

Chris Smith [00:40:12]:

And these engines are not, no matter what, smarter or dumber than us, more capable or less capable. The foundation of how they work is very different from human intelligence. Even though, at the bottom line, we said we're trying to simulate neurons, even the people like Hinton, who started that whole quest to try to mimic how the human mind works, would be the first to tell you these things don't work the way the human mind works. Right. And so what happens is you see it do one thing that it's doing pretty darn well, almost exactly as well as you would do, and you think, oh, my God, it can do all these other things at somewhere near the same level. And it's like, yeah, that's assuming it's a human. It's not a human, which means there are some things it's always been better than you at, long before we got to this point.

Chris Smith [00:41:04]:

And there are other things it's always been worse than you at, and it's going to continue to be worse than you at. That's the nature of the beast. So while the fear is real, I think that it's coming from the wrong place. Now, maybe someday, somewhere way in the future, we'll get to the point where it's so capable and so intelligent that even the things it's bad at, it's still going to seem better than us. But we are a long way away from that right now, as anyone who's looking at applied artificial intelligence right now can tell you. There's a significant gap between what a machine does and what a human does.

Chris Smith [00:41:39]:

Absolutely.

Andrew Skotzko [00:41:40]:

So I want to pivot here for a second. I want to focus for a few minutes on some very tactical questions. So we've been talking about higher-level questions around mental models and how to think about this sort of stuff. I want to drop down into the weeds a little bit here, because a lot of folks listening to this are also like, all right, I'm also trying to build stuff with this. And so I guess I had a handful of questions I want to dig into, one of which is: applied artificial intelligence is all coming out of a pretty heavy R&D space, certainly compared to your run-of-the-mill typical SaaS software production cycles. So I guess my first question is, how do you think we should be approaching the prototyping and derisking of things like this? If I was sitting with a PM on a typical product, and we had some product idea or feature idea, we'd be thinking about what are the risks and how do we derisk those things. And anything in the camp we're talking about has presumably a lot of feasibility risk. How should we go about addressing that feasibility risk rapidly, quickly, in a nimble way?

Chris Smith [00:42:51]:

So the good thing about this is that part of what's emerged from the GPT space is that the amount of investment you need to put up in order to start seeing results is reduced significantly, because essentially someone else has already done that investment for you up front. Right. So there is already a GPT model that you can go and talk to, ChatGPT, right now, and say, hey, if I wanted this, what would you say in response? And you can find out what it would say in response. That's a lot faster than the old model, where you had to start from scratch, rent a whole bunch of computers, feed a whole bunch of data into it, and then you got to ask that one question at the end of it. Now, you can ask it right away, and you can start to get a sense for how it's going to behave and whether it's going to work. So I think having that sort of concept of: actually, we can get at least a sense of what this space looks like right now, and then we can explore what are the possibilities that stem from it and what is the good and the bad of it. We don't have to invest a huge amount in machine learning to see whether the business application might make sense. You just sort of say, pretend that it was this, but it was smarter about our particular application.
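Here is a minimal sketch of "just ask the existing model first" as a feasibility probe, using the OpenAI Python client (v1.x style). The model name is only an example and will change over time, the probe prompt is invented for illustration, and OPENAI_API_KEY must be set in the environment. Nothing is trained here; the point is cheap exploration before any real investment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up probe: pretend the general model is our product and see how close
# its answer gets to what a smarter, domain-tuned version would need to say.
probe = (
    "You are a support assistant for a ticketing company. "
    "A customer asks: 'Can I transfer my tickets to a friend?' "
    "Answer as you would inside our product."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever is current
    messages=[{"role": "user", "content": probe}],
)
print(response.choices[0].message.content)
```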

Chris Smith [00:44:05]:

Right. That's sort of the way you can start. I remember this wonderful story about what happened when we first started working on voice recognition. IBM invested a huge amount of money in voice recognition, and at first they didn't think to ask themselves, wait, how can we explore the product space before we actually make this huge investment? And the reality is, a really cheap way to explore it is you put someone at a computer and you put a microphone in front of them, right? And then you have a remote keyboard and a remote mouse, and you have someone, a human at the other end, who's listening to everything that person says and reacting as a human would. And generally they're going to do better than any voice recognition model you ever build. But that sort of sets the high bar: okay, is this useful? Will people use it? If they won't, then you can save yourself billions of dollars in research and investment, right. Very quickly you could go, oh, yeah, nobody wants this. Beautiful, right? And pivot and learn what is the thing that people actually do want. So while we don't have an easy way to scale up to millions of people at the other end of that microphone, right.

Chris Smith [00:45:20]:

We can do experiments in the small, like, okay, here's what happens when there's a human doing this for you. Does that work? Right? And then we can similarly do another experiment: okay, when we don't have a human doing it for you, how big is the gap in terms of quality of experience and realized value? Right? If it's close, then that's where the opportunities lie, right? Like, wow, someone said this was really useful for them, and when we applied the machine learning model, it was not that far off from being useful already with just a base GPT model. Now we just need to figure out how to refine this product to do this in a more principled fashion that really gets all the value out of it.

Andrew Skotzko [00:46:03]:

Yeah, no, I love that. That really is a great way to think about testing the value risk. Right. And sort of doing that graded Wizard of Oz test is how I think of that one. But this is a bit of a left turn, I think. Are you familiar with, I think it's Joel Spolsky's essay, I think he calls it, like, the iceberg problem?

Andrew Skotzko [00:46:22]:

Ring a bell for you?

Chris Smith [00:46:23]:

So I'm not super familiar with iceberg. I've read Joel's stuff, but I'm blanking on it now. Remind me what that problem is.

Andrew Skotzko [00:46:29]:

It's an essay, and we'll link to all this stuff in the show notes. It was an essay from 2002, I just looked it up, called "The Iceberg Secret, Revealed." And the gist of it was, it's this thing where if you show, let's say, an executive who doesn't have a background in building software some mockups that look really nice, some high-fidelity mockups, or maybe some sort of clickable prototype built off said mockups, like if you did that in Figma today, those people will often basically assume that, oh, it's like 90% done.

Chris Smith [00:46:59]:

Oh, yeah, that problem. Okay. Yes, now I remember it.

Andrew Skotzko [00:47:03]:

Yeah, that problem. I know. That is the iceberg problem. I've heard it called other things. Whatever you want to call it. It feels like we have a whole new version of that.

Chris Smith [00:47:11]:

Yes.

Andrew Skotzko [00:47:12]:

Here in AI. So, for example, we were talking about this at that conference, right? Everybody has all this pressure to show their board how we're going to take advantage of AI, so their board's like, show me, show me, show me. And of course, it's reasonable for them to ask that. It's reasonable to go do some prototyping, go explore, test out, taste the possibilities. But then there's this massive gap between the expectation and the reality. What do we do about that? Because there's, like, so much pressure to, as Rich Mironov recently called it, AI-wash everything, to kind of just, like, slap some AI on it. So what do we do about that? For real?

Chris Smith [00:47:50]:

Yeah. No, and you're absolutely right that AI is a particularly dangerous field for this. And again, I would point back to our experiences with voice recognition on this, right. As soon as we started having voice recognition, people were like, well, if it can understand what I'm saying, then it must be able to do all of these other things. Again, the mental model of human intelligence instead of machine intelligence: if it can understand the words I'm saying, surely it can do x, right? Or it can understand y, right? And that led to a backlash of people going, wait, this is a terrible tool. And you've seen this with the GPT models as well. There's this crazy thing: just before OpenAI released ChatGPT, I believe it was Facebook that came out with another chat client you could talk to, which was built on a GPT as well and was, in some regards, perhaps even better than what came out from OpenAI.

Chris Smith [00:48:47]:

But there was a huge blowback, because there was initial excitement about, hey, this can do something for me. And by the way, this one was trained only on scholarly articles, if I remember correctly, and its intended use was for helping scholars to write their papers and do their studies. And, yeah, people were very upset and just kept noticing all the ways that it wasn't measuring up to that initial expectation that they had. So I think part of what you need to communicate is that high risk: that we have to figure out a way to provide an experience that is contained, and within that contained space, we perform up to people's expectations. Right? And that bar is actually very high and not an easy thing to get to, even with all of these lovely models that are available right now. So that's the first message you have to send back: wait, I've got to create a very contained experience, and I'm going to need to do some work to make sure that we don't have any failures along the way that are going to undermine people's brand perception, undermine people's experience, and make them just decide they don't want to use this tool.

Chris Smith [00:49:56]:

You also need to figure out a go-to-market strategy that's going to set those expectations to places where they are actually going to be realized, right? Because people are going to jump to conclusions about what this tool can do. Whatever you come up with, it's going to be like, whoa, hey, it's great. It's AI, it can do anything, right? And you need to have a messaging strategy, a communication strategy, where you're going to the market and you're saying, no, this is what it can do. If you go outside these bounds, you should expect disaster. Right. But in this one space that we've really designed for your needs, it will be exceptionally useful. Right. And I think that's the hard part.
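A toy illustration of the "contained experience" Chris describes: refuse anything outside the narrow problem space the product is actually designed and tested for, rather than letting a general model answer everything. Real guardrails are far more involved (classifiers, policy layers, human review); this only shows the shape, and the topic list and ask_llm call are hypothetical placeholders.

```python
# Only answer questions inside the space we have designed and tested for.
SUPPORTED_TOPICS = {"ticket transfer", "refund", "event date", "seat upgrade"}

def in_scope(question: str) -> bool:
    """Crude scope check: does the question mention a supported topic?"""
    q = question.lower()
    return any(topic in q for topic in SUPPORTED_TOPICS)

def ask_llm(question: str) -> str:
    """Placeholder: route in-scope questions to your actual model."""
    raise NotImplementedError

def answer(question: str) -> str:
    if not in_scope(question):
        # Fail safely and set expectations instead of guessing off the rails.
        return ("Sorry, I can only help with ticket transfers, refunds, "
                "event dates, and seat upgrades.")
    return ask_llm(question)
```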

Andrew Skotzko [00:50:37]:

That makes a lot of sense, because there are such inflated expectations, particularly, I mean, everywhere, right. Boards don't know what to think. They just expect something huge. And so there's enormous pressure on leadership teams, and all the way through an organization, to deliver. And so I think there's something there about being measured, and how do we approach that? That makes complete sense to me. And then also, I'm really glad you called out the customer-facing side of this, because here's what I think is probably a very common case right now, and maybe you're seeing this too. Boards are putting pressure on leadership teams, saying, what can you do with AI? Show me. Then teams go, all right, cool.

Andrew Skotzko [00:51:15]:

Hey, we're going to go poke at that. We're going to come back to you. We'll show you some stuff. They do; they go build some prototype, they show it to the board. The board goes, oh, my lord, that is amazing. And then their minds explode. Possibilities go crazy. And then they push, push, push, push, push.

Andrew Skotzko [00:51:27]:

They say, you've got to get this out right now, put this in the market. Or you might have CEOs doing that. And then what happens is that a prototype which is not built for real gets pushed into production and slapped with production-grade everything and messaging. And then we have set ourselves up for a customer-facing disaster. That's what I'm anticipating is going to happen a lot. I'm curious if that tracks with what you see.

Chris Smith [00:51:50]:

It's already happening. Right. We were talking in the beginning about Cruise. Right. There was a set of expectations about what those cars could accomplish. Right. And those expectations weren't necessarily aligned with reality. And now we're seeing blowback from what that gap was.

Chris Smith [00:52:08]:

Right. And you don't want to be in that situation. And this is an organization that knew exactly about the problem and was trying to manage those expectations, and yet still failed. Right. So that highlights that the risk of failure is nontrivial. It's hard to overcome even when you know exactly what you're doing. Right. And so I think that is the message that really does need to be communicated to boards.

Chris Smith [00:52:35]:

The iceberg problem. So actually, ironically, you're describing a scenario that literally just happened at my job. Right. We presented a prototype that was made visible to the board, and it was like, this is what we can potentially do. And they were elated. They loved it. They were super excited. They were like, how do we get this out the door? And I still remember the conversation I had with our CEO about it.

Chris Smith [00:53:00]:

And he's like, so how close is this to production ready? And I said, well, I cut every corner I could possibly cut and still have this demo work. So we are a long, long way from being production ready.

Andrew Skotzko [00:53:17]:

To which he responded, so how long?

Chris Smith [00:53:19]:

Yeah, exactly. I won't share that conversation, because that's the price of knowledge. But I will say this: the part that I emphasized was, look, you want us to explore this space. You want to find out what the useful applications are. So it's actually good that I haven't invested in making this production-ready yet, because we want to know what is the right thing to focus on. Right. And then also to emphasize, like you said, there's a giant iceberg underneath. There's a huge amount of work to making any of these things really a quality product experience.

Chris Smith [00:53:54]:

And frankly, a lot of it's not technical in nature. There's a huge amount of nontechnical work to bringing one of these products out the door, because of what we've been talking about here: managing expectations, finding a problem space where there's actually value being delivered, and also an ability to put guardrails on it to ensure that the experience is never so horribly negative that nobody wants to use the product or your brand again. Right. So there's a lot of work that has to be done there, and you should be glad that you're not doing that work until you've identified a real problem with a real opportunity and figured out, in fact, what is the biggest opportunity in front of you. Like, this is the one that I want to chase after. This is the one I want to put on. This is where it's worth it. Okay? This is where it's worth it.

Chris Smith [00:54:39]:

Now we'll get to that work. And then the other way I phrase it is: the fact that I did a demo for you means I've done 0% of the work of making something production-ready. Right.

Andrew Skotzko [00:54:49]:

That's the iceberg problem right there.

Chris Smith [00:54:51]:

All of that work has not been done. All we've done right now is the little bit of work that essentially a designer does of figuring out what this should look like. Right. And we've given you an idea, and really, the way that we phrased it to the board was essentially: this is more to give you something tangible to understand what the opportunities are than it is actually a shippable product. Right. Because we need your feedback, we need your insight to figure out what we should do with this. And I could sit you down and have you absorb, like, ten years of studying statistics to get that knowledge there.

Chris Smith [00:55:29]:

Or I could just show you something tangible that makes it real for you, and then you can get a sense of what's possible. Yeah.

Andrew Skotzko [00:55:36]:

You can sketch out an experience and help them understand what might be possible to open their minds up.

Chris Smith [00:55:41]:

Exactly. And part of the fun of this, which I think is an important part of having that conversation when you're doing the demo, is to show a little bit of the warts. Right. Show...

Andrew Skotzko [00:55:51]:

Don't make it too pretty.

Chris Smith [00:55:52]:

Yeah. We'll show where it goes wrong. Do examples of cases where it completely goes off the rails and might not meet the expectations that you set with the parts that did work. Right. And say, look, it does this part here great. And you can see the value. But by the way, watch when I do this. This is a mess.

Chris Smith [00:56:11]:

We need a lot to fix this part here. Yeah.

Andrew Skotzko [00:56:14]:

Managing those expectations and make it the.

Chris Smith [00:56:18]:

Full experience that you're looking for. Because I'm not saying we can't do it. It's just that it doesn't do it right now, because we've hardly spent any time on this compared to what it takes to get it to be good.

Andrew Skotzko [00:56:27]:

I love how you're framing this up in terms of managing the expectations, being thoughtful about the investment relative to the risks, relative to the value.

Chris Smith [00:56:35]:

Right.

Andrew Skotzko [00:56:35]:

Like, don't go do that investment until you know where the value actually is. Which is why making that level of a bet and doing that level of investment doesn't make a lot of sense in an early-stage product until you've demonstrated the value, where you know that, yeah, it's worth it to build this thing here. And I would double what you just said. Everything you just said, yes, and double, triple that, if you're in a context where you can't just pick up and customize one of these off-the-shelf models and you have to roll your own from scratch. This gets, like, five times harder.

Chris Smith [00:57:05]:

Absolutely. There is tons of tooling to help with that process. There's a whole field of MLOps that is really maturing right now. It doesn't change the fact that there's a huge amount of work. Don't let anyone in that space sell you on the idea that this means you don't have to do any work. Right. There are great tools, there's fantastic value that can be built from them, but they still leave a lot of work for you to do if you're building something from scratch, 100%.

Andrew Skotzko [00:57:30]:

So I actually want to ask you one of the hardest questions. I'm thinking about a product group I used to lead, a computer vision product we had to build from scratch. This is before GPT, et cetera, et cetera, in a different use case anyway, where we had to roll it all from zero, from nothing. And I remember, my God, what a lift that was, how much longer it took than we thought, and how much more it cost, and the whole nine. That's what I was speaking to a minute ago when I said, yeah, by the way, it's five times harder if you can't pick up something off the shelf. But here's a question that came up again and again and again, and honestly we never found a great way to address it, frankly, to stakeholders who were nontechnical. The question that would come up again and again when we were rolling our own, but I think this probably applies if you're customizing somebody else's, is: when's it good enough? When's it ready?

Chris Smith [00:58:25]:

Right.

Andrew Skotzko [00:58:25]:

And I don't know that we need to get into precision-recall curves and that whole nine, but how do you approach that question? Because that was the bane of so many people on my team's existence, and we never quite cracked it.

Chris Smith [00:58:39]:

Absolutely. And I think the most important part about that is there's a mathematical answer to that problem, and then there is the business's answer to that problem. And they're related, but they're really not the same. So it is very easy to have a scenario where you have absolutely achieved your statistical goals and you have a product that is terrible and doesn't have a good place in the market, isn't compelling. And specifically, maybe you've done everything to shape the product right, but the bottom line is the model isn't performing well enough for the business case. Right. You thought you only needed this level of precision and recall, or you thought you needed only this error rate, but actually, it turns out when you look at the business reality that it's different. Right.
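
To make that gap between the statistical answer and the business answer concrete, here is a minimal Python sketch. The confusion-matrix counts and the target_precision / target_recall values are invented for illustration, not numbers from the episode; the point is only that clearing the current goalposts is the mathematical answer, not the business one.

```python
# Minimal sketch: evaluating a classifier against a *guessed* business target.
# All counts and thresholds below are hypothetical.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard precision/recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Counts from an imaginary evaluation set.
tp, fp, fn = 420, 60, 95

# The "goalposts" the team guessed the business needs. These are the part
# that often turns out to be wrong once real users see the product.
target_precision = 0.85
target_recall = 0.80

precision, recall = precision_recall(tp, fp, fn)
meets_statistical_goal = precision >= target_precision and recall >= target_recall

print(f"precision={precision:.2f}, recall={recall:.2f}, "
      f"clears current goalposts: {meets_statistical_goal}")
# Clearing the goalposts is the mathematical answer; whether that error rate
# is tolerable in the actual workflow is the business answer.
```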

Chris Smith [00:59:30]:

And that is always a source of frustration for the folks on the data science side, because it feels to them like you're basically moving the goalposts. Right. They want to say, hey, you told me that if I jumped this, if I cleared this bar, it'd be good enough, right? It'd be great. We'd ship it and everyone would be happy, and there'd be champagne and all this stuff. And now I jumped over it, I cleared it, no problem. And now you're saying, no, actually, let's move the bar up, like, twice as high, and now you need to get over that one. And it really feels rough.

Chris Smith [01:00:04]:

And I think, in fairness, it's not really moving the goalposts, it's discovering where the goalposts were. Right now you're making a guess as to where the goalposts are, and part of what your early process should be about is trying to identify where that goalpost actually is. Right. You'll do that by essentially making some educated guesses and then conducting experiments that help you determine: is that actually true? Right. And that is actually where a lot of your R&D costs end up really going.

Chris Smith [01:00:34]:

Like, there's the cost of building a model, and you think, oh, the cost of building the model, that's really expensive. People told me building a model from scratch is really expensive. It's like, yeah, but the harder part is actually figuring out where the goalposts need to be for your model. Once you know where those goalposts are, it allows you to be so much more focused in your development of the model that your time to build it starts to get a little bit more in line with what you'd imagine it might be as someone who's familiar with the space. But the hard part for anybody, even a data scientist who's got tons of experience building these models, is that they still don't know the problem domain. So they don't know how hard it is to figure out where those goalposts ought to be. Right.

Chris Smith [01:01:17]:

So even though they have tremendous expertise and they can produce results, they don't really have a sense of how much work it's going to take to build a model that gets to this level of capability. Since they don't really know if that's where the real goalpost is, they can't forecast how long it's going to take for your product to get out the door. And you probably can't either, because it's a new space. You're doing something new that you've never done before. So you have to conduct experiments and figure out where that's going to be. And you have to be prepared for the fact that some of those experiments are going to give you results that, for lack of a better word, would be described as failure. Right. And you have to plan knowing that that's going to happen, and that at that point you're going to make an adjustment. Right.

Chris Smith [01:01:57]:

I think that's the part that people struggle with, because it requires acknowledging the unknown. Right. And also it's known to be costly. So it feels like a really hard thing to manage, to derisk, essentially. Right? Like, how do I derisk that? It's like, well, you're going to have to spend some money to derisk it, is what it amounts to. And time. You're going to spend money and time to derisk it, and it's not going to be free.

Andrew Skotzko [01:02:20]:

I'd love your reaction, for my own learning and also for anybody listening. Two of the ways that I found helpful to deal with this problem, because it's a little bit of a chicken and egg problem: you're like, well, what's good enough? And they're like, well, what's possible?

Chris Smith [01:02:32]:

You're like, well, I don't know.

Andrew Skotzko [01:02:33]:

You've got to tell me what's good enough before I can figure out what's possible. So it can kind of go round and round and round. What I often did was just frame it up like: look, we know it's going to change. That's fine, but just plant a flag for now so we have something to target, and then we can go figure out if that's even possible, and then figure out, okay, is it possible? Yes or...

Chris Smith [01:02:51]:

Yes.

Andrew Skotzko [01:02:51]:

No. If yes, is it actually good enough? If it's not, we'll figure out what to do next. So that was sort of tactic one, was just like, hey, just accept that it's going to change and plant the flag and you just got to go anyway. You have to start somewhere.

Chris Smith [01:03:04]:

You absolutely have to plan for that to change. This is a reality. Even if you think you've done all the experiments right and you know exactly where the goalposts ought to be, you're going to find out you're wrong. And that's just a reality of technology.

Andrew Skotzko [01:03:14]:

And the second one, and this is when I was asking you earlier about how to think about derisking the feasibility and doing prototypes and this sort of thing, which is really what I was speaking to, was rolling your own model, where you're building a model and you're exploring all these paths that you don't know what's going to work. One tactic that we did because we kept running into this problem where it's like, oh, I think it's going to work. And then three sprints, four sprints, five sprints later, you're like, okay, how's it going? It just wasn't going anywhere. And so one thing we ended up doing that was helpful was saying, all right, we accept that it's uncertain. You cannot tell me in advance how long it's going to take. I accept that is just the reality. So instead, we asked a different question. We said, all right, how long am I willing to invest before I say we got to bail out and try something else?

Chris Smith [01:04:02]:

No, essentially what you did was define an experiment. Give that experiment a budget and time, right? And then you said, at the end of it, we'll be able to make a decision. And that decision is going to be: do we want to put more time and budget into this, or do we not? Right. That's essentially the challenging situation that you're trying to address. And that really is a derisking process. Right. Because, yes, you're going to guarantee you're burning through this amount of money and this amount of time, or at least almost certainly going to get there. Maybe you'll have success before you get there, but probably not.

Chris Smith [01:04:32]:

Right. And you arrive at that point and it's like, okay, well, we have derisked it in the sense that we have a better idea than we did before of how close we are to our goal, and we have a better idea of how much more we're going to need to invest to get a better idea. Right. Those are usually the two outputs of that experiment: you start to go, well, okay, I know I need to do this much to get an even better idea of what the risk is. And also, I have a better idea of what the risk is.
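
As a rough sketch of that "give the experiment a budget and time, then decide" shape, here's a small illustrative Python example. The ExperimentBudget fields, the pretend progress curve, and the stopping rule are all hypothetical, not from the episode; the point is only the stop-loss structure: spend up to a cap, then make an explicit continue-or-kill decision with whatever was learned.

```python
# Illustrative sketch of a time-boxed, budget-capped model experiment.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class ExperimentBudget:
    max_weeks: int        # time stop-loss
    target_metric: float  # the current guess at where the goalpost is

def run_iteration(week: int) -> float:
    """Stand-in for one sprint of model work; returns the metric achieved."""
    return 0.60 + 0.05 * week  # pretend progress curve

def run_experiment(budget: ExperimentBudget) -> dict:
    history = []
    for week in range(1, budget.max_weeks + 1):
        metric = run_iteration(week)
        history.append(metric)
        if metric >= budget.target_metric:
            return {"decision": "continue investing", "weeks_used": week, "history": history}
    # Hit the stop-loss: the bar wasn't cleared, but we leave with a better
    # estimate of how far away it is and what it would cost to close the gap.
    return {"decision": "reassess or kill", "weeks_used": budget.max_weeks, "history": history}

print(run_experiment(ExperimentBudget(max_weeks=8, target_metric=0.90)))
```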

Andrew Skotzko [01:05:02]:

Yeah. Hopefully your experiment just yields straight-up results you can evaluate in the context of what you're trying to do. But then it's almost like putting a stop-loss order on it, where you're like, okay, hey, if we hit two months and we have no idea, we're just going to stop this and try something else. Which reminds me, and this will be the last one I'll share here, of a very early morning breakfast you and I had years ago in the middle of that product, when I called you, panicked, and was like, I need advice.

Chris Smith [01:05:31]:

Yes.

Andrew Skotzko [01:05:32]:

And you graciously got breakfast with me, like, the next morning, and you framed it in a way that I've never forgotten. And again, the context is we were rolling our own model from scratch with a ton of proprietary data, it's a story for another time, but it was this idea that, look, what you're really doing is kind of a time-bounded optimization problem. You have this amount of time, given your budget and how many people you have on your team. And then it became a question of, all right, given what I just said about putting stop losses on this, how many shots on goal can you get in the time remaining? And then what can you do to accelerate that cycle time? How can you make it faster, easier, cheaper to take more shots on goal, knowing that we have no idea what's actually going to work?

Chris Smith [01:06:17]:

Absolutely. No, that's very wise.

Andrew Skotzko [01:06:19]:

You saved my sanity.

Chris Smith [01:06:21]:

Yeah. And that is always the important problem. Our data science team has developed this process that I really like of setting a relatively short sprint for these iterations, where we essentially take a shot and see what happens. Right. But then having basically a week after that shot has been taken to analyze the outcomes, think about what the next phase of it is, and really write out a thoughtful report, effectively a white paper, on what was done the previous sprint. And I think where a lot of teams get into trouble, specifically with these machine learning models, is that in the anxiety of trying to get something out to market quickly, they skip that reflection and analysis phase. Right.

Chris Smith [01:07:13]:

And they just go, okay, this wasn't on the mark, let's go, next step. Right? They don't pause to reflect on: what did we learn? What should we be doing? Let's really analyze the data that we're seeing. Not just go, it failed or it succeeded, but actually, what did we learn from all of this? Because you've invested a huge amount of time and energy. You've got some of your best people usually working on it, highly skilled individuals. You've got a huge amount of domain examination that's being done. It's like, okay, well, you've done all that work. The payoff isn't a yes or no, right? The payoff is a much more extensive understanding, and you need to give yourself the room to explore that before you start investing again.

Chris Smith [01:07:57]:

Right. Why would you keep investing if all you're going to get is this yes or no signal, right? It's a lot more than that.

Andrew Skotzko [01:08:04]:

To use the language that I learned from Tom Chi, one of the original kind of OG discovery coaches who used to run a bunch of stuff at Google X: you've got to close the learning loop, right? You have to have that last phase to extract the insights so that you have something to take action on. And the way we operationalized that in the case I was just describing, after that breakfast, was we approached it almost like a research group at a university might, right, where we had people independently going off and trying stuff for a cycle. And then they'd bring back their results and their analysis, and we'd swarm on it as a team and rip it apart mentally and debate it and generate new ideas to try, and then repeat, repeat, repeat, all in this sort of time-bounded optimization context. And to tie that together for any of my fellow product nerds who are up to date with the latest product frameworks: what I basically did was what has now been published in the fantastic book Evidence Guided by Itamar Gilad. Itamar basically talks about this cyclical approach of using an ICE framework for ranking ideas. Then you go and test them, you get new evidence, update your confidence scores, flush it back through the cycle, and rinse and repeat. It's basically what we did, and he's just formalized it really well.

Andrew Skotzko [01:09:17]:

So if anyone's looking for a way to operationalize this and you need a framework, check out his book, Evidence Guided. I've done it. It works.
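
For a rough flavor of that loop in code, here's a minimal sketch of ICE-style ranking (Impact, Confidence, Ease), assuming the common convention of multiplying the three scores; the idea names and numbers are made up, and Gilad's book describes the actual method in depth.

```python
# Minimal sketch of an ICE-style ranking loop: score ideas, test the top one,
# fold the new evidence back in as an updated confidence score, and re-rank.
# Idea names and scores are invented for illustration.

ideas = [
    {"name": "auto-tagging",   "impact": 8, "confidence": 4, "ease": 6},
    {"name": "smart-search",   "impact": 6, "confidence": 7, "ease": 5},
    {"name": "anomaly-alerts", "impact": 9, "confidence": 2, "ease": 3},
]

def ice_score(idea: dict) -> float:
    return idea["impact"] * idea["confidence"] * idea["ease"]

def rank(candidates: list[dict]) -> list[dict]:
    return sorted(candidates, key=ice_score, reverse=True)

# One turn of the loop: run an experiment on the top-ranked idea,
# then update its confidence with what the evidence showed.
top = rank(ideas)[0]
print("testing:", top["name"], "ICE score:", ice_score(top))

top["confidence"] = 3  # pretend the experiment came back weaker than hoped

for idea in rank(ideas):
    print(f'{idea["name"]:15} ICE={ice_score(idea)}')
```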

Chris Smith [01:09:25]:

And I want to highlight something you talked about there, about the process, that I think is really important too, which is the notion that everybody comes together to look at the results. So, yes, one person, or whoever the team was that was leading that investigation, they produce the results, right? But then everybody comes and looks at it and thinks about it and says, okay, where do we go next? What have we learned from this? You want all the different domains of knowledge and application on the team. You want to bring them all to it, because there are often insights you're going to get from all the different parts of the team that are going to help you make the next research breakthrough. And you need that feedback. Too many teams, I find, are too insular. They've got the researchers who are working on the research process, and they present the paper and they all look at it and read it, and they're going like this. And it's like, yeah, but have you brought in the product marketing people who are going to have to take this thing out to the market later? Have you got their feedback? They might tell you, wait, I get it, you guys are seeing this one deficiency. This isn't a problem, though.

Chris Smith [01:10:32]:

We can manage this problem on how we package the product, right? Doesn't even matter. Don't fix this. This is fine. Fix this other thing, by the way, which I have no idea how I'm going to solve on the product marketing thing unless you solve it on the machine learning side. Please focus on that part of it. And you're like, oh, I didn't even know that was really a big deal. Okay, great. You need that full feedback cycle where a whole team looks at it.

Chris Smith [01:10:54]:

The way I would phrase it is there's always this challenge, and I remember I've talked to you about this several times, of the data scientist who has the domain knowledge of how to use the statistics, and then the other group of people who understand the problem domain, the actual place where you're trying to apply it. And it's really hard to teach the data scientists all about the problem domain, and it's really hard to teach the people who know the problem domain all about the data science. But if they work together, they can leverage each other's knowledge and get to a much quicker result than if they're working independently. You want to bring all of those parties in. I can't emphasize that enough, because I've seen it done wrong too often.

Andrew Skotzko [01:11:38]:

Yeah, silos just don't work, so let's not do that.

Chris Smith [01:11:44]:

Well, in fairness, they are good for when you're doing the actual initial investigation of trying to bring out those results that you present to the team. Right. Let's leave someone be and let them go off and explore. But when you're evaluating them, when you're evaluating the results and trying to understand it and analyze it, this is a team exercise.

Andrew Skotzko [01:12:01]:

That's a good time to swarm.

Chris Smith [01:12:02]:

Yeah, for sure.

Andrew Skotzko [01:12:03]:

And I only wish that Itamar's book, Evidence Guided, had existed at the time, because, damn it, I basically reinvented a worse version of his framework. His is better thought out than the one we rolled ourselves, so just go do that. It works better. All right, well, Chris, this has been so fantastic. Thank you for being here, for sharing all of your wisdom and your experience, and helping us think about this better, because we know this is going to be a big deal in all of our worlds going forward. So first and foremost, thank you so much.

Andrew Skotzko [01:12:36]:

And I just wanted to first ask, you know, how can listeners be helpful to you? And where can folks find you online if they want to follow up or if you want to point them anywhere?

Chris Smith [01:12:47]:

Sure. You can certainly find me online. Let's see: I'm @xcbsmith on the platform formerly known as Twitter. I'm also on Bluesky as @xcbsmith, and I'm on Facebook. You can email me at cbsmith@gmail.com.

Andrew Skotzko [01:13:06]:

What do you want to leave folks with just to wrap things up today?

Chris Smith [01:13:10]:

That there is a tremendous amount of unknowns on the table, and that comes with opportunities and failures. But it's also where the biggest successes come from. So you've just got to lean into all that fear and trepidation that everyone has and see that chaos as an opportunity. That's the reality. The opportunity comes from the chaos, so there are going to be some chaotic outcomes. That's just the way it is.

Andrew Skotzko [01:13:39]:

Beautiful.

Chris Smith [01:13:39]:

All right. Thank you so much. It was really great to talk to you.