Kent talks with Dax Raad about building OpenCode in a crowded coding-agent market: why dev tools are still a consumer-style product, how fast shipping can make good products feel worse, and what "product skill" actually looks like when agents remove friction from implementation.
They dig into onboarding, progressive disclosure, listening across many user requests for the real pattern, and why slowing down can be the right move—even when competitors ship faster.
Dax has spent years building tools developers actually use; on OpenCode he's thinking hard about product process while the space moves at breakneck speed. This episode is a practical look at product deterioration (not just code rot), bottom-up adoption for dev tools, and how coding agents change who decides what gets built—without replacing the need for taste, restraint, and clarity about what problem you're solving.
You'll hear concrete examples from OpenCode's terminal UI and onboarding, parallels to Kent's Epic Workshop app, and a grounded take on inference pricing, hype, and when "ship messy and fix later" does and doesn't hold up.
Transcript
Kent C. Dodds (00:01)
Hey, what's up everybody? I'm Kent C. Dodds and with me is my buddy Dax. How are you doing, Dax? Super, it's good to chat with you. So in this season of the Chats with Kent podcast, we are talking about product engineering and becoming a product engineer. So Dax, you have some really awesome takes on that you post actively on X
Dax (00:07)
Good, how are you?
Kent C. Dodds (00:26)
that frankly are a large reason why I decided that this is the right direction for us to go as software developers: developing some product sense. And so to load up our context window on all of this, I guess (bad joke), I think it would be useful for people to get to know a little bit about your background, what you're actively working on, and why this matters so much to you, why you care about this.
Dax (00:41)
Yeah, so like I said, my name is Dax. I've been building products and companies for most of my career. The past year or so I've been working on a project called OpenCode. It is a coding agent that spans from your terminal to a web app to a desktop app. It's quite a lot of product surface that we cover, also in a space that is moving quickly, with lots of competitors and different
directions that we could go in. So the idea of the product process is definitely top of mind for us.
Kent C. Dodds (01:29)
Yeah, you have some of the most picky customers, which are developers, and you're working in probably the hardest part with developers, and that is developer workflow. It's really, really difficult to get people to try a new workflow and to figure out what the right workflow is. And there are certainly people using OpenCode who have a different idea of how that workflow should be or what workflow works for them.
Dax (01:33)
Right.
Mm-hmm.
Kent C. Dodds (01:58)
How does that complicate what you're trying to do?
Dax (02:03)
Yeah, I think that is difficult, but I would say the flip side is we've been in the dev tool space for a while, and for me personally it's been five or six years of trying to build stuff that developers use. The thing that we've learned the most is that it's really a consumer business. You kind of have to look at it as though you're building a consumer product.
Even though your monetization might be B2B, like you eventually want a business to pay you because their developers are using your tool, you don't really sell in that direction. You can, of course; any business can go top down. But generally, to build something wide-reaching, it kind of has to be bottom up. So it's very much like a consumer app, the same mindset as: why would someone download, I don't know, the X app for the first time, or why would someone ever download Instagram for the first time?
How do you get them to the place where they get the app and they're excited by it? How do you get them there quickly? So even though, yes, it is developers, and they're quirky and they have certain expectations, at the end of the day the first thing to get right is that initial experience, which is not very developer-specific.
Kent C. Dodds (03:23)
Yeah, I was actually gonna point that out if you didn't. So for a B2B sort of app, that initial experience is not necessarily the most important thing. Your goal is to sell it to the boss, and so your focus and your attention is kind of in a different place. But if it's a consumer app, which I think is an interesting call-out, that onboarding experience has just got to be delightful. A really, really great experience
using it for the first time and playing with it. And then that way, hopefully, going bottom up, you get developers at the company using it, and they talk to their boss to get their boss to pay for it. And then the boss is like, why don't we just buy a team license or whatever. Yeah, that makes a lot of sense.
So Dax, I actually wanted to ask you about this post specifically. Actually, when I read this post was the moment where I was like, okay, yeah, I really need to talk to Dax, and I should just make a whole podcast season. So here's the impetus for this season, which you posted three hours before we recorded this. It says: we're doing a lot of introspection on our product process. I really think you should be too. The default place for our new coding agent abilities to go
Dax (04:31)
Mm.
Kent C. Dodds (04:38)
is to work on the wrong things. Products are going from good to bad faster than ever. I don't know that there are many people who could disagree with this after using software in the last little while. And honestly, I think this has kind of been the prevailing attitude since Facebook's fail fast and stuff like that. We're just, you know, ship it fast, you find out the problems, and then you iterate. And everybody has a different level of
tolerance for how much they can ship to customers that actually isn't ready or isn't working. What's going into your introspection of your product process, and what level of speed versus stability are you targeting over at OpenCode?
Dax (05:17)
Yeah.
Yeah, I think there's this thing of products self-deteriorating. There's of course the code base side: ignoring product features, your code can deteriorate. That's definitely a real thing. But in this post I was more talking about the product experience deteriorating. There are so many different dynamics going on. If I look at my own career, I never felt like I was working on my product skill. I always really just felt like I was working on getting better at programming, getting better at these very specific technical things that you can point to and call a skill,
kind of list on your resume. You know, we know that product is a skill, and we maybe think of it as design or as specific technical things under it. But there is this real invisible skill called product that you can get better at over time. And I think I personally have maybe underestimated my own journey through that, because I mostly have seen myself as getting better technically.
But I also think I've been kind of secretly getting better at product as well, which really only happens if you're involved in the whole loop, where you are building something, dealing with the feedback from users, them yelling at you, being able to fully close the loop on your own. If you are in that situation, you probably are naturally getting better at product. I think a lot of people have not been in that situation historically. I think a lot of organizations split these roles up, arguably for good reason. I think
most engineers that are working probably have product managers that they're working with, and historically they've kind of explicitly been told, here is what we're going to work on. And the engineer's job was to figure out how difficult that would be, push back if it's too difficult, figure out that side of things. I think it's easy to feel like the product manager is not doing much in that process, because you're just kind of seeing tasks, and they feel more like a project manager if anything. But with coding agents,
Kent C. Dodds (07:19)
Mm-hmm.
Dax (07:22)
some people, some engineers, are now feeling like the technical side is something they're not thinking about as much. I mean, I have my own opinions on whether that's how you should be seeing things or not. But granted, people do see it that way, which means engineers are now naturally going up this chain into deciding, or having ideas on, what should go into the product: having feature ideas, kind of pushing things out, just because maybe they can push it out faster than before. And I think in this process,
software engineers kind of need to introspect here and realize that maybe we're not that good at product. Because a lot of what we're seeing is products getting so many features so quickly these days. Great products that I like, I go back to them six months later, and it's not that the features don't make sense or shouldn't be in there, but they're just kind of not arranged correctly. So the product starts to feel bad.
Kent C. Dodds (07:58)
Hmm.
Dax (08:19)
And I think in this analysis, you kind of realize, okay, there maybe is a real product skill. Then the question is, what is that skill? You know, what do I need to arrange correctly? How do we need to approach this differently? So yeah, I think we're basically trying to codify that more, as in: this is a real thing, and we shouldn't underestimate how rare it is that somebody has the experience to do it well.
Kent C. Dodds (08:45)
Yeah, that makes complete sense. I've got a lot of different threads I'd like to pull on there. So one is, figuring out what that skill actually is and how to develop it is very difficult. I've been teaching people how to develop technical skills for over a decade, and I've run into a variety of difficult challenges in doing that.
Dax (08:50)
Mm-hmm.
Kent C. Dodds (09:15)
This new skill that I'm trying to figure out how to teach is definitely the most amorphous, like smoke that I'm trying to figure out how to put a shape to. And so that's part of the reason why I'm doing this podcast. And I hate to throw products under the bus, but if you can give us an example of a particular product, and what about that product
Dax (09:22)
Mm-hmm.
Kent C. Dodds (09:45)
is just kind of evolving without much intention? Some example might help us codify or really solidify what this means.
Dax (09:57)
Yeah, honestly, I would just probably talk about our own products, because that's what's top of mind. I think what's been shocking is, you know, OpenCode has only been around for eight months, and I'm shocked at how much of this type of deterioration has happened. I'm not saying anything severe, unrecoverable, or drastic, and maybe people don't even notice. But in terms of where my bar is, I look at our product and I'm like, wow, we've really built a lot of features that we never
Kent C. Dodds (10:00)
Perfect.
Dax (10:28)
I think the way to think about this is you can kind of look at your product in terms of almost a life cycle. Think about it from a new user's perspective: a new user comes and tries out your product for the first time. That's the onboarding process. In that process, you need to identify... you know, for all of us working on products, there are probably a lot of features in it we love. We love all the different things that we built, all the different capabilities it has.
But for onboarding, we kind of have to pretend like none of it matters except for one thing. What is the main single concept you want people to get when they onboard? If you think about a product like ChatGPT, it's just an input box. Type in anything you want in the world, get back a response. That's the main thing. If you think about OpenCode or Claude Code, it's very similar: be able to prompt, have a code change made, and have that impress you. All the other features do not matter for onboarding.
So in that onboarding phase, it's very easy to leak more stuff in over time. The first version of a product is innately minimal, so maybe the onboarding is very clear. But as you add more features, you have to make sure you don't mess up the onboarding. It's very easy to do that. And I think we've gone through a few cycles where we've accidentally broken our onboarding, and we had to go back, try it as a new user, and realize we overcomplicated it without really even thinking.
Kent C. Dodds (11:36)
Hmm.
Dax (11:47)
The next phase of the life cycle is someone is using your app and they're starting to get more advanced. What is the order in which they discover the other features? If it's just a flat list, where you have your onboarding feature and then 99 other features that are all at the same level and kind of thrown at the user, that's another place where things start to feel bloated. And if you look at the OpenCode TUI at least, I think we're doing a decent job on onboarding. It's very simple. You press Ctrl+P, and that opens up the command dialog.
Kent C. Dodds (12:07)
Hmm.
Dax (12:17)
And this has basically all the other features that you can do in OpenCode. There's no hierarchy. There's no natural discovery for these things. You're just kind of thrown into a junk drawer, more or less, because we get excited about a feature, we get excited about how quickly we can implement it, and then we don't do the hard part of understanding that there needs to be a natural, intentionally designed pathway to get to this feature,
Kent C. Dodds (12:41)
Hmm
Dax (12:43)
as opposed to just throwing it in this kind of catch-all drawer. And that catch-all drawer is fine for some stuff; sometimes you just need to ship things to unblock people. But if you're not going back and doing this process of reorganizing it, that's when products start to feel bloated and people don't even discover these great features that you have. Yeah.
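The discovery hierarchy Dax describes can be sketched in code. This is a hypothetical TypeScript illustration, not OpenCode's actual implementation; the feature names and the `revealAfter` field are invented for the example. The idea is that a command palette surfaces features only after their prerequisites have been used, instead of dumping everything into one flat list.

```typescript
type Feature = {
  id: string;
  // Features the user must have used before this one is surfaced.
  revealAfter: string[];
};

const features: Feature[] = [
  { id: "prompt", revealAfter: [] },           // the single onboarding concept
  { id: "sessions", revealAfter: ["prompt"] }, // surfaced once prompting is familiar
  { id: "agents", revealAfter: ["sessions"] }, // advanced: shown even later
];

// Given the set of features the user has actually used, return what the
// palette should show, instead of throwing all 99 features at them at once.
function visibleFeatures(used: Set<string>): string[] {
  return features
    .filter((f) => f.revealAfter.every((dep) => used.has(dep)))
    .map((f) => f.id);
}
```

A brand-new user would see only `prompt`; after using it, `sessions` appears, and so on. The catch-all drawer still exists, but each feature also gets a designed path to discovery.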
Kent C. Dodds (13:01)
Yeah,
yeah, that's a great point. So I had a similar problem with my Epic Workshop app, where there are a lot of features in it, and especially with agents, it just became really easy to add features. Like, frankly, it had a pretty sizable list of issues that people had opened asking for features. So it's not like they were bad features. Yeah, yeah, exactly. So I think that there probably are some products where they're just throwing
Dax (13:19)
Yeah.
Yeah, they're not made up. Yeah.
Kent C. Dodds (13:29)
anything that they can into the product to see what sticks, you know, because it's so easy. So we're not just talking about product restraint and knowing what not to include. It's also, like you said, organization. So my onboarding was a problem because I had all these features and I just wanted to shove them in people's faces. And so I said, okay, we'll make a tutorial. And that tutorial is still part of the onboarding experience. But what's even better than that is, like,
Dax (13:31)
Yeah.
Kent C. Dodds (13:55)
progressive disclosure, I guess, to use the phrase.
Dax (13:58)
Yeah,
that's a key phrase for us; we're always talking about progressive disclosure. I think with a lot of these things, you can think of it almost like eating healthy. We all know how to eat healthy. We roughly know the foods we need to eat. It's not like we're too stupid to understand that. That doesn't necessarily mean we're always eating healthy, because there's a discipline or exertion portion of actually doing the thing. Progressive disclosure is a great example of that.
We might roughly know what it is, but remembering to always do it every single day, I think that's where the difficulty comes in. Yeah.
Kent C. Dodds (14:34)
Yeah, yeah. I had one feature that I was really, really excited about. And so I made it pop up a notification the first time somebody opened up the app, saying, hey, have you seen this feature? And in some cases that's useful, and you can have people subscribe to be notified of your new product features, the people who are really, really into your product. But I think being noisy just
because you're excited about a feature is probably not the right direction to go. Instead, put yourself in the mind of the user: when would this feature be useful to them? When you can detect the situation where they're going to run into that problem, that's when the notification shows up, or just some sort of pointer in the right direction.
I think it just requires a lot of user empathy.
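The context-triggered hint Kent describes could look something like the sketch below. Everything here is hypothetical and invented for illustration (the event names, the `Hint` shape, the `parallel-sessions` feature); it is not from the Epic Workshop app or any real product. The point is that a hint fires only when recent user behavior suggests the feature would actually help, rather than on first launch.

```typescript
type Hint = {
  featureId: string;
  // Predicate over recent user events: "has the user just hit the
  // problem this feature solves?"
  trigger: (recentEvents: string[]) => boolean;
};

const hints: Hint[] = [
  {
    featureId: "parallel-sessions",
    // Only suggest parallel sessions once the user keeps switching tasks,
    // i.e. once they are living the problem this feature solves.
    trigger: (events) =>
      events.filter((e) => e === "switched-task").length >= 3,
  },
];

// Return the feature to hint at, or null if nothing is relevant right now.
function hintToShow(recentEvents: string[]): string | null {
  const hit = hints.find((h) => h.trigger(recentEvents));
  return hit ? hit.featureId : null;
}
```

A fresh user sees no notifications at all; someone who has switched tasks three times gets pointed at the relevant feature at the moment it matters.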
Dax (15:32)
Mm-hmm.
Yeah. Yeah. I mean, it's very difficult, because you have to be kind of brutal about the things that you're excited about. It's really hard to get into the mindset of a brand new user who, on some level, probably doesn't even care about what you're doing. And maybe they're going to take a little bit of time out of their day to give you a shot, and that's such an extreme bar. You have to really be, yeah, kind of brutal about
Kent C. Dodds (15:52)
Yes.
Dax (16:04)
it. I love all my features, but really, if they don't use 99 of them and only ever see this one, that's kind of all that matters. But again, it's hard to remember to do that every single day.
Kent C. Dodds (16:15)
Yeah, yeah. So we've talked about a couple of different aspects of what makes a good product actually good. We talked a little bit about product restraint; maybe we can talk about that a little bit more too. This progressive disclosure and onboarding, I think, matters a lot. But yeah, let's talk a little bit more about product restraint. How do you know when somebody, like...
Dax (16:28)
Mm-hmm.
Yeah.
Kent C. Dodds (16:41)
You get an insane number of issues from developers with all sorts of different ideas. You probably look at some of them and you're like, I don't even know how you could think that's a good idea. But some of them are probably pretty subtle, or it's difficult to know how to make them fit. How do you determine where a user request fits in all of those?
Dax (16:44)
Mm-hmm.
Yeah.
Yeah,
yeah, I think this is another one of those kind of invisible skills you get better at over time. You know, talking to users: we all know we should be talking to users, and that's treated as a binary thing, like you're either not doing it, which is bad, or you are doing it, which is good. But just because you are doing it doesn't mean you're doing it well. It actually is a skill in itself, where you learn how to talk to users and how to come away from those conversations with something useful. Roughly speaking, I would say there's a kind of
gradient. Sometimes the user comes to you with an idea and it's spot on. It's just something that you're missing. Maybe you were even aware of it; you're just kind of waiting for a real data point to push you to actually do it, which I think is a good way to look at things. It's very easy to make up features that nobody wants. And if
Kent C. Dodds (17:54)
Maybe an
example of that is what James just did yesterday or the day before with the offline syncing of Daytona boxes and stuff. Maybe you can explain that a little bit.
Dax (17:57)
Mm.
Mm-hmm.
Yeah,
so that's a good one to look at. That's part of the next level. Of course, there are the simple ones, where someone tells you what they want, it makes sense, it's fine. There's a next level, in which a bunch of people are all explaining a problem they have or complaining about something, and it may be easy to hyper-focus on the locality of each problem. Maybe there are ten different people all saying different things, but if you take a step back, they maybe are all saying the same thing.
So in OpenCode, we're right now working on adding support for all kinds of different sandboxing approaches, so you can run OpenCode agents from a central place and spawn agents that are running in the cloud, or running in a Git worktree, or running in a Docker container. We had such a wide range of issues that eventually led us to building this. There were companies talking about security issues; you know, they don't
really want these things running directly on their laptops. There were developers trying to do a lot of different things at once, saying that they want Git worktrees. There were a bunch of people trying to use cloud sandboxes for massively scaling out work. These all seem like different things, but we realized these are all just special cases of the workspace concept. And if we implement that once, it's going to solve everyone's issue.
Kent C. Dodds (19:23)
Hmm.
Dax (19:23)
So this is another key thing in terms of product. We could have gone and added Git worktree support. We could have gone and added Docker support. We could have gone and done each one of these things. But we had a little bit of patience and said, it seems like there's something similar across these. And we waited until we had the clarity to understand that there's a single thing we can build that implicitly solves this and a bunch of other stuff too. Yeah, that's another part of the word restraint. It's sometimes just a timing thing: wait till you have clarity on
the problem before you actually build it, because you'll save yourself time that way, and the end result tends to be better. Sometimes it's painful, and I think it's worth calling out: the environment right now is crazy, because everyone is shipping so fast. There's new stuff coming out every day, constantly. And if you're in a competitive space, which we are, every one of our competitors has shipped native Git worktree support. They're all so far down this road, and it's been painful to be slow to that.
So yeah, just not getting pulled into that until you do have clarity, I think, is also hard.
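The "one abstraction instead of several features" move Dax describes could be sketched like this. The `Workspace` interface and everything in it are hypothetical, not OpenCode's real API; the point is that worktrees, Docker containers, and cloud sandboxes become interchangeable backends behind a single concept, so the code that spawns agents never special-cases any of them.

```typescript
// One shared concept instead of three separate features.
interface Workspace {
  kind: "worktree" | "docker" | "cloud";
  // Every backend answers the same small set of questions.
  isIsolated(): boolean;
}

class WorktreeWorkspace implements Workspace {
  kind = "worktree" as const;
  isIsolated() {
    return false; // a worktree shares the host filesystem
  }
}

class DockerWorkspace implements Workspace {
  kind = "docker" as const;
  isIsolated() {
    return true; // runs off the host, addressing the security complaints
  }
}

// Call sites only ever deal with Workspace, so adding a cloud backend
// later (for the massive-scale-out use case) touches no existing code.
function describe(ws: Workspace): string {
  return `${ws.kind}:${ws.isIsolated() ? "isolated" : "shared"}`;
}
```

Each of the three original feature requests becomes a new `Workspace` implementation rather than a new feature with its own UI, config, and maintenance burden.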
Kent C. Dodds (20:26)
I think that's one of the most important skills for building products: knowing when it's time to abstract, or to build the solution. I have six sisters and like four sisters-in-law, and every single one of them has asked me to build a software solution for something, or to help them with a software solution for something.
We're a very entrepreneurial family, I guess. But what's interesting is, every single time they come to me with this, I'm always asking all the questions that most of us know: did you validate that this is actually a problem, and stuff like that. And most of the time I just say, you've got to do it the hard way first. Because if you don't, you're building a solution to a problem you don't have clarity on. And I think maybe that's
Dax (20:56)
Yeah.
Mm-hmm, yeah.
Kent C. Dodds (21:22)
related but different from product restraint: problem clarity. How do you develop that clarity on a problem? And slowing down is maybe one of the ways that we do this, like doing it the hard way first. Okay, maybe we don't have a really nice workflow for this particular thing just yet, but is it impossible? Is the workaround really that bad? And how long can we defer solving this with a built-in solution,
so that we can develop some clarity on what the actual problem is and build the right solution?
Dax (21:56)
Yeah, I think the phrase slowing down is very scary to everyone right now, but I think it actually might be the right mindset to have, just because we're in an era where it feels like everyone should be speeding up as much as possible. Because that's what it feels like right now: if we don't think about it at all, we're all just naturally going to keep going faster and faster and faster. So it feels like the work is to convince ourselves to slow down a bit, and, like you said, do things the hard way, or tolerate stuff not being
tolerate not shipping a solution, because you're still exploring the space for longer. It feels very hard to do that right now, because everything out there is telling you that you're going to get left behind if you don't, or your competitors are going to out-ship you. And yeah, even for us, like I said, we've been thinking about this for the past week: what is our product process? It really feels like we're all just very overstimulated. And what's interesting is,
our company is pretty conservative with AI use, I would say, especially for a company working on a coding agent. If you've seen us talk about it, we're pretty reserved with how much we just let agents go do their thing. Even then, even with us being conservative, we're still looking back and realizing that even we weren't able to escape this. We're kind of falling into the same patterns. Yeah, it's just hard right now.
Kent C. Dodds (23:13)
Hmm.
Yeah, that's actually really fascinating. Being able to, yeah, like you said, slow down in an environment where everybody's telling you to speed up. I think in January, February for myself, I jumped on the Cursor cloud agents train, and I tore through so many issues in my backlog on so many projects. And honestly, it was great.
Dax (23:42)
Mm.
Kent C. Dodds (23:44)
I think what made that work was that those issues existed for a reason, so there were already some product decisions and thought behind them. Once I got through those issues, though, I started thinking of all these other things that I could do. And actually, if people check out my personal website, I have a blog on there. It's a pretty significant piece of software; it's not your typical developer portfolio site. But I just started having all these other ideas.
And if you look at the last six blog posts on there, each blog post is a new feature that I added to the website and how I did it with the agents and everything. But every one of them has stuff that I missed at the bottom, like, I didn't do a very good job of this, and that caused the whole site to fall over, or whatever. And it's all just because of this urgency that you feel when the implementation happens so fast.
Dax (24:21)
Nice.
Hmm.
Hmm.
Kent C. Dodds (24:39)
You're just like,
I'm the bottleneck now and I've just got to move fast, and it ends up resulting in a suboptimal solution. I've actually got two blog posts with follow-up posts where I was like, I did it this way yesterday and it was not the right approach, so I'm doing it this way today. If I had just slowed down a little bit, then maybe I could have done it right the first time.
Dax (25:02)
Yeah. Yeah. I mean, it's so weird. I've never thought about my own psychology as much as I do lately, because it feels like that's actually the bottleneck: our brains are just being pulled in this direction without us really consciously deciding that that's how we want to operate or how we want to work. Yeah, it feels almost like this thing came out and we're all roughly addicted to it, so we're trying to figure out what a healthy relationship to it should be.
Yeah, I think another form of this restraint thing is, sometimes there's an idea for a feature, and the feature maybe even makes sense and solves a real problem. But another dimension to look at it through is: do you want to maintain this feature forever? It's not something that you just do once; it's something that kind of continuously pops up. It also might sit in a spot where it interacts with every other feature in your app, so anytime you're trying to make a change somewhere else, this feature keeps kind of
Kent C. Dodds (25:49)
Hmm.
Dax (26:00)
popping up and being a little ugly and getting in your way. And the right call there might be: we're not gonna have this in our app, and it makes our app less good, but for the long-term health of the product it's the right call; it's just gonna get in our way too much. And this is the type of thing where, when you're in this mode of, I've got all these ideas, I'm gonna ship, the agent's never gonna push back and be like, hey Dax, slow down, this feature maybe isn't a good idea, you know. It's just gonna match your momentum.
Kent C. Dodds (26:03)
Hmm.
Dax (26:30)
So I just find myself doing that way less and I shouldn't, I should be doing that more if anything, because there's more volume going through the whole system.
Kent C. Dodds (26:37)
Yeah, so I agree with you, though I wonder... you mentioned the agent's never going to ask you to slow down. Because you're building a coding agent, I wonder if it wouldn't make sense for you to, maybe one day (let's get some clarity on the problem first), eventually have the agent have some product sense. Do you think that's possible, where the agent can be like, let me get a holistic picture of this product, and,
hmm, you could probably solve that problem with just a slight change here. Or: we've got all these problems, let's consolidate them into one simple solution. Do you think that agents will ever be able to do that?
Dax (27:16)
Yeah.
Yeah, I think this is more of an area-of-interest type of thing. For me, I am mostly interested in thinking of these agents as extending my own abilities. If I'm trying to go in a direction, this is letting me go in that direction way more aggressively or way, way, way faster, which means if I am pushing in a bad direction, my mistakes are going to be magnified. I think that's just kind of
Kent C. Dodds (27:36)
Mmm.
Dax (27:46)
With the area I'm interested in, that's just kind of the fact. I'm trying to build stuff that augments, which means there's a lot of potential for using it incorrectly or pushing in a bad direction. And there are people trying to build tools with AI where it's more about, I don't know how to describe it, more about having another person next to you that's also trying to do something.
So what you're describing is more like that: I am not the expert at product; maybe this thing can help me develop good product sense, stop shipping things that have bad product sense. I think one of the reasons I'm not super interested in that is I'm a little bit skeptical of the LLM's capabilities there. It's kind of like trying to get them to write a good joke; it falls in the same category to me. I have never seen them be able to write a good joke.
Kent C. Dodds (28:35)
Dax (28:40)
Similarly, I don't trust them in that way. I don't trust them in terms of judgment or taste or sense, and trying to have them be good at product, or help me be good at product, falls in that category. I'm not saying it's impossible; it's just the way I feel about them. I'm not drawn to try to get them to do that. Yeah.
Kent C. Dodds (28:57)
Yeah,
yeah, that makes sense. I think that for myself, I will sometimes have a couple of options for how to build a particular feature or fix a particular bug or something. And I'll take those to the LLM and just make sure that I'm not missing something.
Dax (29:07)
Mm-hmm.
missing edge cases, edge
cases and stuff. Yeah, I do that all the time. They're great partners in that way. ⁓ The other thing I love about them is, so I'm talking about all these areas that let us push in the wrong direction too fast. An area that I think is a complete win is ⁓ debugging. Like giving it a heap snapshot and being like, ⁓ tell me why there's a memory leak, or what's taking up so much memory. Stuff like that used to just take so much time, to the point where we wouldn't do it.
And there's actually no downside to just letting an agent go crazy with debugging an issue and helping you figure out what it is. So yeah, I'm definitely not in this camp of saying they're like negative in any way. ⁓ But again, it's like, yeah, it feels like just something we're addicted to and we got to develop some good habits around. Yeah.
Kent C. Dodds (29:59)
Yeah,
it's interesting. I asked you, like, do you think the agents will ever be able to do something? Because another thing you posted recently caught my attention, and actually a lot of people's attention. This one got 14,000 likes, ⁓ which I just noticed just now, but you're saying, please shut the F up, I don't care. And you're quote posting. Yes, it has 14,000 likes, almost a thousand reposts. It's crazy. So here's the post you quoted,
Dax (30:19)
That got 14,000 likes? I didn't realize that. ⁓
Kent C. Dodds (30:28)
from somebody who's saying the 24, I can't not laugh at this post. The 24 to 29 year old engineer will soon become the most valuable asset in technology. Yeah, I suppose when all people who are older are dead, yeah, anyway, yeah, pre-AI principles, post AI speed is an undefeatable combo. I'll bet you this post got like a crazy amount of engagement. They're probably gonna make bank on that post.
Dax (30:58)
Well, if you look at the reply, he's promoting his product, of course. ⁓
Kent C. Dodds (30:58)
But
Amazing,
amazing. And it just so happens I'm 26. It's, yeah, youth. ⁓
Dax (31:07)
Yeah, it's,
yeah, so I quote posted it and was really frustrated. You know, it's not even the specific thing that he said, it's just, again, the environment we're in. It feels like everyone is trying to transport their brain into what they think a year from now is going to be, and trying to operate day to day as though it's already there. Like, I understand where that comes from. But I think
the reality is that every single day we just see prediction, prediction, prediction about what the future is going to be like. And it gets pretty tiring. And the truth is, to be able to actually be someone that can really imagine the future in that way, you have to get so many, so many assumptions right. And doing that involves really understanding yourself well, really understanding your biases, really understanding your insecurities, because most people, when they make predictions, myself included, we kind of happen to predict a future that's very
positive for our personal traits. It's like, people like me are going to be winning in the future, you know; that kind of tends to be the natural place our predictions go. So predictions there are kind of useless. And this is a great example of that, where it's just like, what did he say, 24 to 29? Why not 18 to 22? Why not that range, if the premise is that people who are young can adapt to these tools better? So yeah, we're just kind of inundated by
Kent C. Dodds (32:06)
Yup.
Dax (32:32)
Here's what things are gonna look like. So you need to start preparing today for it. But nobody really knows.
Kent C. Dodds (32:34)
Hmm.
Yeah, yeah, completely agree. And what's interesting is some people will get paralyzed by the idea that we're going to hit AGI and then nothing matters. Like, you know, none of your skills matter and stuff. That might be true, that might not. And if it is true, maybe it's next year, maybe it's in 20 years. Who knows? I really don't know. But the fact is that you can't plan for that. And so I think that the version of you who
Dax (32:46)
Yeah.
Kent C. Dodds (33:04)
⁓ plans on that not being the future and instead just plans on continuing to develop something like product sense, which actually was really valuable 50 years ago and will continue to be valuable. I think that's the sort of skill that you can develop now. And unless we do hit AGI, which who knows, it's gonna be still a really valuable skill to have.
Dax (33:16)
Yeah.
Yeah. And I think that now is actually a crazy good time to even isolate what that skill is, like we've been talking about. We're kind of all thinking about what even is this thing. ⁓ I think that's happening because of the amount of noise, and also the amount of degrees of freedom we have. ⁓ Another thing that I've been realizing is that we were accidentally doing good product work because of our inability to ship features as fast.
Kent C. Dodds (33:57)
Hmm.
Dax (33:57)
So
in the past, when someone came to me being like, hey, we should build this, in my head I'm like, that's going to take me so long. It's going to take me like four weeks. I don't want to do that work. I'm going to come with every single argument as to why we shouldn't ship this feature. And they'd have to really convince me, and we'd have to really get on the same page about what it should be, why it's valuable. And we'd kind of look at it from every single angle. We weren't doing that on purpose to get ourselves to ship good features. It was kind of the natural
way that we ended up interacting with each other. Yeah, yeah, yeah, it was like, even though these are negative traits, like laziness, I think it was creating a good equilibrium for things. ⁓ And because we weren't intentionally doing this, now that, not really laziness, we're still equally as lazy, just now that the pain of being asked to do something is a lot lower, we're not having those
Kent C. Dodds (34:28)
Yeah, there's an element of laziness, but also just, yeah.
Dax (34:53)
discussions as much, and we're being less disciplined about that. So yeah, now is a really great time to really think about those things and how you operate, because if you don't, I don't think you end up in a good place. I think a lot of these things, on one hand, it is so great that anyone on a team can kind of touch any part of the code base pretty easily now, because they can ask the agent questions, they can, you know, have it do the work.
But man, there's so much to shipping something good, more than just the literal implementation. Like having the context of why are things the way they are right now. What's the history? What's the direction we're trying to go in? Is this new thing we're putting in aligned with both of those things? ⁓ Yeah, there are just so many little details that have been invisible forever, I think. Yeah.
Kent C. Dodds (35:44)
Yeah, yeah, and
these, I think, are really durable skills that will be useful even when AI costs what it actually costs.
Dax (35:49)
Mm-hmm.
Right. Yeah. I mean, like you said, it was useful 50 years ago. And I think this is what, uh, it's kind of how I was describing. Like, I feel like I was accidentally getting better at this because it is just the most fundamentally true thing. Like you're going to naturally experience these things no matter what time it is, no matter what technology you're using, no matter what programming language you're using. Um, you will just get better at this because it is like a real thing. And yeah, so I don't imagine that this goes away at all.
with AI, especially not this and the proof is we're all thinking about this more than we ever have.
Kent C. Dodds (36:26)
Yeah.
Yeah, yeah,
I think that's good. And a while ago, you posted a couple of paragraphs basically talking about all the big AI labs and how they're ⁓ subsidizing the costs for us. You said, effectively, that ⁓ they'd rather die than lose this race, ⁓ which is really interesting in that, like,
it is existential for them, or at least they feel that way. ⁓ And as a result, the costs for us are arbitrarily low, or lower than they actually would be. At least that's my interpretation of the state of the world. So ⁓ my thought is, eventually, the VCs are gonna run out of money to invest and subsidize all of this for us,
and we will need to be a little bit more cognizant of our tokens. And so the skills that you develop now, as a product engineer, will be a lot more valuable in that case too, because you're not wasting tokens on bad ideas.
Dax (37:40)
Yeah, I think my view on this is a little bit detailed. ⁓ It's hard to explain it all, just because we're so deep into the inference space, so we know there's a lot of stuff going on, and it is really kind of crazy and chaotic. ⁓ On one hand, the build-outs of data centers to provide more inference, ⁓ I think these are things that will help costs go down. I think there's definitely a healthy margin. Right now, when we look at the prices for tokens,
there's a really healthy margin in there, I would say like 50 to 60% on average. So the floor is, you know, maybe they can get half as cheap as they are now. ⁓ And there are cheaper models, way cheaper than the frontier models, that are nearly as good. So there are great forces pushing the prices down. I think that ⁓ will be there, and I think the inference build-outs will help that. On the flip side, there are also crazy distortions in the market that make it really hard to even understand what's rational right now. ⁓
Kent C. Dodds (38:17)
Hmm.
Dax (38:38)
So a lot of these $200 a month subscription plans, those definitely, I think, go well beyond the floor of where inference costs can go. And they distort the market in so many different ways. One, obviously, pricing. But two, even the way we're using these tools. Like, yeah, sometimes I prompt a one-line change where I know exactly what the change is, I know what file it's in, and I just prompt it anyway. Is that really behavior that is going to continue? Sometimes I'll, like,
Kent C. Dodds (39:00)
Yeah
Dax (39:08)
have it randomly do stuff with no real intention in mind. Is that something I would do if there were more locked-down budgets? Also, people's expectations are kind of crazy now. Given those plans exist, they expect to be able to freely use these tools to a crazy degree for every possible problem, which also, you know, flows into what their products look and feel like, because maybe they're outsourcing too much of their thinking. And it's possible that it's entirely a price thing. Maybe if it was a $500 a month plan, whatever it is,
Kent C. Dodds (39:16)
Hmm.
Dax (39:37)
⁓ people would outsource less of their thinking and kind of land in a better place. So it's so hard to tell what's normal right now because of these distortions. ⁓ I think these all get figured out eventually. ⁓ I don't necessarily think it's a case of things being overly subsidized now and prices going dramatically up later. I think it's more that, ⁓ yeah, it's just noisy. We can't really know what the current behavior is and whether it's going to last. Yeah.
Kent C. Dodds (39:40)
Hmm.
Yeah,
I can definitely see a future where ⁓ the prices pretty much stay where they are, and they make it sustainable by bringing costs down through data center build-outs and stuff. That makes sense. You said something earlier that made me think you probably have an interesting ⁓ response to this take. One thing that I've been thinking about is ⁓ not quite vibe coding, but like building out a feature or fixing a bug or whatever,
Dax (40:15)
Mm-hmm. Yeah.
Kent C. Dodds (40:36)
building out some software where you can see that the code isn't quite perfect, or certainly not the way that you would do it, ⁓ but shipping that anyway because you know that in six months, if you need to go back into that code base, the model will be good enough that it can just go fix the mess that was created over there. How do you balance that? ⁓ I mean, like we were saying, we wanna slow down and stuff, but once you get to the...
Okay, we've decided we want this product, we need to ship this thing, like that actual implementation. How do you balance the speed of implementation with ⁓ making sure the code is quality, ⁓ at least sufficient for that feature to be reliable and recognizing that in the future, if it really mattered, you could come back and clean it up later.
Dax (41:07)
Mm.
Yeah, so I try to look at this in terms of a world without AI and kind of how I'd be operating, because a lot of times there's actually nothing new that's going on here. ⁓ Even pre-AI, we'll know we have to build a feature and we'll realize that there's three different ways to do it. One way is extremely hacky but fast, another way is kind of in between, and a third way is like pretty slow but is the right thing to do long term.
Kent C. Dodds (41:50)
Mm-hmm.
Dax (41:52)
Now, we would never say always do one, always do two, or always do three. Depending on the situation, depending on the context of everything that's going on, you make a judgment call on which one you ship. But you know for sure what two and three are, and maybe you understand, hey, I know how one can become two later. Uh, it's not totally blind. It's not that you were unaware that you've made this decision. So I kind of look at it the same way now. Um,
Sometimes there's a feature that is important, and I know it's in a place with a pretty low blast radius, so even if it's wrong, I know the boundaries that exist. I know the worst case scenario for this isn't that bad. ⁓ I generally don't use the justification that there's gonna be a better model in the future. ⁓ The models have gotten better, but I think for me, fundamentally,
they haven't changed. I mean, I think they're better at following instructions, or better at getting to what I'm trying to get them to do. But in terms of raw intelligence, it doesn't feel like they're necessarily now writing code the way I need them to. A great example: the other day I had something very basic I needed to do. It was maybe a single function implementation, more or less. And it was doing three loops instead of one. And typically, you know, I would say,
okay, maybe this is fine, but it was actually in a place where performance mattered, and the length of the n could get pretty big. So I was like, okay, for the hell of it, let me use this as a chance to see if the model can get to the right answer. And, just because I work on a coding agent, it's good for me to use these as experiments to try to learn more about them. ⁓ I spent 20 minutes prompting, trying different models. None of them could get to the efficient solution. I had to do it manually. So
yeah, these things are a lot better, but no, they're not better in this way where they can solve some of these issues for you, for sure. So yeah, judgment calls still matter, developing good judgment still matters. Make decisions specifically for each thing. There's not this global rule of we always do it the hard way, or we always accept the mess.
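The three-loops-versus-one situation he describes might look something like this sketch (the stats function and data are made up, not his actual code). Both versions compute the same answer, but the fused loop walks the array once instead of three times, which starts to matter as n grows.

```javascript
// Hypothetical example: the same stats computed in three passes vs. one.

// Three separate passes over the data (the shape a model often produces):
function statsThreePass(items) {
  const total = items.reduce((sum, it) => sum + it.price * it.qty, 0);
  const maxPrice = items.reduce((m, it) => Math.max(m, it.price), -Infinity);
  const inStock = items.filter((it) => it.qty > 0).length;
  return { total, maxPrice, inStock };
}

// One pass doing all three, for when performance actually matters:
function statsOnePass(items) {
  let total = 0, maxPrice = -Infinity, inStock = 0;
  for (const it of items) {
    total += it.price * it.qty;       // running revenue
    if (it.price > maxPrice) maxPrice = it.price;
    if (it.qty > 0) inStock++;        // count items with stock
  }
  return { total, maxPrice, inStock };
}

const items = [{ price: 2, qty: 3 }, { price: 5, qty: 0 }, { price: 1, qty: 4 }];
console.log(statsOnePass(items)); // { total: 10, maxPrice: 5, inStock: 2 }
```

The point isn't that the three-pass version is wrong; it's that choosing when the single pass is worth insisting on is exactly the judgment call he says still falls to the human.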
Kent C. Dodds (44:03)
So what I'm hearing, Dax, is that, yes, everything's changed, but actually nothing's changed.
Dax (44:08)
Yeah,
it feels that way. I think every day I can feel myself relying on, I mean, I have like 15 years of experience at this point, I feel myself relying on those 15 years of experience. You know, I don't know if you remember this, but you remember when we all knew the CSS float hacks to get stuff to go to the right place, left and right? And then Flexbox came out, and there was actually no reason at all for us to ever remember those things, and they kind of faded out of our memory.
That's kind of what a game changer looks like, where your previous stuff just disappears from your brain, and it's almost like it never mattered that you knew it. I don't feel that way with this stuff. I feel like I'm relying on the pains of my past, lessons learned from my past. ⁓ So in that way, yeah, it does feel like nothing has changed.
Kent C. Dodds (44:45)
Yeah.
That's a really good point. Like, when I'm working with an agent, ⁓ whether I am setting up something to work in the background, like in the cloud or something, or working more synchronously with the agent, in either case, as I'm developing the plan or steering the agent, I am referencing my past experience constantly. Like, we don't want to go that direction because of this problem, you know. Or I'm saying, make sure to use this abstraction instead of that one, or whatever. That past experience actually matters
Dax (45:16)
Mm-hmm.
Kent C. Dodds (45:25)
really quite a lot. And I don't know, maybe this is cope and I'm just hoping that it matters still, but I feel like if I didn't do that steering, the end result would be a lot worse.
Dax (45:32)
you
Yeah, no, I think, and there's so much debate about this, but when I see ourselves day to day, even with how conservative we are, we're still seeing the negatives. And they're not made up. I'm not just looking at the code being like, it's ugly, and being annoyed at that for no reason. Stuff that impacts end users, stuff that makes our team work less well together, stuff that makes it less fun to work on things day to day. It's definitely all really there. I have a question for you. You know, we've
all gone through different eras of our lives where we've exerted certain levels of effort. Do you feel like you're exerting less these days? Like, how hard do you think you're working compared to different points of your life?
Kent C. Dodds (46:21)
Yeah, that's a good question. ⁓ Well, so there's the addictive side of all of this, where you're just like, just one more prompt, one more prompt. Like, especially if you've got multiple agents working on different projects, or in different areas of the projects, it's just, okay, prompt this one. Oh, that one's done, now prompt this one. And so it's so easy to stay up really late. I did that the other night. So like, there's that side of things, but I don't think that that's necessarily, ⁓
Dax (46:26)
Mm-hmm. Right, right.
Mm-hmm.
Mm-hmm.
Kent C. Dodds (46:51)
Yeah, I think it's just exciting and it's kind of fun to build. ⁓ I think what you're getting at, though, is that having an implementer freeing me up from having to do the implementation, that should make me work less, right? Like, I no longer have to do that part. ⁓ And, I don't know, I do feel like I'm shipping more than I used to, that's for sure. ⁓ But I...
Dax (46:54)
Yeah.
Yeah. Yeah.
Kent C. Dodds (47:19)
do feel like I'm working as hard or even harder than I did like five years ago. Yeah, that's interesting.
Dax (47:22)
Yeah.
So I would almost use that as proof that there is a lot of value that exists in your head. Maybe we can't articulate exactly what it is, but yeah, I'm the same. I'm exerting more than I ever have. I'm more confused than I've ever been. I'm trying harder than I ever have. And to be fair, I am working on the biggest thing I've ever worked on, so that's a confounder. But I think it's generally true of everyone. I think everyone's just
exerting way more than normal. And that's real human exertion. That's not stuff we're doing for no reason, and AI has pulled that out of us. So I think it's a weird perspective to have, that these things are taking over, or replacing us, or solving problems for us, when across the board, we're all just kind of straining all the time. ⁓ Anyway, we're doing that for a reason. There's real value that
Kent C. Dodds (48:01)
Hmm.
Dax (48:21)
this work needs from us.
Kent C. Dodds (48:23)
Yeah, that's a really interesting perspective. I don't know that ⁓ we could say the same for everybody. ⁓ You definitely hear a lot of talk like, I've got 16 agents running at once. And then they never say what they have those agents doing, or what value is being created in their lives or our lives. ⁓ And so that actually is one of the reasons I started ⁓
Dax (48:43)
Yeah.
Kent C. Dodds (48:49)
posting on my blog about the specific things that I'm shipping, because I wanted to make sure, like, yes, I do have agents running all the time, but I'm having them create valuable work. So there are definitely people who are shipping more, and I do think there are people who already have product sense who are shipping better and everything. ⁓ But yeah, there's definitely a class of people who are kind of in the other camp. And I wonder if you have any tips for that class of people, like how to... ⁓
Dax (48:52)
Mm-hmm.
Yeah.
Kent C. Dodds (49:20)
Yes, work harder, but also deliver better.
Dax (49:23)
Yeah, we've been talking about the positive side of all this, where there are people in situations where they're highly motivated, they're able to be self-directed, they're working on things that they personally think matter. And yeah, that category of people are now probably working harder than ever. But there's the negative side of this, where if you're in an environment where you have no reason to be super motivated, you just kind of need to do your tasks and move on. I think that zone is probably
extremely negatively impacted by AI, because already, you know, it's not like those types of environments produce great work. But now the temptation to just prompt the agent, hit enter, and move on is higher than ever, because you usually aren't motivated. ⁓ And I think my thinking there is more for people managing people in those environments: it's probably really bad now. I've talked about this before, there are some people on our team who have,
Kent C. Dodds (50:16)
Hmm.
Dax (50:21)
prior to joining us, described their previous companies this way, where it just felt like 80% of the team were just throwing slop over the fence. And then the other 20% of the team, the same 20% that was always trying to get everyone to care a bit more, they're now dealing with 10x the volume of pain they normally deal with. And so they're getting burnt out and quitting. So, I don't know,
the negative side of this stuff feels like it's probably pretty bad in companies and environments that just don't have great motivating conditions. So yeah, I don't know where things are going to go. I think we're so early that maybe the feedback loop from all this hasn't actually hit, ⁓ although I am starting to see early signs of it. Like, again, ourselves: we've been trying to use these things as much as possible, ⁓ and we're kind of now looking at it being like, we have to be a lot more intentional.
Kent C. Dodds (51:03)
Hmm.
Dax (51:14)
I'm seeing some other companies also now pull back, because they're getting ⁓ such a volume of security issues and stuff thrown over the fence that they're like, okay, we have to define an official policy around this. What do we actually want from these tools? ⁓ Yeah. So I think it'll get corrected. Like I said at the beginning, we're all figuring out what our relationship to this thing should be. I don't think anyone's really figured it out, and we're in this weird phase of it, but ⁓ yeah, hopefully at some point
Kent C. Dodds (51:23)
Hmm.
Dax (51:44)
we kind of end up with a good balance for everything.
Kent C. Dodds (51:47)
Yeah. And hopefully people can stop predicting the future. ⁓ Dax, I do have one last question for you, where I'm going to ask for ⁓ a specific action that somebody could take to improve their product sense or user empathy or whatever. But before you answer that, ⁓ is there anything else that you think people really should know about developing really good products, anything we didn't talk about today?
Dax (51:50)
Yeah.
Yeah, the only thing I would say is ⁓ you first have to convince yourself that it really matters. ⁓ I think there's another force in the world with kind of the same energy, around ⁓ AGI is going to be here, it's going to solve everything. ⁓ There are just so many things out there trying to convince you that this stuff doesn't matter. It doesn't matter to be good at product. You can ship something; look at all these companies that are successful with crappy products. ⁓
There are so many ways to rationalize or excuse not being good at this. So yeah, first you have to commit yourself to the idea that it does matter. I personally think it matters. All the people I admire, who I think are doing great, it matters to them, and you can see that in their work. I don't think our stuff is good enough yet; I'm trying to get to their level. So yeah, I think for any of this to work, you have to really believe it, because it's gonna be challenged all the time.
Kent C. Dodds (53:10)
That's a really awesome perspective. So the action for you all to take is ⁓ to figure out how to make yourself believe it matters, if you don't already.
Well, thanks so much, Dax. I really appreciate your time today and ⁓ just, ⁓ yeah, just love the things that you're sharing on X and giving us a peek into your experience in trying to figure out how to build a good product when the agent is just trying to drag you into building everything.
Dax (53:42)
Yeah, yeah, no, thanks for having me. It was good. Yeah, like I said, this has been top of mind for us, so it's nice to be able to clarify my thoughts by talking to you. So yeah, it was good.
Kent C. Dodds (53:51)
Cool. Thanks everybody. We'll see you all in the next one.
Dax (53:54)
See ya.



