We were preparing yet another estimate. It was a greenfield product, nothing too fancy. We used our default approach, grouped work into epic stories, and used historical data to produce a coarse-grained time estimate per epic.
We ended up with a 12-20 week bracket. Unsurprisingly, it landed close to our initial shot from the hip.
The whole process took maybe half an hour. Maybe less.
Then we fell into an AI rabbit hole. Should our estimate be lower since we will generate a good part of the code?
AI in Early-Stage Product Development
We could discuss the actual impact of AI tools in established and complex code bases. Even more interestingly, we could discuss our perceptions.
Yet, for a greenfield project and not-very-complex functionality, generating swaths of code should be easy enough.
After all, it seems that’s what cutting-edge startups do these days (emphasis mine):
The ability for AI to subsidize an otherwise heavy workload has allowed these companies to build with fewer people. For about a quarter of the current YC startups, 95% of their code was written by AI, Tan said.
Garry Tan is the CEO of Y Combinator, so most definitely a highly influential figure in the startup world. And probably quite knowledgeable of what YC startups do, let me add.
If that’s what the best do, we should follow suit, right? That’s why we got back to our initial estimate and tried to assess how much we can shave off of it, thanks to the technology.
It’s Not About Coding Speed
A lot of the early-stage work we do at Lunar Logic has already shifted to the new paradigm. The code is generated. Developers’ jobs have evolved. It’s code-review-heavy and typing-light. That is, unless you count prompting.
Yet, it’s possible to generate entire features, heck, entire apps with AI tools. So we should be faster, right? Right?
One good discussion later, we decided to stick with the original estimate nonetheless. The gist of it? It was never about coding pace.

Yes, you can generate a lot of code with a single prompt, and with enough preparation, you can make its quality decent. But AI is not doing the discovery part for you. It does not validate whether what you’re building works.
It won’t take care of the whole back-and-forth with the client whose vision is most definitely somewhat different from what they’re going to get. And even if they were able to scope their dream precisely, the First Rule of Product Development applies.

It’s a completely different experience to imagine a product and to actually interact with it. No wonder people change their minds once they roll up their sleeves and start using the thing.
The Core Cost of Product Development
After building (partially or entirely) some 200 software products at Lunar, we have enough reference points to see patterns. Here’s one.
What’s the number one reason for the increased effort needed to complete the work? Communication.
Communication and its quality.
- Insufficient clarity before starting a task triggers rework down the line.
- Waiting for feedback increases context switching and thus makes the team inefficient.
- Inadequate knowledge of the business context results in building the wrong thing.
- Lack of focus in communication is a direct waste of everyone’s time.
Should I go on? Because I totally could.
In practice, I’ve seen efforts where poor communication added as much as 100% to the workload. It came down to all the rework and inefficiencies triggered by a lack of clarity.
When such a thing happens, we might have been wrong about the actual number of features or the size of some of them, and it wouldn’t have mattered. At all. Any such mistake would be dwarfed many times over by the overhead of bad communication. And then some.
AI Does Nothing to the Quality of Communication
Before we move further, a disclaimer: I understand that there are many AI tools designed around human-to-human communication.

While they still have some catching up to do with regular technical conversations between developers, things like meeting summaries can be useful. Although I’d love to see usage data: how many of these summaries are read? Like, ever.
The communication I write about is a different beast, though. It’s not notetaking. It’s attentive listening, creative friction, and collective intelligence. It’s experience cross-pollination.
With that, AI is of little to no use. And yet, this is the critical aspect of any effective software project.
What’s more, there’s little you can know about the quality of communication before the collaboration starts. Sure, you get early signs. But you only know what it really is once you start working together.
Start Small
One of the reasons why I’m a huge fan of starting collaboration with something small—like a couple of weeks kind of small—is that we learn what communication will look like.
It’s a small risk for our clients, too. After all, how much can you spend on a couple of people working for two weeks?
Once we’re past that initial rite of passage, we know how to treat any later estimates. Should we assume there’s going to be a significant communication tax? Or can we shave some time here and there because we’ll all be rowing in the same direction?
One of our most recent clients is a case in point. Throughout the early commitment, he actively managed stakeholders on his end to avoid adding new ideas to the initial scope. He helped us keep things simple and defer improvements until we got more feedback from actual use.
The result? Our estimate turned out to be wrong. We wrapped up the originally planned work when we were around 75% of the budget mark.
Communication quality (or lack thereof) can add a lot of work, but it can remove some, too. That’s why it’s the most underestimated factor in estimation (pun intended).
A post on estimation is always a chance to share our evergreen: no bullshit estimation cards. After a dozen years, I still hear how they get appreciated by teams.
If you like what you read and you’d like to keep track of new stuff, you can subscribe on the main page.
I’m also active on Bluesky and LinkedIn, with shorter updates.
I also run the Pre-Pre-Seed Substack, where I focus on early-stage product development (and, inevitably, AI).