Construction projects are estimated reasonably accurately every day. Probably because:
1. People have been doing it long enough that estimator is a job description.
2. That it is a job description means money is spent on estimation.
3. Money is spent on estimation because getting it wrong can cost money.
I think the problem with software estimation is that there is usually no culture of estimation, very little direct incentive to get it right, and no formal training regime for those responsible.
To put it another way, software does not have standard documents for estimating.
Risks, risks & risks... That's my #1 priority when communicating estimates.
Overall this is a nice short summary of the topic. The one thing I would add, which I found very helpful on larger projects, is communicating the risks & unknowns. I suggest listing them out at the start of the project & updating their status as you work on it.
I've worked on teams where it's done with a simple color (red, yellow, or green) indicating how confident we are in the task estimate, based on the risks/unknowns. This is the simplest way in my opinion (a rough sketch of the idea is below).
I also like Basecamp's Hill Charts - https://3.basecamp-help.com/article/412-hill-charts
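A minimal sketch of that kind of red/yellow/green confidence tracking, assuming a simple risk-count rule; the task names, numbers, and rule are all invented for illustration, not taken from the comment above:

    # Toy sketch of per-task estimate confidence driven by open risks/unknowns.
    # Task names, the risk-count rule, and all numbers are made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        estimate_days: float
        open_risks: list = field(default_factory=list)  # unresolved unknowns

        @property
        def confidence(self) -> str:
            # Rule of thumb: more open risks -> lower confidence.
            if not self.open_risks:
                return "green"
            return "yellow" if len(self.open_risks) == 1 else "red"

    tasks = [
        Task("payment integration", 5, ["third-party API limits unclear"]),
        Task("settings page", 2),
        Task("data migration", 8, ["legacy schema undocumented", "unknown data volume"]),
    ]

    for t in tasks:
        print(f"{t.name}: {t.estimate_days}d [{t.confidence}] risks: {t.open_risks}")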
In 25 years in industry, I have never seen estimates prove to be valuable or important; they are mostly used for dumb purposes and are generally a waste of time.
At this point I feel Kanban-style priorities and limiting work in progress are the only useful approach.
When some product person dumps a huge task in the priorities, break it down into smaller deliverables. No estimates are really needed.
My first job was at a small agency. I was kind of a young go-getter and was able to pump out bad (in hindsight) yet mostly functional code very quickly.
Sales people figured out they could come to me specifically for estimates because mine would be shorter than other developers' estimates and more likely to get the sale. I was young and dumb and didn't connect that often I wouldn't be the one developing the project. The other developers got angry with me after they caught on to why they were getting impossible timelines.
I started multiplying my personal estimates by three. The sales people were less than pleased and eventually started going elsewhere for their estimates as greener developers were hired.
Building software is often like building a house. When construction starts, progress appears to happen fast. You put up walls quickly, so it starts looking like a real house in a short time. Later, construction progress appears to slow down significantly as work shifts to detail work (wiring, plumbing, etc.) which doesn't change the appearance from the street much.
With software, the basic UI can take shape quickly. Some rudimentary functionality sometimes comes along quickly as well. Then all the detail work (error handling, logging, performance enhancements, etc.) makes progress appear to slow significantly.
My take is (and I somewhat agree with the article on this) that if you know the requirements, interfaces, and tasks precisely enough to give a reliable estimate, then the majority of the actual development has already been done.
But that's just not the point at which people typically want estimates; they want them much earlier.
You can only estimate a job you've done before.
Everything else is just guessing.
Software engineering is not uniform: agencies working on similar projects may have good estimates.
R&D and novel approaches - these usually take whatever time they need and then some.
ALL STORY POINTS ARE EQUAL BUT SOME STORY POINTS ARE MORE EQUAL THAN OTHERS
I skim-read the article; why is it "But None are useful" and not "And none are useful"?
The value is in the activity itself, like planning, rather than the final document.
This remains great reading for software estimation: https://www.researchgate.net/publication/247925262_Large_Lim...
I'm barely even interested in talking about the subject anymore - estimating software is only useful as an alternative viewpoint to help understand what you are building. Nobody accurately estimates complex software in any meaningful way. Even min/max/most-likely estimation isn't that useful, but it's OK to think about (a rough sketch of that arithmetic is below).
Hidden complexity and simplicity lurk everywhere. "3 month projects" turn into 3 day projects because of an unknown tool. Or a simple problem (think Fermat) turns out to be insanely complicated.
The core issue is that once you create a function, or module, or whatever, you never need to do it again. So you are always creating new things. If you are writing the same code again and again, you are bad. Now, for the past 10 years or so, that was less true as we wrangled with YAML configs and glued together pipelines, modules, and services. But I think AI is really good at that stuff, so I think we'll be back to always working on original things.
And that doesn't even take into consideration legacy codebases - good luck estimating changes to those ;)
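For context, the min/max/most-likely approach mentioned above is usually formalized as a three-point (PERT-style) estimate: a weighted average plus a rough spread. A minimal sketch, with invented numbers:

    # Three-point (PERT-style) estimate: weight the most-likely value heavily and
    # treat the min/max spread as a rough standard deviation. Numbers are invented.
    def three_point(optimistic: float, most_likely: float, pessimistic: float):
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    e, s = three_point(optimistic=3, most_likely=5, pessimistic=15)
    print(f"expected ~{e:.1f} days, +/- {s:.1f}")  # expected ~6.3 days, +/- 2.0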
If none are useful, then why does my manager keep asking me for them?
Software estimates also often fail to account for external factors. You may have a fairly good sense of how long some feature will take to get to code-complete. However, code reviews are often nobody's job in particular, so they happen whenever (and you end up with a Volunteer's Dilemma over completing them). Then CI, linters, etc. may have various failures. Some of these may be legitimate, others may be flakiness your team has decided to ignore, or even infrastructure issues that prevent them from running at all. There's enough X factor in external sources alone to make any estimate you provide total nonsense.
I mean, saying ALL estimations are wrong is an exaggeration.
As they say, "Even a broken clock is right twice a day"
This has been my experience. I don't disagree with the content of the article, really just the exaggerated title.
Every now and again we do get a 100% correct estimation. It doesn't happen often. In fact it's quite rare. But it's more than never.
I hate it when people quote Hofstadter's "law" in a literal sense. It was never meant to make any statement about timeline estimation; it was just an example of a recursively-defined law. Honestly I'm not even sure how many of the other quotes are also taken out of context.
In my experience, a substantial portion of poor estimates comes from the following sequence (a toy illustration of steps 3-5 follows below):
1. An initial estimate is made that is fairly accurate.
2. Someone in management says "that's too long, we've got to find a way to bring in that date"
3. The estimate is revised to assume the best possible execution time for each task, that resources will be available as soon as they are needed, and that everything is fully parallelizable.
4. The estimate now magically matches the target date.
5. Actual execution time is close to the original estimate.
People make bad estimates because (other) people don't want good estimates.
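A toy numerical illustration of steps 3-5, with all durations invented: the original estimate sums the likely durations, the "revised" estimate assumes best-case times and full parallelism so only the longest task counts, and actual execution then lands near the original number.

    # Invented task durations in days: (best_case, likely). The original estimate
    # sums the likely values; the pressured "revision" assumes best case everywhere
    # and full parallelism, so only the longest best-case task counts.
    tasks = {
        "backend API":    (4, 8),
        "frontend UI":    (3, 6),
        "data migration": (2, 7),
        "testing/fixes":  (3, 9),
    }

    original_estimate = sum(likely for _, likely in tasks.values())   # 30 days
    pressured_estimate = max(best for best, _ in tasks.values())      # 4 days
    print(f"original: {original_estimate}d, after 'revision': {pressured_estimate}d")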