I’ve been reading Darrell Norton’s blog posts on the second edition of Kent Beck’s “Extreme Programming Explained.” I disagree with Darrell’s and Kent’s preference for estimating backlog items in real time rather than in magnitude or in ideal time.
I’ve seen a number of Agile projects use real time, and the teams consistently ran into the same problems. First, I thought one of the cornerstones of Kent Beck’s first edition was that estimating software effort is notoriously difficult. Darrell points out that business people and customers appreciate seeing estimates in real time because it feels more transparent. This is true, but magnitude (points, for example) is used deliberately to strip away the illusion of accuracy that real-time story estimates create. At Danube, we’ve since moved to magnitude to take the customer’s focus off the illusion of “accurate backlog item estimates.”
Product Owners are disappointed, and trust erodes, when a product burndown chart’s trendline extrapolation doesn’t match the estimated effort remaining. In the example below, consider the team’s velocity of 71.5 “hours” per sprint. Would it surprise you to learn that this team had more than one member working full time, with three-week iterations? It didn’t surprise me, but it surprised our customer, who could not get past the fact that 71.5 hours did not match the 300+ hours per iteration they had purchased. So which is more confusing: an amorphous magnitude unit for backlog effort estimates, or “real time”? Velocity is valuable for estimating how much effort a team can handle in a given sprint and for extrapolating a completion date, and neither use requires a specific unit of effort.
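To illustrate the last point, here is a minimal sketch of the kind of extrapolation a burndown trendline performs. The function names and the starting date are purely hypothetical (not from ScrumWorks or any other tool); the numbers mirror the example above, with a velocity of 71.5 effort units per three-week sprint. Note that nothing in the math cares whether the unit is “hours” or abstract points, only that velocity and remaining effort share the same unit.

```python
import math
from datetime import date, timedelta

def sprints_remaining(effort_remaining: float, velocity: float) -> int:
    """Whole sprints needed to burn down the remaining effort."""
    return math.ceil(effort_remaining / velocity)

def projected_completion(start: date, effort_remaining: float,
                         velocity: float, sprint_days: int = 21) -> date:
    """Extrapolate a completion date from velocity, as a trendline does.

    The unit of effort never appears here; only the ratio of remaining
    effort to velocity matters.
    """
    sprints = sprints_remaining(effort_remaining, velocity)
    return start + timedelta(days=sprints * sprint_days)

# 300 units remaining at 71.5 units per sprint -> 5 whole sprints,
# whether the unit is "hours" or points.
print(sprints_remaining(300, 71.5))
print(projected_completion(date(2005, 1, 3), 300, 71.5))
```

Swapping “hours” for points changes nothing in the projection, which is why the amorphous unit costs the Product Owner no forecasting power while removing the temptation to compare velocity against hours purchased.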
BTW, this chart was created with the upcoming release of ScrumWorks 1.2.