Estimating the cost of software is, at best, an educated guess. Most practitioners pretend otherwise, yet despite all the new ideas and models, software is still costed much as it was 20 years ago. Because of the complexities involved, software cost estimation ultimately relies on the judgement and informed opinion of the team.
A typical software estimation process follows this procedure:
1. The software is deconstructed along functional boundaries, and the result is often a requirements document. Given the nature of human understanding, this document is usually a ‘best first guess’ at what the client had in mind.
2. Programmers estimate how long it would take to build each component.
3. A contingency factor is then applied. This contingency factor (also known as ‘transactional complexity’ or ‘risk assessment factor’) is a gross multiplier, often of up to +/- 150%.
The result is a crude software estimate that can vary by +/-50% from the final cost. This margin of error is so large that the figure is a good guess and nothing more. By comparison, in a real science we know the speed of light to an accuracy of plus or minus one part in 300 million.
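The three-step process above can be sketched in a few lines. The component names, day counts, and the +50% contingency multiplier below are purely illustrative, not taken from any real project:

```python
# Toy sketch of the estimation procedure: sum per-component guesses,
# then apply a gross contingency multiplier chosen by judgement.
component_estimates_days = {
    "authentication": 10,
    "reporting": 25,
    "data import": 15,
}

# Step 2: programmers' per-component guesses are summed.
base_estimate = sum(component_estimates_days.values())

# Step 3: a gross contingency multiplier is applied (here +50%).
contingency_factor = 1.5
final_estimate = base_estimate * contingency_factor

print(f"Base: {base_estimate} days, with contingency: {final_estimate} days")
```

Note that the contingency factor dominates the result, which is exactly why the final figure remains an educated guess rather than a measurement.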
There are many models for software estimation available and prevalent in the industry. Researchers have been working on formal estimation techniques since the 1960s. Early work was typically based on regression analysis or on mathematical models borrowed from other domains; work during the 1970s and 1980s derived models from the historical data of various software projects. Among the many estimation models, expert estimation, COCOMO, Function Point Analysis and function point derivatives such as Use Case Points and Object Points are the most commonly used. While Lines of Code (LOC) is the most commonly used size measure for 3GL programming and the estimation of procedural languages, IFPUG FPA, originally invented by Allan Albrecht at IBM, has been adopted by much of the industry as an alternative to LOC for sizing the development and enhancement of business applications. FPA provides a measure of functionality based on the end user's view of the application software. Some of the commonly used estimation techniques are as follows:
Lines of Code (LOC): A formal method to measure size by counting the number of lines of code. Source Lines of Code (SLOC) has two variants: physical SLOC and logical SLOC. Since the two measures can vary significantly, care must be taken when comparing results from different projects, and clear counting guidelines must be laid out for the organization.
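The physical/logical distinction can be illustrated with a deliberately naive counter. Real SLOC tools follow much stricter rules; here physical SLOC counts non-blank, non-comment lines, and logical SLOC crudely approximates statements by counting semicolons in a C-like snippet:

```python
# Naive sketch of physical vs. logical SLOC for a C-like language.
def physical_sloc(source: str) -> int:
    # Count non-blank lines that are not pure comments.
    lines = [line.strip() for line in source.splitlines()]
    return sum(1 for line in lines if line and not line.startswith("//"))

def logical_sloc(source: str) -> int:
    # Crude statement count: one statement per semicolon.
    return source.count(";")

c_snippet = """// add two numbers
int add(int a, int b) {
    int sum = a + b; return sum;
}
"""

print(physical_sloc(c_snippet), logical_sloc(c_snippet))
```

The same snippet yields 3 physical lines but only 2 logical statements, which is why mixing the two measures across projects produces misleading comparisons.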
IFPUG FPA: A formal method to measure the size of business applications. It introduces a complexity factor for size, defined as a function of external inputs, external outputs, external queries, external interface files and internal logical files.
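A minimal sketch of an unadjusted function point count, using the five IFPUG component types with the standard average-complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7). The component counts themselves are invented for illustration, and the value adjustment factor applied in a full count is omitted:

```python
# Unadjusted function points = sum over component types of
# (count of components * complexity weight). Average weights shown.
AVERAGE_WEIGHTS = {
    "external_input": 4,
    "external_output": 5,
    "external_query": 4,
    "internal_logical_file": 10,
    "external_interface_file": 7,
}

# Hypothetical counts from a requirements walkthrough.
counts = {
    "external_input": 6,
    "external_output": 4,
    "external_query": 3,
    "internal_logical_file": 2,
    "external_interface_file": 1,
}

unadjusted_fp = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)
print(unadjusted_fp)
```

In a full IFPUG count each component is individually rated low, average or high rather than taking a flat average weight.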
Mark II FPA: Proposed and developed by Charles Symons; useful for measuring the size of functionality in real-time systems where transactions have embedded data.
COSMIC Full Function Point (FFP): Proposed in 1999 and compliant with ISO 14143. Applicable for estimating business applications with data-rich processing, where complexity is determined by the capability to handle large chunks of data, and real-time applications, where functionality is expressed in terms of logic and algorithms.
Quick Function Point (QFP): Derived from FPA and uses expert judgement. Mostly useful for arriving at a ballpark estimate for budgetary and marketing purposes, or where a go/no-go decision is required during the project selection process.
Object Points: Best suited for estimating customizations. Based on a count of raw objects, the complexity of each object, and weighted points.
COCOMO 2.0: Based on COCOMO 81, which was developed by Barry Boehm. The model is motivated by software reuse, application generators, economies or diseconomies of scale, and process maturity, and helps estimate effort for sizes calculated in terms of SLOC, FPA, Mark II FP or any other method.
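To give a flavour of the COCOMO family, here is the basic COCOMO 81 effort equation, effort = a * (KLOC ^ b), with the published coefficients for its three project modes. COCOMO 2.0 layers scale factors and cost drivers on top of this shape; those are omitted here:

```python
# Basic COCOMO 81: effort in person-months from size in KLOC.
# (a, b) coefficients for the three classic project modes.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    a, b = MODES[mode]
    return a * (kloc ** b)  # person-months

# A 32 KLOC organic-mode project.
print(round(cocomo_effort(32, "organic"), 1))
```

The exponent b > 1 encodes the diseconomy of scale the model assumes: doubling the size more than doubles the effort.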
Predictive Object Points: Tuned towards estimation of object-oriented software projects. Calculated from weighted methods per class, the count of top-level classes, the average number of children, and the depth of inheritance.
Estimation by Analogy: The cost of a project is computed by comparing the project to a similar project in the same domain. The estimate is accurate only if similar project data is available.
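Estimation by analogy can be sketched as a nearest-neighbour lookup over past projects. The project records, the use of function points as the similarity measure, and the linear cost scaling are all illustrative assumptions:

```python
# Toy analogy-based estimate: find the most similar past project
# in the same domain (by function point size) and scale its actual
# cost linearly by relative size.
past_projects = [
    {"domain": "banking", "fp": 400, "cost_days": 820},
    {"domain": "banking", "fp": 650, "cost_days": 1400},
    {"domain": "retail",  "fp": 500, "cost_days": 900},
]

def estimate_by_analogy(domain: str, fp: int) -> int:
    candidates = [p for p in past_projects if p["domain"] == domain]
    closest = min(candidates, key=lambda p: abs(p["fp"] - fp))
    return round(closest["cost_days"] * fp / closest["fp"])

# A new 500 FP banking project is matched to the 400 FP analog.
print(estimate_by_analogy("banking", 500))
```

The quality of such an estimate depends entirely on how representative the historical data is, which is the technique's main limitation.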
Agile Estimation decoded:
What is a Story Point?
It is a subjective unit of estimation used by Agile teams to estimate User Stories.
What does a Story Point represent?
They represent the amount of effort required to implement a user story. Some agilists argue that it is a measure of complexity, but that is only true if the complexity or risk involved in implementing a user story translates into the effort involved in implementing it.
What is included within a Story Point estimate?
It includes the amount of effort required to get the story done. This should ideally include both the development and testing effort to implement a story in a production-like environment.
Why are Story Points better than estimating in hours or days?
Story point estimation is done using relative sizing: comparing one story with a sample set of previously sized stories. Relative sizing across stories tends to be much more accurate over a larger sample than estimating the effort of each individual story in isolation. Teams can also estimate much more quickly, without spending too much time nailing down the exact number of hours or days required to finish a user story.
How do we estimate in points?
The most common way is to categorize stories into 1, 2, 4, 8, 16 points and so on. Some teams prefer to use the Fibonacci series (1, 2, 3, 5, 8). Once the stories are ready, the team can start by sizing the first card it considers to be of a “smaller” complexity. For example, a team might assign the “Login user” story 2 points and then put 4 points on a “customer search” story, as it probably involves double the effort to implement compared with the “Login user” story.
This exercise continues until all stories have a story point attached to them.
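The relative-sizing exercise can be sketched as snapping each story's effort, judged relative to a reference story, onto the team's point scale. The story names and the relative-effort ratios below are hypothetical:

```python
# Toy relative sizing: scale each story's judged effort against a
# reference story, then snap to the nearest value on the point scale.
POINT_SCALE = [1, 2, 4, 8, 16]

def nearest_point(relative_effort: float) -> int:
    return min(POINT_SCALE, key=lambda p: abs(p - relative_effort))

# The team agreed "Login user" is a 2-pointer; other stories are
# judged as multiples of its effort.
reference_points = 2
effort_ratios = {
    "Customer search": 2.0,   # about double the reference
    "Password reset": 0.5,    # about half
    "Export report": 2.6,
}

sized = {name: nearest_point(reference_points * ratio)
         for name, ratio in effort_ratios.items()}
print(sized)
```

The point of the exercise is the comparison, not the arithmetic: the ratios come from team discussion, and the scale merely keeps the answers coarse enough to stay honest.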
In conclusion, we estimate by taking a best case x and a worst case y, and computing:

estimate = x*(60%-65%) + y*(40%-35%)

Who should be involved in Story Point estimation?
The team responsible for getting a story done should ideally be part of the estimation. The team's QAs should take part in the estimation exercise and should call out if the story has additional testing effort involved. For example, supporting a customer search screen on 2 new browsers might be a 1-point development effort but a lot more from a testing perspective. QAs should call this out and size the story to reflect the adequate testing effort.
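The best/worst-case weighting from the conclusion above is simple arithmetic; here it is written out, using the 60/40 end of the stated weight ranges and an illustrative pair of inputs:

```python
# Weighted estimate: estimate = best*(60%-65%) + worst*(40%-35%).
# The 0.60 weight is one end of the range given in the text.
def weighted_estimate(best_case: float, worst_case: float,
                      best_weight: float = 0.60) -> float:
    return best_case * best_weight + worst_case * (1 - best_weight)

# Best case 10 points, worst case 20 points.
print(weighted_estimate(10, 20))
```

Note the weighting leans towards the best case, so the blend tempers rather than fully absorbs the pessimistic figure.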