Software Economics:
            Most software cost models can be abstracted into a function of five basic parameters.
1.       Size
2.       Process
3.       Personnel
4.       Environment
5.       Quality
Size: The size of the end product, which is typically quantified in terms of the number of source instructions or the number of function points required to develop the required functionality.
Process: The process used to produce the end product, in particular the ability of the process to avoid non-value-adding activities.
Personnel: The capabilities of software engineering personnel, and particularly their experience with the computer science issues and the applications domain issues of the project.
Environment: The environment, which is made up of the tools and techniques available to support efficient software development and to automate the process.
Quality: The required quality of the product, including its features, performance, reliability, and adaptability.
      The relationship among these parameters and the estimated cost can be written as follows:

             Effort = (Personnel)(Environment)(Quality)(Size^Process)
Several parametric models have been developed to estimate software costs; all of them can generally be abstracted into this form. One important aspect of software economics is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of software development is a result of the process exponent being greater than 1.0. Contrary to most manufacturing processes, the more software you build, the more expensive it is per unit item.
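This diseconomy of scale can be illustrated with a toy calculation. The parameter values below (a process exponent of 1.1, unit multipliers) are purely illustrative, not calibrated values from any real cost model:

```python
def effort(size, process_exponent, personnel=1.0, environment=1.0, quality=1.0):
    """Effort = (Personnel)(Environment)(Quality)(Size ** Process).

    All multiplier values here are illustrative placeholders, not calibrated data.
    """
    return personnel * environment * quality * (size ** process_exponent)

# With a process exponent > 1.0, unit cost grows with size (diseconomy of scale).
small = effort(10_000, 1.1)   # effort for a 10 KSLOC product
large = effort(100_000, 1.1)  # effort for a 100 KSLOC product

print(small / 10_000 < large / 100_000)  # True: cost per line is higher at scale
```

With the exponent set to exactly 1.0 the two per-line costs would be equal, which is why the exponent, not the multipliers, is what drives the diseconomy of scale.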
The abscissa of the figure represents the life cycle of the software business engaged in by the organisation. The three generations of software development are defined as follows:
1.       Conventional: 1960’s and 1970’s, craftsmanship. Organisations used custom tools, custom processes, and virtually all custom components built in primitive languages. Project performance was highly predictable in that cost, schedule, and quality objectives were always underachieved.
2.       Transition: 1980’s and 1990’s, software engineering. Organisations used more-repeatable processes and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the components (>30%) were available as commercial products, including the operating system, database management system, networking, and graphical user interface.
3.       Modern practices: 2000 and later, software production. The book’s philosophy is rooted in the use of managed and measured processes, integrated automation environments, and mostly (70%) off-the-shelf components.
                One critical problem in software cost estimation is the lack of well-documented case studies of projects that used an iterative development approach. Although cost model vendors claim that their tools are suitable for estimating iterative development projects, few are based on empirical project databases with modern iterative development success stories. It is hard enough to collect a homogeneous set of project data within one organisation; it is extremely difficult to homogenize data across different organisations with different processes, languages, domains, and so on. The exact definition of a function point or a SLOC is not very important, just as the exact length of a foot or a meter is equally arbitrary. It is simply important that everyone uses the same definition.
                There have been many long-standing debates among developers and vendors of software cost estimation models and tools. Three topics of these debates are of particular interest here:
1.      Which cost estimation model to use
2.      Whether to measure software size in source lines of code or function points
3.      What constitutes a good estimate
About 50 vendors of software cost estimation tools, data, and services compete within the software industry. There are several popular cost estimation models, as well as numerous organisation-specific models. COCOMO is one of the most open and well-documented of these models, and it underlies many of the software economics arguments and perspectives presented here.
The measurement of software size has been the subject of much rhetoric. There are basically two objective points of view: source lines of code and function points. Both perspectives have proven to be more valuable than a third, the subjective or ad hoc point of view practiced by many immature organizations that use no systematic measurement of size.
Many software experts have argued that SLOC is a lousy measure of size. However, when a code segment is described as a 1000-source-line program, most people feel comfortable with its general “mass”. If the description were 20 function points, 6 classes, 5 use cases, 4 object points, 6 files, 2 subsystems, 1 component, or 6000 bytes, most people, including software experts, would ask further questions to gain an understanding of the subject code. So SLOC is one measure that still has some value.
The use of function points has a large following, including Capers Jones, who cites the hazards associated with using SLOC metrics for object-oriented programs. The International Function Point Users Group, founded in 1984, is the dominant software measurement association in the industry. The primary advantage of using function points is that the method is independent of technology and is therefore a much better primitive unit for comparisons among projects and organizations. The main disadvantage is that the primitive definitions are abstract and measurements are not easily derived directly from the evolving artifacts.
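To make the function point idea concrete, here is a minimal sketch of an unadjusted function point count. The element types follow the IFPUG scheme; the weights shown are the commonly published average-complexity values, but treat them and the sample system as illustrative rather than authoritative (a real count also distinguishes low, average, and high complexity per element):

```python
# Illustrative IFPUG-style weights for average-complexity elements.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Sum each element count times its complexity weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical small system: 5 inputs, 4 outputs, 2 inquiries, 3 files, 1 interface.
ufp = unadjusted_function_points({
    "external_inputs": 5,
    "external_outputs": 4,
    "external_inquiries": 2,
    "internal_logical_files": 3,
    "external_interface_files": 1,
})
print(ufp)  # 85
```

Note that nothing in the count depends on the implementation language, which is exactly the technology independence claimed above; the flip side is that “external input” and “internal logical file” are abstract definitions that a counter must interpret, unlike a line of code.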
The general accuracy of conventional cost models has been described as “within 20% of actuals, 70% of the time.” This level of unpredictability in the conventional software development process should be truly frightening to every investor, especially in light of the fact that few projects miss their estimate by doing better than expected. This is an interesting phenomenon to consider when scheduling labor-intensive efforts.
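The “within 20% of actuals, 70% of the time” criterion is straightforward to check against a project database. A minimal sketch, using made-up (estimate, actual) pairs in which, as noted above, most misses are overruns rather than underruns:

```python
def within_tolerance(estimate, actual, tolerance=0.20):
    """True if the estimate falls within +/- tolerance of the actual outcome."""
    return abs(estimate - actual) <= tolerance * actual

# Hypothetical (estimate, actual) effort pairs in staff-months.
projects = [(100, 115), (80, 120), (60, 58), (200, 260), (50, 55)]

hit_rate = sum(within_tolerance(e, a) for e, a in projects) / len(projects)
print(hit_rate)  # 0.6: 3 of 5 estimates fall within 20% of actuals
```

A database like this one, with 60% of estimates inside the 20% band, would fall short of even the modest “70% of the time” accuracy claim.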
The process described in the figure above is not all bad. In fact, it is absolutely necessary to analyse the cost risks and understand the sensitivities and trade-offs objectively. It forces the software project manager to examine the risks associated with achieving the target costs and to discuss this information with other stakeholders. The result is usually various perturbations in the plans, designs, process, or scope being proposed.
In summary, a good estimate has the following attributes:
Ø  It is conceived and supported by the project manager, architecture team, development team, and test team accountable for performing the work.
Ø  It is accepted by all stakeholders as ambitious but realizable.
Ø  It is based on a well-defined software cost model with a credible basis.
Ø  It is based on a database of relevant project experience that includes similar processes, similar technologies, and similar people.
Ø  It is defined in enough detail so that its key risk areas are understood and the probability of success is objectively assessed.
1.       Explain briefly about Software Economics.
2.    Explain briefly about Pragmatic Software Cost Estimation.
