SOFTWARE QUALITY ASSURANCE RESPONSIBILITY CENTER

INFORMATION TECHNOLOGY MANAGEMENT

LG Group

Software Development Centre (India)

2nd Floor, Embassy Diamante

34, Vittal Mallya Road

Bangalore - 560 001

INDIA

 

 

Function Points

&

COCOMO

 

 

Author: Raghav S. Nandyal

VERSION CONTROL

Version Number: CMM_CNSL_ESTM_LG-EDS_1.0.0_2 MAR 97
Author: Raghav S. Nandyal
Remarks: Initial Creation

 

This document describes two well-known estimation techniques: Function Points and COCOMO.

 

Please address your questions or comments on this document to:

 

Raghav S. Nandyal

Head-Information Technology Management

LG Group

Software Development Centre (India)

2nd Floor, Embassy Diamante,

34, Vittal Mallya Road

Bangalore 560 001 INDIA

Tel. +91-80-221-5022 ~ 25

Fax. +91-80-221-5026

 

Contributors:

Mr. Raghav S. Nandyal

 

 

March 2, 1997

 

 

Table of Contents

 

ESTIMATION TECHNIQUES

1. FUNCTION POINTS

2. FUNCTION POINT METHODOLOGY

2.1 BACKFIRE METHOD

2.2 EXPECTED VALUE FOR FP

3. COCOMO

 

ESTIMATION TECHNIQUES

 

1. FUNCTION POINTS

Function-oriented software metrics use direct measures of the process, the product and the resources applied, normalized by an indirect value that indicates program "functionality". The approach was first proposed by Albrecht, who suggested a productivity measurement technique called the function-point method. Function points are derived from an empirical relationship based on countable measures of the software's information domain and assessments of software complexity.

 

In order to compute function points, the five measurement parameters indicated in the table below have to be counted and weighted.

 

Measurement Parameter    Count         Weighting Factor              Function Counts Fi
                                       Simple   Average   Complex
IT                       ____     x      3         4         6       =   ____
OT                       ____     x      4         5         7       =   ____
UI                       ____     x      3         4         6       =   ____
IU                       ____     x      7        10        15       =   ____
EU                       ____     x      5         7        10       =   ____

FUNCTION COUNT TOTAL = Σ Fi
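As a minimal sketch of the computation above in Python (the weights come from the table; the counts and complexity levels in the example are purely hypothetical), the function count total can be obtained as follows:

# Weights per measurement parameter as (simple, average, complex), from the table above.
WEIGHTS = {
    "IT": (3, 4, 6),    # user inputs
    "OT": (4, 5, 7),    # user outputs
    "UI": (3, 4, 6),    # user inquiries
    "IU": (7, 10, 15),  # internal user data groups
    "EU": (5, 7, 10),   # external user data groups
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_count_total(counts):
    """counts maps parameter -> (raw count, complexity level); returns the sum of Fi."""
    return sum(n * WEIGHTS[p][LEVEL[level]] for p, (n, level) in counts.items())

# Hypothetical example: 12 average inputs, 8 average outputs, 4 simple inquiries,
# 3 average internal data groups and 2 complex external data groups.
example = {"IT": (12, "average"), "OT": (8, "average"),
           "UI": (4, "simple"), "IU": (3, "average"), "EU": (2, "complex")}
print(function_count_total(example))   # 12*4 + 8*5 + 4*3 + 3*10 + 2*10 = 150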

 

 

The following rules govern the value for Count:

 

Number of user inputs (IT):

Data screen: 1 IT
Multiple screens accumulated and processed as ONE transaction: 1 IT
Two data screens with the same format and processing logic: 1 IT
Two data screens with the same format but DIFFERENT processing logic: 2 IT
Data screen that is both input and output: 1 IT, 1 OT
Data screen with multiple functions: 1 IT
Automatic data or transactions from other applications: 1 IT
User application control input: 1 IT
Input forms: 1 IT
An update function following a query: 1 IT
Individual selections on a menu screen: 0 IT
User maintained table or file: 1 IT
Duplicate way of screen input (using, say, hot keys): 0 IT

 

 

Number of user outputs (OT):

Data screen output: 1 OT
Batch report: 1 OT
Screen error message format associated with an input type: 1 OT
Start screen display or end screen display: 1 OT
Transaction file crossing the application boundary: 1 OT
Automatic data or transactions to other applications: 1 OT
Single error message on a screen: 0 OT
Error message sent to an operator as a result of an input transaction: 1 OT
Backup files (only if requested by the user): 0 OT
Selection menu screen output with save capability: 1 OT
Output to screen and to printer: 2 OT
User maintained table or file: 1 OT
Output files created by the application for technical reasons: 0 OT

 

Number of user inquiries (UI):

Online input and online output with no update of data in files: 1 UI
Inquiry followed by an update input: 1 UI, 1 IT
Help screen input and output: 1 UI
Selection menu screen input and output: 1 UI
Hot key screen selection for input and output: 1 UI

 

Number of internal user data groups (IU):

Logical entity of data from the user viewpoint: 1 IU
Logical internal files generated or maintained by the application: 1 IU
User maintained table or file: 1 IU
Files accessible to the user through keyword(s) or parameter(s): 1 IU
File used for data or control by a sequential (batch) application: 1 IU
Each hierarchical path through a database, derived from user requirements: 1 IU
Intermediate files (e.g., work file for sorting): 0 IU
File created by the technology used (e.g., index file): 0 IU
A master file only read by the application (e.g., application initialization file): 0 IU

 

 

Number of external user data groups (EU):

File of records from another application: 1 EU
Database shared from other applications: 1 EU
Logical internal file from another application used as a transaction: 0 EU, 1 IT
Each hierarchical path through a database from another application, derived from user requirements: 1 EU

2. FUNCTION POINT METHODOLOGY

The function point computation here follows a combination of the methodologies suggested by IBM and Software Productivity Research, Inc. (SPR), since the combination is more accurate. The primary difference between the IBM and SPR Function Point methodologies is in the way they deal with complexity. The IBM technique for assessing complexity is based on weighting 14 influence factors on a scale of 0 to 5 and evaluating the numbers of field and file references. Because that approach involves a high level of subjectivity, it is NOT recommended here; instead, a weighting factor based on the complexity of the five measurement parameters is used. This complexity is further refined as described below.

 

The SPR technique for dealing with complexity separates the overall topic of "complexity" into two distinct questions that can be dealt with intuitively and answered by anybody knowledgeable in the technology domain: how complex is the problem to be solved (its algorithms and calculations), and how complex are the data and their relationships?

These two factors, problem complexity (PC) and data complexity (DC), are each rated on a scale of 1 to 5 as below.

 

Problem complexity (PC):

Simple algorithms and simple calculations: 1
Majority of simple algorithms and calculations: 2
Algorithms and calculations of average complexity: 3
Some difficult or complex algorithms: 4
Many difficult or complex algorithms involving complex calculations: 5

Value for PC: ____

 

 

 

Data complexity (DC):

Simple data with few variables: 1
Numerous variables, but simple data relationships: 2
Multiple files, fields, and data intersections: 3
Complex file structures and data intersections: 4
Very complex file structures and data intersections: 5

Value for DC: ____

 

 

Now, the combined IBM/SPR strategy is used to determine the final function point value: the sum PC + DC selects a complexity multiplier from the table below.

 

PC + DC                       2     3     4     5     6     7     8     9     10
Complexity Multiplier (CM)    0.6   0.7   0.8   0.9   1.0   1.1   1.2   1.3   1.4

FUNCTION POINT = FUNCTION COUNT TOTAL x COMPLEXITY MULTIPLIER
FP = (Σ Fi) x CM
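A small sketch of this combined step, assuming PC and DC have already been rated on the 1-to-5 scales above (the sample values are hypothetical):

# Complexity multipliers indexed by PC + DC (2..10), from the table above.
CM_TABLE = {2: 0.6, 3: 0.7, 4: 0.8, 5: 0.9, 6: 1.0, 7: 1.1, 8: 1.2, 9: 1.3, 10: 1.4}

def function_points(count_total, pc, dc):
    """FP = (sum of Fi) x complexity multiplier, with PC and DC each rated 1 to 5."""
    return count_total * CM_TABLE[pc + dc]

print(function_points(150, pc=3, dc=4))   # 150 x 1.1 = 165.0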

 

2.1 BACKFIRE METHOD

The backfire method for estimating function points is based on empirical relationships observed between source code size and Function Points across programming languages. The method relies on tables of average values. It is useful for retrospective studies of projects completed long ago, and for easing the transition to Function Point metrics for people who are familiar with lines-of-code metrics.

Language             Statements per Function Point
Assembler            320
C                    128
COBOL                107
ADA                  71
DB Languages         40
Object Oriented      29
Query Languages      25
Generators           16
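Assuming the language averages in the table above, a backfire estimate reduces to a simple division, as in this sketch (the statement count used in the example is hypothetical):

# Average source statements per function point, from the table above.
STATEMENTS_PER_FP = {
    "Assembler": 320, "C": 128, "COBOL": 107, "ADA": 71,
    "DB Languages": 40, "Object Oriented": 29,
    "Query Languages": 25, "Generators": 16,
}

def backfire_fp(source_statements, language):
    """Approximate function points for a completed system from its statement count."""
    return source_statements / STATEMENTS_PER_FP[language]

print(round(backfire_fp(53500, "COBOL")))   # 53500 / 107 = 500 FP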

 

2.2 EXPECTED VALUE FOR FP

After finding the optimistic, most likely and pessimistic values for the computed function point count (using the method suggested in section 2 together with the historical database), the expected value for FP is determined using the relation:

 

E = (a + 4m + b) / 6

where
a = optimistic value of FP
m = most likely estimate of FP
b = pessimistic value of FP
E = expected value for FP
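The same three-point weighting expressed as a one-line helper (the sample values are hypothetical):

def expected_fp(a, m, b):
    """Expected FP from optimistic (a), most likely (m) and pessimistic (b) values."""
    return (a + 4 * m + b) / 6

print(expected_fp(a=120, m=150, b=210))   # (120 + 600 + 210) / 6 = 155.0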

 

3. COCOMO

COCOMO (the COnstructive COst MOdel) is a hierarchy of estimation models that can be used to determine effort in person-months and development time in chronological months.

COCOMO comes in three flavors, or models, elaborated below:

COCOMO Model 1: The basic COCOMO model is a single-valued, static model that computes software development effort (and cost) as a function of program size expressed in estimated lines of code (LOC).

 

COCOMO Model 2: The intermediate COCOMO model computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel, and project attributes.

 

COCOMO Model 3: The advanced COCOMO model incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process.

 

COCOMO models can be applied to three classes of software projects. They are called organic, semidetached, and embedded. Organic mode applies to relatively small, simple software projects in which small teams with good application experience work to a set of less-than-rigid requirements. Semidetached mode projects are intermediate in size and complexity. Teams with mixed experience levels must meet a mix of rigid and less-than-rigid requirements. Embedded mode projects must be developed within a set of tight hardware, software, and operational constraints.

 

The basic COCOMO equations take the form:

E = a x (KLOC)^b
D = c x (E)^d

The values of a, b, c and d are determined for software projects using the table below:

Software project     a      b       c      d
Organic              2.4    1.05    2.5    0.38
Semidetached         3.0    1.12    2.5    0.35
Embedded             3.6    1.20    2.5    0.32

 

The cost-driver attributes (covering product, hardware, personnel and project attributes) are rated on a six-point scale that ranges from "very low" to "extra high". On the basis of the rating, an effort multiplier is determined from tables published by Boehm, and the product of all effort multipliers gives an effort adjustment factor (EAF). Typical EAF values lie in the range 0.9 to 1.4. In practice, depending upon the complexity of the software project, a few EAF values are averaged using the Wide-Band Delphi estimation technique, and the value of E is then determined as:

E = a x (KLOC)^b x EAF
D = c x (E)^d
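A sketch of this intermediate form in Python, reusing the a, b, c, d coefficients from the table above; the eaf argument defaults to 1.0, which reduces it to the basic model:

# (a, b, c, d) per project class, from the COCOMO coefficient table above.
COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def cocomo_estimate(kloc, mode, eaf=1.0):
    """Return (effort in person-months, duration in chronological months)."""
    a, b, c, d = COCOMO[mode]
    effort = a * (kloc ** b) * eaf      # E = a x (KLOC)^b x EAF
    duration = c * (effort ** d)        # D = c x (E)^d
    return effort, duration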

 

Illustration:

Let us say that the software is a semidetached project with the following parameters:

a = 3.0, KLOC (estimated) = 33.3, b = 1.12
E = 3.0 x (33.3)^1.12 ≈ 152 person-months

c = 2.5, d = 0.35 would then yield
D = 2.5 x (152)^0.35 ≈ 14.5 chronological months

These values can then be used by a project planner to recommend the number of people as:

N = E/D ≈ 152/14.5, i.e. approximately 11 people
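For reference, a self-contained Python check of these figures (with EAF taken as 1.0, i.e. the basic model):

import math

effort = 3.0 * (33.3 ** 1.12)         # E = a x (KLOC)^b
duration = 2.5 * (effort ** 0.35)     # D = c x (E)^d
print(round(effort))                  # ~152 person-months
print(round(duration, 1))             # ~14.5 chronological months
print(math.ceil(effort / duration))   # E/D rounded up: ~11 people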

 

In reality, the planner may decide to use only four people and extend the project duration accordingly.