Abstract: A method and system for estimating size of a testing project corresponding to one or more software applications is disclosed. The method includes identifying a set of test parameters for the testing project, counting number of each test parameter from the set of test parameters, and measuring the size of the testing project. The size of the testing project is measured in Test Units (TUs) based on the number of each test parameter. A TU is defined based on predetermined weights of the set of test parameters.
SIZE AND EFFORT ESTIMATION IN TESTING
APPLICATIONS
BACKGROUND
The present invention relates generally to testing in software projects, and more particularly to effort estimation of functional manual testing projects.
There are several approaches for effort estimation of software development and maintenance projects. A majority of functional manual testing projects estimate the size of the project in terms of the number of test cases. The test cases are divided into Simple, Medium, and Complex (SMC) test cases. The effort associated with each type is multiplied by the number of test cases of that type, and a buffer is added to arrive at the testing project effort.
However, the number of test cases may not be a good indicator of the size of the project. This is because each project has a varying definition of SMC, which is subjective and person dependent. Project managers define SMC more in terms of effort, which most often depends on the manager's past experience.
Further, there is no standard model for effort estimation of independent testing projects.
Therefore, there is a need for an effort estimation method in independent testing projects which is uniform across projects and persons handling the testing projects.
BRIEF DESCRIPTION
In accordance with an aspect of the present technique, a method for estimating size of a testing project corresponding to one or more software applications is provided. The method includes identifying a set of test parameters for the testing project, counting number of each test parameter from the set of test parameters, and measuring the size of the testing project in Test Units (TUs) based on the number of each test parameter. The TU is defined based on predetermined weights of the set of test parameters.
In accordance with another aspect of the present technique, an effort estimation system for estimating a testing effort of a testing project corresponding to one or more software applications is provided. The system includes a database, an interface, and a calculation module. The database stores predetermined weights of a set of test parameters that define a Test Unit (TU), and at least one value of a predetermined TU rate. The interface receives the number of each test parameter from the set of test parameters, and sets values of a set of project specific parameters. The calculation module is coupled to the database and the interface. The calculation module calculates the size of the testing project in Test Units (TUs) based on the number of each test parameter, and the testing effort based on the size of the testing project.
DRAWINGS
These and other features, aspects, and advantages of the present technique will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
FIG. 1 is a flowchart depicting a method for estimating size of a testing project, in accordance with an aspect of the present technique;
FIG. 2 depicts a table providing example predetermined weights of test parameters, in accordance with an aspect of the present technique;
FIG. 3 depicts a table that shows contribution of adjustment parameters to an adjustment factor, in accordance with an aspect of the present technique; and
FIG. 4 is a block diagram of an effort estimation system for estimating a testing effort of a testing project corresponding to one or more software applications, in accordance with an aspect of the present technique.
DETAILED DESCRIPTION
The following description is a full and informative description of the best method and system presently contemplated for carrying out the present technique, as known to the inventors at the time of filing the patent application. Of course, many modifications and adaptations will be apparent to those skilled in the relevant arts in view of the following description, the accompanying drawings, and the appended claims. While the system and method described herein are provided with a certain degree of specificity, the present technique may be implemented with either greater or lesser specificity, depending on the needs of the user. Further, some of the features of the present technique may be used to advantage without the corresponding use of other features described in the following paragraphs. As such, the present description should be considered as merely illustrative of the principles of the present technique and not in limitation thereof, since the present technique is defined solely by the claims.
The present technique relates to a method and system for estimating the size of a testing project. The testing project can be part of an 'end to end' software development project or an independent testing project, and can involve testing of one or more software applications. In independent testing projects, a testing team may not have information about the size of the development effort for the software applications under test. Further, aspects of the present technique are specifically applicable to manual functional testing projects. Manual functional testing projects are carried out in multiple phases including a requirements study, test planning, Test Case Generation (TCG), and Manual Test Execution (MTE). Estimation of the size of the testing project occurs during and after the test planning phase. Hence, the sizes of life cycle activities such as TCG and MTE are critical. TCG refers to developing Test Cases (TCs). MTE refers to execution of the TCs written during TCG. MTE may also include execution of other TCs provided to the testing team by another group, such as a client or vendor. Aspects of the present technique are applicable to TCG, MTE, and combinations thereof.
FIG. 1 is a flowchart depicting the method for estimating the size and effort of the testing project. At step 102, a set of test parameters for the testing project are identified. At step 104, number of each test parameter from the set of test parameters is counted. At step 106, the size of the testing project is measured in Test Units (TUs) based on the number of each test parameter. A TU is defined based on predetermined weights of the set of test parameters. At step 108, a testing effort for the testing project is estimated based on the size of the testing project.
The set of test parameters are identified from test requirement documents, use cases, or written test cases for the testing project. Test parameters are uniformly measurable for the testing project and are independent of perception of an estimator, who estimates the size and effort for the testing project.
For TCG, the set of test parameters includes number of test steps, number of queries for test data creation, number of front end validations, and number of validation checks, in accordance with an aspect of the present technique. The test parameters can be identified at a TC level, a scenario level, or at the application level. Examples of test steps, such as in Graphical User Interface (GUI) application testing, include entering a value in a text box, selecting a value from a list, navigating to a next screen, and the like. Test data refers to data or records that are available in a test environment before a TC is initiated, for example, data populated in a file, and a set of user records created in a database prior to TC execution. Test data may require writing Data Definition Language (DDL) statements, such as Structured Query Language (SQL) 'Insert' statements, for creating the test data for the particular TC. Validation is a test step involving validation of one or more outputs generated by the application, for example, validation of content, arithmetic or formulae validation, database validation, or combinations thereof.
For MTE, the set of test parameters includes number of test steps, number of DDL statements (for example, SQL statements) for test data creation, number of front end validations, number of arithmetic or formulae validations, and number of database validations, in accordance with another aspect of the present technique. Front end validations refer to validation of front end fields such as text boxes, list box contents, and
displays. Database validations refer to validations requiring comparison of the result of an event with an expected result. The expected result may be data stored in the database that has been manipulated according to the logic of the application.
In addition to the set of test parameters, a set of project specific parameters is also identified. Project specific parameters include clarity of requirements, prior application knowledge, domain complexity, and combinations thereof. Project specific parameters may not be uniformly measurable for the testing project, as these parameters are based on the perception of the estimator about the prior application knowledge or skill sets of the testing team, the domain complexity, and the adequacy of requirement documents. The project specific parameters are identified from a test plan that includes information on the skill sets of the testing team and the domain complexity of the testing project. Based on the perception of the estimator, a value is assigned to each project specific parameter. For example, for clarity of requirements, the value is set to zero if a requirement document can be mapped to the TCs. The value is set to one if requirements corresponding to the requirement document need clarifications with a development team that developed the application. However, the value is set to two if the requirements corresponding to the requirement document require discussions with business analysts.
The parameter prior application knowledge quantitatively accounts for base knowledge that the testing team has for carrying out the testing project. For example, the value of this parameter is set to zero if the testing team has previously worked on similar applications at development or testing stages. The value is set to one if the application under test is of a new type, or the testing team is new to the assignment.
Domain complexity quantitatively accounts for domain experience of the testing team in carrying out the testing project. For example, the domain complexity value is set to zero if the testing team has executed projects in the application domain. The value is set to one if the domain is new to the testing team, but experience or knowledge required for executing the testing project is available within an organization to which the testing team belongs, and therefore can be transferred to the testing team. However, the value is set to two if the domain is new to the organization.
On identification of the set of test parameters, the number of each test parameter of the set of test parameters is counted. The count for each test parameter is determined through the requirement documents, use cases, TCs, or combinations thereof. The count values can be obtained at the TC or application level.
Based on the obtained count, the size of the testing project is measured. For this purpose, a unit of measurement of the size of the testing project, referred to as the TU, is used. In an aspect of the present technique, the TU is defined in terms of each of the identified test parameters.
FIG. 2 depicts a table providing example predetermined weights of some test parameters. The TU can be defined individually in terms of each test parameter. For example, it can be seen from the table that one TU is equivalent to five test steps in terms of number of test steps only. Similarly, one TU is equivalent to one validation check in terms of number of validation checks only. This implies that one TU is equal to five test steps or one validation check.
The predetermined weights of the test parameters are determined from a sample of prior executed testing projects. For example, the predetermined weights in the table of FIG. 2 are determined from prior executed testing projects with a sample size of 30. For illustrating the measurement of the size of the testing project, let the values of the test parameters for a particular test case be {x, y, z, ...} and the corresponding predetermined weights be {a, b, c, ...}. Then, the number of test units is calculated by the equation:
Number of TUs = x/a + y/b + z/c + ...   (1)
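The calculation described above can be sketched as follows. This is a minimal illustration in Python: the parameter counts are hypothetical, and the weights follow the kind of values shown in FIG. 2 (one TU equivalent to five test steps, or to one validation check). Each count is divided by its predetermined weight and the results are summed.

```python
def size_in_tus(counts, weights):
    """Size of the testing project in TUs.

    counts and weights are dicts keyed by test parameter name;
    a weight is the number of that parameter equivalent to one TU.
    """
    return sum(counts[p] / weights[p] for p in counts)

# Hypothetical counts for one test case; FIG. 2-style weights.
counts = {"test_steps": 40, "validation_checks": 6}
weights = {"test_steps": 5, "validation_checks": 1}

print(size_in_tus(counts, weights))  # 40/5 + 6/1 = 14.0
```

Because the TU is defined per parameter, new test parameters can be added to the sum without changing the function.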
To account for the contribution of the project specific parameters to the size of the project, an adjustment factor is calculated. FIG. 3 depicts a table showing the contribution of the project specific parameters to the adjustment factor based on their values, as described earlier. The value of the adjustment factor is set to one when the project specific parameters contribute nothing to the number of TUs. To calculate the adjustment factor, the individual contributions are added to this base value of one. For example, if the value of each of the three project specific parameters is equal to one, then the adjustment factor is calculated to be 1.2. The adjustment factor is multiplied by the number of TUs to obtain an estimate of the size of the testing project.
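The adjustment step can be sketched as below. The per-value contribution table is hypothetical (the actual values belong to FIG. 3, which is not reproduced here); it is chosen only so that three parameters at value one yield the factor of 1.2 given in the text.

```python
# Hypothetical FIG. 3-style contributions, keyed by parameter and value.
CONTRIBUTION = {
    "clarity_of_requirements":     {0: 0.0, 1: 0.0667, 2: 0.1333},
    "prior_application_knowledge": {0: 0.0, 1: 0.0667},
    "domain_complexity":           {0: 0.0, 1: 0.0667, 2: 0.1333},
}

def adjustment_factor(values):
    """Base value of one plus the individual contributions."""
    return 1.0 + sum(CONTRIBUTION[p][v] for p, v in values.items())

def adjusted_size(tus, values):
    """Adjustment factor multiplied by the number of TUs."""
    return tus * adjustment_factor(values)

values = {"clarity_of_requirements": 1,
          "prior_application_knowledge": 1,
          "domain_complexity": 1}
print(round(adjustment_factor(values), 1))  # 1.2, as in the example above
```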
The size of the testing project is further used to estimate the testing effort, or the effort required for carrying out the testing project. In an aspect of the present technique, the effort is estimated by dividing the size of the testing project by a predetermined TU rate. The TU rate is a unit of productivity, measured, for example, in TUs per hour. The approach followed in setting the predetermined TU rate for the testing project is to determine the average productivity of a sample of prior executed testing projects. For this purpose, the size of each testing project from the sample is measured in TUs, and the effort spent (for example, in hours) on each testing project execution is recorded. The productivity value of each testing project from the sample is then calculated by dividing its size by the corresponding effort spent. An average of these productivity values is then taken to obtain the predetermined TU rate. A similar approach is followed to set the values of the predetermined TU rate for both TCG and MTE.
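The TU rate and effort calculations can be sketched as follows; the sample of prior projects is hypothetical. Productivity of each prior project is its size in TUs divided by the effort spent on it, the TU rate is the average of those productivities, and the estimated effort for a new project is its size divided by that rate.

```python
def tu_rate(prior_projects):
    """Average productivity over a sample of prior executed projects.

    prior_projects: list of (size_in_tus, effort_hours) pairs.
    """
    productivities = [size / effort for size, effort in prior_projects]
    return sum(productivities) / len(productivities)

def estimated_effort(size_in_tus, rate):
    """Testing effort in hours: project size divided by the TU rate."""
    return size_in_tus / rate

# Hypothetical sample: each entry is (measured TUs, recorded hours).
sample = [(100, 50), (80, 40), (120, 60)]
rate = tu_rate(sample)                # 2.0 TUs per hour in this sample
print(estimated_effort(14.0, rate))   # 7.0 hours for a 14-TU project
```

In practice, separate samples would be kept for TCG and MTE, giving one predetermined TU rate for each activity.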
FIG. 4 is a block diagram of an effort estimation system 400 for estimating the size of the testing project. The system 400 includes a database 402, an interface 404, and a calculation module 406 coupled to the database 402 and the interface 404. The database 402 stores predetermined weights of the set of test parameters that define the TU. The database 402 also stores the values of the predetermined TU rates. The interface 404 receives the number of each test parameter. The interface 404 also receives the values of the project specific parameters. In an aspect of the present technique, the interface 404 is a user interface that enables the estimator to manually input the test parameter numbers. In another aspect of the present technique, the interface 404 is capable of directly receiving the test parameter numbers from another application, program or software code. The calculation module 406 calculates the number of test units for the testing project using the number of the test parameters and
the predetermined weights of the set of test parameters. The calculation module 406 further estimates the size and testing effort of the testing project based on the project specific parameters and predetermined TU rates as described earlier.
The method and system described above estimate the size and effort of the testing project in a simple and effective manner. The method and system are applicable to independent testing projects. Further, aspects of the present technique make the effort estimation independent of the complexity or size of test cases, thereby providing an objective approach that is not person dependent. This is because the number of each test parameter in a particular test case is not person dependent.
In an aspect of the present technique, the method and system described above are implemented in the form of a computer program product. The computer program product includes a computer readable medium that includes program instructions for identifying the set of test parameters for the testing project, counting the number of each test parameter from the set of test parameters, and measuring the size of the testing project in Test Units (TUs). The measurement is based on the number of each test parameter. The TU is defined based on predetermined weights of the set of test parameters.
While the foregoing description is presented to enable a person of ordinary skill in the art to make and use the technique, it is provided in the context of the requirement for obtaining a patent. The present description is the best presently contemplated method for carrying out the present technique. Various modifications to the preferred aspect will be readily apparent to those skilled in the art, the generic principles of the present technique may be applied to other aspects, and some features of the present technique may be used without the corresponding use of other features. Accordingly, the present technique is not intended to be limited to the aspect shown but is to be accorded the widest scope consistent with the principles and features described herein.
Many modifications of the present technique will be apparent to those skilled in the arts to which the present technique applies. Further, it may be desirable to use some
of the features of the present technique without the corresponding use of other features.
Accordingly, the foregoing description of the present technique should be considered as merely illustrative of the principles of the present technique and not in limitation thereof.
CLAIMS:
1. A method for estimating size of a testing project corresponding to one or more
software applications, the method comprising:
identifying a set of test parameters for the testing project;
counting number of each test parameter from the set of test parameters; and
measuring the size of the testing project in Test Units (TUs) based on the number of each test parameter, wherein a TU is defined based on predetermined weights of the set of test parameters.
2. The method of claim 1, further comprising estimating a testing effort for the testing project based on the size of the testing project.
3. The method of claim 2, wherein estimating the testing effort comprises dividing the size of the testing project by a predetermined TU rate.
4. The method of claim 3, wherein the predetermined TU rate is calculated from productivity values of a sample of prior executed testing projects.
5. The method of claim 1, wherein the set of test parameters include at least one of a number of test steps or number of Structured Query Language (SQL) queries for test data creation or number of front end validations or number of arithmetic validations or number of database validations or combinations thereof for manual test execution.
6. The method of claim 1, wherein the set of test parameters include at least one of number of test steps or number of data definition language statements for test data creation or number of validation checks or combinations thereof for test case generation.
7. The method of claim 1, wherein identifying the set of test parameters comprises identifying the set of test parameters from at least one of test requirement documents or use cases or written test cases for the testing project.
8. The method of claim 1, wherein measuring the size of the testing project
comprises:
calculating number of TUs based on the number of each test parameter; and
multiplying the number of TUs by an adjustment factor.
9. The method of claim 8, wherein multiplying the number of TUs by the adjustment factor comprises calculating the adjustment factor based on a set of project specific parameters.
10. The method of claim 9, wherein the set of project specific parameters includes at least one of clarity of requirements or prior application knowledge or domain complexity or combinations thereof.
11. The method of claim 9, wherein each project specific parameter from the set of project specific parameters is assigned a value by an estimator of the size of the testing project.
12. An effort estimation system for estimating a testing effort of a testing project corresponding to one or more software applications, the system comprising:
a database for storing predetermined weights of a set of test parameters that define a Test Unit (TU), and at least one value of a predetermined TU rate;
an interface for receiving number of each test parameter from the set of test parameters, and setting values of a set of project specific parameters; and
a calculation module coupled to the database and the interface for calculating size of the testing project in Test Units (TUs) based on the number of each test parameter and the testing effort based on the size of the testing project.
13. The effort estimation system of claim 12, wherein the testing project is a
functional manual testing project.
14. The effort estimation system of claim 12, wherein the testing project includes
at least one of Test Case Generation (TCG) or Manual Test Execution (MTE) or
combinations thereof.
15. A computer program product for estimating size of a testing project
corresponding to one or more software applications, the computer program product
comprising a computer readable medium comprising program instructions for:
identifying a set of test parameters for the testing project;
counting number of each test parameter from the set of test parameters; and
measuring the size of the testing project in Test Units (TUs) based on the number of each test parameter, wherein a TU is defined based on predetermined weights of the set of test parameters.