
System And Method For Evaluating Software Application

Abstract: Disclosed is a method and system for evaluating quality of a software application. The system may be configured for receiving one or more instances of defects, one or more instances of incidents, and an outage duration corresponding to each sub-time interval of a predefined time. Further, weights may be assigned for each instance of defect, each instance of incident, and each instance of outage. Based on the weights assigned, a gross score may be determined for the software application. Further, the gross score may be divided by a size factor to obtain a normalized score. The normalized score may be further multiplied with a predefined time weight to compute a time weighted score for each sub-time interval. Further, a summation of the time weighted scores may determine a total score for the software application. The total score may be compared with a reference score in order to determine a quality score for the software application.


Patent Information

Application #
Filing Date
11 February 2014
Publication Number
46/2015
Publication Type
INA
Invention Field
MECHANICAL ENGINEERING
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. JOSHI, Hemant Prakash
Tata Consultancy Services Limited, TRDDC, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune-411013, Maharashtra, India

Specification

CLAIMS:

WE CLAIM:

1. A computer implemented method for evaluating quality of a software application, the method comprising:
receiving a defect data, an incident data, and an outage data associated with the software application,
wherein the defect data indicates one or more instances of defects identified in the software application during at least one of a software testing phase and software development phase,
wherein the incident data indicates one or more instances of complaints for the software application during a software production phase,
wherein the outage data is indicative of non-workability of the software application for a time window during the software production phase, and
wherein the defect data, the incident data, and the outage data are received for one or more sub-time intervals of a predefined time;
assigning a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data, respectively, wherein the defect weight, the incident weight, and the outage weight are assigned based on a predefined weight matrix;
determining, by a processor, a gross score for each sub-time interval based on a sum of weighted defect count, weighted incident count and weighted outage duration;
dividing the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval, wherein the size factor is indicative of a ratio of size of the software application and a benchmark size of a reference software application;
computing, by the processor, a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix;
determining a total score for the software application based on summation of the time weighted score of each sub-time interval, wherein the total score is determined for the predefined time; and
comparing the total score with a reference score in order to determine a quality score for the software application, thereby evaluating the quality of the software application.

2. The method of claim 1, wherein the defect data, the incident data, and the outage data are received from a defect management database 302, an incident management database 304, and an outage management database 306, respectively.

3. The method of claim 1 further comprises transforming the defect data, the incident data, and the outage data into a standard format.

4. The method of claim 1, wherein the time weighted score is computed by multiplying the normalized score with a predefined time weight assigned to each sub-time interval in the predefined time weight matrix.

5. The method of claim 1 further comprises generating a report depicting a trend of quality scores determined for a series of time instances corresponding to the software application.

6. A system 102 for evaluating quality of a software application, the system comprising:
a processor 202;
a memory 206 coupled with the processor 202, wherein the processor 202 is capable of executing a plurality of modules 208 stored in the memory 206, and wherein the plurality of modules 208 comprise:
a receiving module 210 configured to receive a defect data, an incident data, and an outage data associated with the software application, wherein:
the defect data indicates one or more instances of defects identified in the software application during at least one of a software testing phase and software development phase,
the incident data indicates one or more instances of complaints for the software application during a software production phase,
the outage data is indicative of non-workability of the software application for a time window during the software production phase, and
wherein the defect data, the incident data, and the outage data are received for one or more sub-time intervals of a predefined time;
an assigning module 212 configured to assign a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data, respectively, wherein the defect weight, the incident weight, and the outage weight are assigned based on a predefined weight matrix;
a computing module 214 configured to:
determine a gross score for each sub-time interval based on sum of weighted defect count, weighted incident count and weighted outage duration;
divide the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval, wherein the size factor is indicative of a ratio of size of the software application and a benchmark size of a reference software application,
compute a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix, and
determine a total score for the software application based on summation of the time weighted score of each sub-time interval, wherein the total score is determined for the predefined time; and
a quality scoring module 216 configured to compare the total score with a reference score in order to determine a quality score for the software application, thereby evaluating the quality of the software application.

7. The system 102 of claim 6, wherein the defect data, the incident data, and the outage data are received from a defect management database 302, an incident management database 304, and an outage management database 306, respectively.

8. The system 102 of claim 6 further comprising a data transformation module 218 configured to transform the defect data, the incident data, and the outage data into a standard format.

9. The system 102 of claim 6 further comprising a reporting module 220 configured to generate a report depicting a trend of quality scores determined for a series of time instances corresponding to the software application.

10. A computer program product having embodied thereon a computer program for evaluating quality of a software application, the computer program product comprising a set of instructions, the instructions comprising instructions for:
receiving a defect data, an incident data, and an outage data associated with the software application,
wherein the defect data indicates one or more instances of defects identified in the software application during at least one of a software testing phase and software development phase,
wherein the incident data indicates one or more instances of complaints for the software application during a software production phase,
wherein the outage data is indicative of non-workability of the software application for a time window during the software production phase, and
wherein the defect data, the incident data, and the outage data are received for one or more sub-time intervals of a predefined time;
assigning a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data, respectively, wherein the defect weight, the incident weight, and the outage weight are assigned based on a predefined weight matrix;
determining a gross score for each sub-time interval based on sum of weighted defect count, weighted incident count and weighted outage duration;
dividing the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval, wherein the size factor is indicative of a ratio of size of the software application and a benchmark size of a reference software application;
computing a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix;
determining a total score for the software application based on summation of the time weighted score of each sub-time interval, wherein the total score is determined for the predefined time; and
comparing the total score with a reference score in order to determine a quality score for the software application, thereby evaluating the quality of the software application.

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
SYSTEM AND METHOD FOR EVALUATING SOFTWARE APPLICATION

Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application does not claim priority from any patent application.

TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to software assessment, and more particularly to a system and method for evaluating quality of a software application.

BACKGROUND
[003] For determining the quality of a software application, various software evaluation methodologies have been disclosed, and are being practiced, in the existing art. The quality is defined as a degree of deviation of the software application or a software product from the stated and implied needs of various stakeholders associated with the software application or the software product. The instances of deviation from the stated and implied needs are considered as defects in the software application. There may be different types of defects which may occur at different stages/phases in a software development lifecycle (SDLC) of the software application. Broadly, the phases may be classified into two categories: a pre-production phase and a post-production phase. The pre-production phase may comprise a development phase, a testing phase, and a deployment phase, whereas the post-production phase may comprise a maintenance phase and an evaluation phase.
[004] In the existing art, the software evaluation methodologies are focused on considering data associated with the pre-production phase in order to determine the quality of the software application. Further, these methodologies may rely only on analyzing the current performance of the software application without considering its historical performance. Thus, the quality determined gives a partial or incomplete measure of the software application. Due to such incomplete measurement of the quality, it may become difficult to judge the quality of the software application at the level required and/or desired by an organization.
SUMMARY
[005] This summary is provided to introduce aspects related to systems and methods for evaluating quality of a software application and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[006] In one implementation, a system for evaluating quality of a software application in a computer network is disclosed. The system may comprise a memory and a processor coupled to the memory for executing a plurality of modules present in the memory. The plurality of modules may comprise a receiving module, a data transformation module, an assigning module, a computing module, a quality scoring module, and a reporting module. The receiving module may be configured to receive defect data, incident data, and outage data associated with the software application. Further, the defect data, the incident data, and the outage data may be received for one or more sub-time intervals of a predefined time. The defect data may indicate one or more instances of defects identified in the software application during software development phase and software testing phase. Further, the incident data may indicate one or more instances of complaints for the software application during a software production phase, and the outage data may be indicative of non-workability of the software application for a time window during the software production phase. The assigning module may be configured to assign a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data respectively. The defect weight, the incident weight, and the outage weight may be assigned based on a predefined weight matrix. The computing module, at first, may be configured to determine a gross score for each sub-time interval. In one aspect, the gross score is the sum of weighted defect count, weighted incident count, and weighted outage duration. In one embodiment, the defect count, the incident count, and the outage duration are obtained from the defect data, the incident data, and the outage data respectively. The computing module may be further configured to divide the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval. In one aspect, the size factor is proportional to a ratio of size of the software application and a benchmark size of a reference software application. Thereafter, the computing module may be configured to compute a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix. In one aspect, the time weighted score may be computed by multiplying the normalized score with a predefined time weight assigned to each sub-time interval in the predefined time weight matrix. Further, the computing module may be configured to determine a total score for the predefined time based on summation of the time weighted score of each sub-time interval. The quality scoring module may be configured to compare the total score with a reference score in order to determine a quality score for the software application. The quality score may thus facilitate holistic evaluation of the quality of the software application. Further, the reporting module may be configured to generate a report depicting a trend of the quality score determined for a series of time instances. The trend depicted may represent a historical performance of the software application for the predefined time.
[007] In another implementation, a method for evaluating quality of a software application in a computer network is disclosed. The method may comprise receiving defect data, incident data, and outage data associated with the software application. Further, the defect data, the incident data, and the outage data may be received for one or more sub-time intervals of a predefined time. The defect data may indicate one or more instances of defects identified in the software application during software development phase and software testing phase. Further, the incident data may indicate one or more instances of complaints for the software application during a software production phase, and the outage data may be indicative of non-workability of the software application for a time window during the software production phase. The method may comprise assigning a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data respectively. The defect weight, the incident weight, and the outage weight may be assigned based on a predefined weight matrix. The method may further comprise determining a gross score for each sub-time interval. In one aspect, the gross score is the sum of weighted defect count, weighted incident count, and weighted outage duration. In one embodiment, the defect count, the incident count, and the outage duration are obtained from the defect data, the incident data, and the outage data respectively. The method may comprise dividing the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval. In one aspect, the size factor is proportional to a ratio of size of the software application and a benchmark size of a reference software application. Thereafter, the method may comprise computing a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix. In one aspect, the time weighted score may be computed by multiplying the normalized score with a predefined time weight assigned to each sub-time interval in the predefined time weight matrix. Further, the method may comprise determining a total score for the predefined time based on summation of the time weighted score of each sub-time interval. The method may comprise comparing the total score with a reference score in order to determine a quality score for the software application. The quality score may thus facilitate holistic evaluation of the quality of the software application. Further, the method may comprise generating a report depicting a trend of the quality score determined for a series of time instances. The trend depicted may represent a historical performance of the software application for the predefined time.
[008] In yet another implementation, a computer program product having embodied thereon a computer program for evaluating quality of a software application in a computer network is disclosed. The computer program product may comprise instructions for receiving defect data, incident data, and outage data associated with the software application. Further, the defect data, the incident data, and the outage data may be received for one or more sub-time intervals of a predefined time. The defect data may indicate one or more instances of defects identified in the software application during software development phase and software testing phase. Further, the incident data may indicate one or more instances of complaints for the software application during a software production phase, and the outage data may be indicative of non-workability of the software application for a time window during the software production phase. The computer program product may comprise instructions for assigning a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data respectively. The defect weight, the incident weight, and the outage weight may be assigned based on a predefined weight matrix. The computer program product may comprise instructions for determining a gross score for each sub-time interval. In one aspect, the gross score is the sum of weighted defect count, weighted incident count, and weighted outage duration. In one embodiment, the defect count, the incident count, and the outage duration are obtained from the defect data, the incident data, and the outage data respectively. The computer program product may comprise instructions for dividing the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval. In one aspect, the size factor is proportional to a ratio of size of the software application and a benchmark size of a reference software application. Thereafter, the computer program product may comprise instructions for computing a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix. In one aspect, the time weighted score may be computed by multiplying the normalized score with a predefined time weight assigned to each sub-time interval in the predefined time weight matrix. Further, the computer program product may comprise instructions for determining a total score for the predefined time based on summation of the time weighted score of each sub-time interval. The computer program product may comprise instructions for comparing the total score with a reference score in order to determine a quality score for the software application. The quality score may thus facilitate holistic evaluation of the quality of the software application. Further, the computer program product may comprise instructions for generating a report depicting a trend of the quality score determined for a series of time instances. The trend depicted may represent a historical performance of the software application for the predefined time.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0010] Figure 1 illustrates a network implementation of a system for evaluating quality of a software application, in accordance with an embodiment of the present subject matter.
[0011] Figure 2 illustrates the system, in accordance with an embodiment of the present subject matter.
[0012] Figure 3 illustrates a working of the system in detail, in accordance with an embodiment of the present subject matter.
[0013] Figure 4 illustrates a method for evaluating quality of a software application, in accordance with another embodiment of the present subject matter.
DETAILED DESCRIPTION
[0014] Systems and methods for evaluating quality of a software application are described. The quality of the software application or a software product may be measured based on guidelines provided by predefined standards such as the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 250NN series of standards. The ISO/IEC 250NN series is also called the SQuaRE series. According to embodiments of the present subject matter, the system may be provided for collecting data from different phases/stages of a software development lifecycle (SDLC) of the software application. From each phase, the data, i.e., defect data, incident data, and outage data, may be collected or received by the system. By considering the data from all the phases associated with the lifecycle of the software application, the system may be enabled to cover each aspect of the software application for determining its quality.
[0015] The stated and implied needs of applications or products may change over time. The applications or products too may change over time to support these changes. The quality of the application therefore also changes from time to time. The quality of the application may thus be defined for a predetermined instance of time, hereinafter referred to as time ‘T’. However, while determining the quality of an application at time T, the historical quality levels of that application may not be ignored. The defects, incidents and outages that have happened in the past may also have an influence on the quality perception of the application. However, this influence might not be constant over time. Recent events (i.e. defects, incidents or outages) might have a higher influence than events that occurred in the past.
[0016] Applications may have a short or a long history. While some applications may have entered the production phase very recently, others may be in production use for over 5, 10 or even 20 years. When considering the historical quality of applications, the full production use time may not always be relevant. The window of time during which the historical events have an influence on the current quality of the application may be referred to as the Influence Window. The influence window may vary from application to application, and may range from as small as one month to as large as 3 years. The influence window may be measured backwards from the time instance at which the quality is to be determined (i.e. from the time ‘T’). The specific influence window duration defined for an application is referred to as the predefined time.
[0017] The predefined time is divided into multiple equal parts, hereinafter referred to as sub-time intervals, which may correspond to different levels of influence over the quality at the current time. These parts are numbered sequentially in reverse chronological order from time T. Figure 5 illustrates a typical representation of the influence window comprising one or more sub-time intervals of the predefined time T. For each sub-time interval, a weight is associated which is proportional to the influence of the events of that sub-time interval on the quality of the application at time T. This is referred to as the predefined time-weight matrix.
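The reverse-chronological numbering of sub-time intervals can be illustrated with a short sketch. The following Python snippet is a minimal illustration, assuming weekly sub-time intervals counted back from the evaluation time T; the function name and the date handling are illustrative assumptions, not part of the specification.
```python
from datetime import date

def sub_interval_index(event_date: date, evaluation_date: date, interval_days: int = 7) -> int:
    """Number of the sub-time interval an event falls in, counted in reverse
    chronological order from the evaluation time T (index 0 is the most recent)."""
    if event_date > evaluation_date:
        raise ValueError("event lies after the evaluation time T")
    return (evaluation_date - event_date).days // interval_days

# With T taken as 29 December 2013 and weekly sub-time intervals, an event on
# 14 October 2013 falls in sub-time interval number 10 (consistent with the
# worked example discussed later with reference to table 2).
print(sub_interval_index(date(2013, 10, 14), date(2013, 12, 29)))  # -> 10
```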
[0018] For the data collected, i.e., the defect data, the incident data, and the outage data, weights may be assigned. Specifically, the weights may be assigned to each instance of defects, each instance of incidents, and each instance of outages identified from the defect data, the incident data, and the outage data. Each instance of defects, each instance of incidents, and each instance of outages may correspond to different sub time intervals of the predefined time, based on the detection date and time of the defect, and the date and time of the occurrence of an incident or an outage. The weights may be assigned based on predefined standards or a predefined weight matrix associated with the software application. After assigning the weights, the system may be configured to perform different quantitative measurements for evaluating the software application. At first, the system may determine a gross score for each sub time interval based upon the sum of weighted defect counts, weighted incident counts and weighted outage duration for that sub time interval. The system may further determine a normalized score for each sub time interval by dividing the gross score by a size factor of the software application. According to some embodiments, the size factor is proportional to the ratio of size of the software application and a benchmark size of a reference software application. Further, the system may compute a time weighted score for each sub time interval by multiplying the normalized score with a predefined time weight corresponding to each sub time interval. The predefined time weight corresponding to each sub time interval is stored in a predefined time weight matrix.
[0019] After the computation of the time weighted score for each sub time interval, the system may determine a total score for the predefined time (or influence window) based on summation of the time weighted score of each sub time interval. After determining the total score, the system may determine a quality score for the software application by comparing the total score with a reference score. Thus, the quality score may be utilized in order to evaluate the software application.
[0020] While aspects of described system and method for evaluating quality of a software application may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
[0021] Referring now to Figure 1, a network implementation 100 of a system 102 for evaluating quality of a software application is illustrated, in accordance with an embodiment of the present subject matter. The system 102 may be configured to receive defect data, incident data, and outage data associated with the software application. Further, the defect data, the incident data, and the outage data may be received for one or more sub-time intervals of a predefined time. The defect data may indicate one or more instances of defects identified in the software application during software development phase and software testing phase. Further, the incident data may indicate one or more instances of complaints for the software application during a software production phase, and the outage data may be indicative of non-workability of the software application for a time window during the software production phase. The system 102 may be configured to assign a defect weight, an incident weight, and an outage weight to the defect data, the incident data, and the outage data respectively. The defect weight, the incident weight, and the outage weight may be assigned based on a predefined weight matrix. The system 102 may be configured to determine a gross score for each sub-time interval. In one aspect, the gross score is the sum of weighted defect count, weighted incident count and weighted outage duration that occurred within the sub time interval. In one embodiment, the defect count, the incident count, and the outage duration are obtained from the defect data, the incident data, and the outage data respectively. The system 102 may be further configured to divide the gross score by a size factor of the software application in order to determine a normalized score for each sub-time interval. In one aspect, the size factor is proportional to a ratio of size of the software application and a benchmark size of a reference software application. Thereafter, the system 102 may be configured to compute a time weighted score for each sub-time interval based upon the normalized score and a predefined time weight matrix. In one aspect, the time weighted score may be computed by multiplying the normalized score with a predefined time weight assigned to each sub-time interval in the predefined time weight matrix. Further, the system 102 may be configured to determine a total score for the predefined time based on summation of the time weighted score of each sub-time interval. The system 102 may be configured to compare the total score with a reference score in order to determine a quality score for the software application. The quality score may thus facilitate holistic evaluation of the quality of the software application. Further, the system 102 may be configured to generate a report depicting a trend of the quality score determined for a series of time instances. The trend depicted may represent a historical performance of the software application for the predefined time.
[0022] Although the present subject matter is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. In one implementation, the system 102 may be implemented in a cloud-based environment. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2…104-N, collectively referred to as user devices 104 hereinafter, or applications residing on the user devices 104. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0023] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0024] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 may include a processor 202, an input/output (I/O) interface 204, and a memory 206. The processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0025] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with a user directly or through the user devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0026] The memory 206 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 224.
[0027] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks, functions or implement particular abstract data types. In one implementation, the modules 208 may include a receiving module 210, an assigning module 212, a computing module 214, a quality scoring module 216, a data transformation module 218, a reporting module 220, and other modules 222. The other modules 222 may include programs or coded instructions that supplement applications and functions of the system 102.
[0028] The data 224, amongst other things, serves as a repository for storing data processed, received, and generated by the processor 202. The data 224 may also include a size factor database 226, a reference score database 228, a weight matrix database 230, and other data 232. The other data 232 may include data generated as a result of the execution of the processor 202.
[0029] In one implementation, at first, a user may use one of the user devices 104 to access the system 102 via the I/O interface 204. The user may register themselves using the I/O interface 204 in order to use the system 102. The working of the system 102 is explained in detail with reference to Figure 3 below. The system 102 may be used for evaluating quality of the software application.
[0030] Referring to Figure 3, a detailed working of the system 102 is illustrated, in accordance with an embodiment of the present subject matter. In one implementation, the system 102 may be provided for evaluating quality of a software application. For evaluating the quality, the system 102 is configured for collecting data from each phase of the lifecycle of the software application. According to embodiments of the present subject matter, the phases of the software application may comprise a software development phase, a software testing phase, and a software production phase. The terms “software development phase”, “software testing phase”, and “software production phase” may also be referred to as “development phase”, “testing phase”, and “production phase”, respectively. The development phase and the testing phase may comprise defect data. Similarly, the production phase may comprise incident data and outage data associated with the software application.
[0031] Further, a database may be used for storing the defect data, the incident data, and the outage data collected from their respective phases of the lifecycle of the software application. For example, a defect management database 302, an incident management database 304, and an outage management database 306 may be configured to store the defect data, the incident data, and the outage data, respectively. In one implementation, the defect management database 302, the incident management database 304 and the outage management database 306 may belong to a plurality of software development organizations. Further, the defect data stored in the defect management database 302 is collected from various sub-phases of the development phase and the testing phase. In one example, the sub-phase of the development phase may comprise a “review phase” and the sub-phases of the testing phase may comprise “System Integration Testing (SIT)”, “User Acceptance Testing (UAT)”, and “Non-Functional Testing (NFT)”. Further, the defect data collected may indicate instances of defects identified in the software application during the development phase (along with their sub-phase) and the testing phase (along with their sub-phases). Further, the incident management database 304 may be configured for storing “incident data” collected during the production phase of the software application. The incident data collected may indicate instances of complaints reported by a user during the production phase of the software application (also referred to as user incidents), or instances of complaints identified automatically by an external application and system monitoring tools (also referred to as system incidents). Next, the outage management database 306 may be configured for storing “outage data” recorded during the production phase of the software application. The outage data, among other things, also includes the time of the outage and outage duration associated with the outage. The outage duration may indicate a non-workability of the software application for a time window during the production phase. Further, the time window may be in terms of seconds, minutes, and hours during which the software application is detected to be in the non-workable condition. The databases 302, 304, and 306 may be updated at regular time intervals or continuously, and the data from these databases may be collected in the other data 232 at periodic intervals or on a continuous basis, which may be used for evaluating the quality of the software application.
[0032] According to some embodiments of the present subject matter, a receiving module 210 may be configured to receive the above discussed data, i.e., the defect data, the incident data, and the outage data collected in their respective databases 302, 304, and 306, respectively. The data collected by the receiving module 210 may be stored in the other data 232. Since the data, i.e., the defect data, the incident data, and the outage data, is received in different formats, it may be required to transform the data into a uniform/standard format. According to embodiments, a data transformation module 218 may be configured to transform the defect data, the incident data, and the outage data into the uniform/standard format.
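As an illustration of the kind of uniform/standard format the data transformation module 218 may produce, the following Python sketch defines a single record type; the field names and the mapping function are assumptions for illustration only and are not prescribed by the specification.
```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class QualityEvent:
    """Uniform/standard record for defects, incidents and outages (illustrative fields)."""
    event_type: str                       # "Defect", "Incident" or "Outage"
    phase: str                            # "Development", "Testing" or "Production"
    sub_phase: str                        # e.g. "Review", "SIT", "UAT", "NFT", "Maintenance"
    classification: str                   # e.g. "Functional", "Config Error", "S/W Error"
    severity: str                         # e.g. "High", "Medium", "Low"
    occurred_at: datetime                 # detection/occurrence date and time
    outage_seconds: Optional[int] = None  # populated only for outage records

def transform_defect_record(raw: dict) -> QualityEvent:
    """Map one raw row from the defect management database 302 into the standard format."""
    return QualityEvent(
        event_type="Defect",
        phase=raw["phase"],
        sub_phase=raw["sub_phase"],
        classification=raw["classification"],
        severity=raw["severity"],
        occurred_at=datetime.fromisoformat(raw["detected_on"]),
    )
```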
[0033] Further, the data, i.e., the defect data, the incident data, and the outage data, may be received by the receiving module 210 for one or more sub-time intervals of a predefined time. Thus, by receiving the data from each phase of the software lifecycle, the system 102 provides a wider coverage for evaluating the quality of the software application. In one aspect of the present subject matter, considering the predefined time as a year, the one or more sub-time intervals may be a week, a month, a quarter, and the like.
[0034] Further, an assigning module 212 may be configured to assign weights to the defect data, the incident data, and the outage data for each sub-time interval. According to some embodiments of the present subject matter, the weights are assigned based on a predefined weight matrix, which is shown below in table 1. Further, the predefined weight matrix may be stored in the weight matrix database 230.
Phase | Event | Phase Weight | Sub-Phase | Sub-Phase Weight | Classification | Classification Weight | Severity/Impact | Severity/Impact Weight | Weight Assigned
Development | Defect | 0.2 | Review | 0.5 | Functional | 1 | High | 0.9 | 0.09
Development | Defect | 0.2 | Review | 0.5 | Security | 1 | Medium | 0.8 | 0.08
Testing | Defect | 0.4 | SIT | 0.75 | Functional | 1 | High | 0.9 | 0.27
Testing | Defect | 0.4 | UAT | 1 | Performance | 1 | Low | 0.7 | 0.28
Testing | Defect | 0.4 | NFT | 0.8 | Performance | 1 | Medium | 0.8 | 0.256
Production | Outage | 0.8 | Maintenance | 1 | S/W Error | 1 | High | 1 | 0.8
Production | Outage | 0.8 | Maintenance | 1 | H/W Error | 0.75 | High | 1 | 0.6
Production | Incident | 0.8 | Maintenance | 0.9 | S/W Error | 1 | High | 0.9 | 0.648
Production | Incident | 0.8 | Maintenance | 0.9 | Config Error | 0.9 | High | 0.9 | 0.5832
Production | Incident | 0.8 | Maintenance | 0.9 | H/W Error | 0.7 | High | 0.9 | 0.4536
Table 1: Predefined Weight Matrix illustrating the predefined weights for different phases of a lifecycle of software application.

[0035] As shown, in the table 1, corresponding to each type of defect, incident and an outage identified during any of multiple phases of software lifecycle, a predefined weight is assigned in the predefined weight matrix. In one example, as shown in Table 1, a defect of classification type “Functional” having Severity Type “High” and belonging to a sub-phase “Review” of the phase “Development” is assigned with a weight of 0.09 (Illustrated in Row 1 of the Table 1). Similarly, an incident of classification type “Config Error” having Severity Type “High” and belonging to a sub-phase “Maintenance” of the phase “Production” is assigned with a weight of 0.5832 (Illustrated in Row 9 of the Table 1). It is to be noted that the above table 1 is exemplary and hence may be predefined in a plurality of other configurations depending on the requirements of the system 102. For example, any row or column of the predefined weight matrix may be updated, deleted, and/or added based on the requirements of the user using the system 102. The predefined weight matrix may be utilized by the assigning module 212 for assigning weight to each instance of defect, each instance of incident, and each instance of outage identified corresponding to each sub-time interval of the predefined time. In one example, assuming the quality is to be evaluated as of the midnight of 29th December 2013, and assuming the predefined time to be 6 months, and accordingly each sub-time interval as a calendar week, the Predefined Weight Matrix may be utilized for assigning weights corresponding to each instance of defect, each instance of an incident and each instance of outage, as explained below referring to table 2. In one implementation, assume that the data represented in table 2 belongs to the week from 14th Oct 2013 to 20th Oct 2013, which corresponds to sub time interval No 10, counted in the reverse chronological order from 29th December 2013, and the first week is numbered as 0.
[0036] It can be seen from the last column of table 2 that the weight assigned for each category of defect, incident or outage may be calculated by multiplying the individual weights assigned for that category. In one example, the individual weights assigned (by referring to table 1) for the “Development” phase, the “Review” sub-phase, the “Functional” classification, and the “High” severity are “0.2”, “0.5”, “1”, and “0.9”, respectively. Thus, after assigning the individual weights for each category of the defect data, the weight assigned (column 6 of table 2) by the assigning module 212 is determined by multiplying the individual weights assigned for that category (where the individual weights are assigned based on the predefined weight matrix, i.e., from table 1). Thus, the weight assigned for the above example is “0.09”, i.e., {0.2*0.5*1*0.9=0.09}. By following the same methodology, the assigning module 212 may assign the weights for each data (defect data, incident data, and outage data). This example illustrates assigning a weight corresponding to a single instance of a defect identified in a week. For the same week, if the number of instances of similar defects is more than one, then the assigning module 212 may assign the weight corresponding to said defect data as the number of instances of the defect multiplied by the weight assigned for a single instance. For example, in the current scenario, if there are three instances of defects identified in the review phase of the development phase, then the final weight corresponding to defect data of the sub-phase “Review” of the phase “Development” having severity “High” will be 0.27 (last column of table 2). Similarly, for other defects and incidents identified corresponding to the sub time interval number 10, the final weights may be assigned, as illustrated in table 2. Unlike defects and incidents, the final weight assigned to outages may be determined based on the total time duration of the non-workability of the software application during the week. For example, referring to the last two columns depicting outage data, the outages identified during the production phase and classified as S/W error and H/W error are assigned the final weights of 8 (0.8*10) and 6 (0.6*10) respectively.
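The multiplication of individual weights described above can be expressed compactly in code. The following Python sketch assumes the predefined weight matrix is held as a dictionary keyed by (phase, event, sub-phase, classification, severity); this representation is an assumption for illustration, while the weight values are taken from table 1.
```python
# A few rows of the predefined weight matrix of table 1, keyed by
# (phase, event, sub-phase, classification, severity); the values are the
# individual phase, sub-phase, classification and severity/impact weights.
WEIGHT_MATRIX = {
    ("Development", "Defect", "Review", "Functional", "High"): (0.2, 0.5, 1.0, 0.9),
    ("Testing", "Defect", "SIT", "Functional", "High"): (0.4, 0.75, 1.0, 0.9),
    ("Production", "Incident", "Maintenance", "Config Error", "High"): (0.8, 0.9, 0.9, 0.9),
}

def final_weight(key, count_or_duration):
    """Final weight for one category: the product of its individual weights, multiplied by
    the number of instances (defects/incidents) or by the outage duration (outages)."""
    phase_w, sub_phase_w, class_w, severity_w = WEIGHT_MATRIX[key]
    return phase_w * sub_phase_w * class_w * severity_w * count_or_duration

# Three "High" functional defects found in review: 0.2 * 0.5 * 1 * 0.9 * 3 = 0.27, as in table 2.
print(round(final_weight(("Development", "Defect", "Review", "Functional", "High"), 3), 4))
```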
[0037] After assigning the final weights, the computing module 214 may be configured to determine the gross score for the sub time interval 10 based on sum of weighted defect counts, weighted incident counts and weighted outage duration. Specifically, it is observed that the gross score for the sub time interval 10 is 16.9612 {0.27 + 0.54 + 0.56 + 0.5832 + 1.008 + 8 + 6 = 16.9612}. Similarly, the gross score may be determined for remaining weeks of the predefined time (i.e. Six Months).

Data | Phase | Sub-Phase | Classification | Severity/Impact | Weight as per Table 1 | Number of Instances / Outage Duration in Sec | Final Weight
Defect Data | Development | Review | Any | High | 0.09 | 3 | 0.09*3 = 0.27
Defect Data | Testing | SIT | Any | High | 0.27 | 2 | 0.27*2 = 0.54
Defect Data | Testing | UAT | Any | Low | 0.28 | 2 | 0.28*2 = 0.56
Incident Data | Production | Maintenance | Config Error | High | 0.5832 | 1 | 0.5832*1 = 0.5832
Incident Data | Production | Maintenance | Other Error | Any | 0.504 | 2 | 0.504*2 = 1.008
Outage Duration | Production | Maintenance | S/W Error | Any | 0.8 | 10 | 0.8*10 = 8
Outage Duration | Production | Maintenance | H/W Error | Any | 0.6 | 10 | 0.6*10 = 6
Gross Score | | | | | | | 16.9612
Table 2: Determination of Gross Score for a sub time interval (Number 10).
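The gross score computation of paragraph [0037] is simply a sum of the final weights in table 2. A minimal Python sketch, using the final weights derived above:
```python
# Final weights for sub time interval number 10, taken from the last column of table 2.
final_weights = [0.27, 0.54, 0.56, 0.5832, 1.008, 8, 6]

def gross_score(weights):
    """Gross score of a sub-time interval: sum of the weighted defect counts,
    weighted incident counts and weighted outage durations."""
    return sum(weights)

print(round(gross_score(final_weights), 4))  # -> 16.9612
```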

[0038] After the determination of the gross score for each week (each sub-time interval) within the predefined time, the computing module 214 may be configured to determine a normalized score for each week. The normalized score is determined by dividing the gross score by a size factor of the software application. According to some embodiments of the present subject matter, the size factor is proportional to the ratio of size of the software application and a benchmark size of a reference software application. Further, the reference software application may be a predefined software application considered for determining the size factor. Further, the benchmark size of the reference software application may remain constant for determining the size factor for all other applications. In the embodiments of the present subject matter, a record comprising the name of the reference software application and the benchmark size of the reference software application may be stored in the size factor database 226 of the system 102. It may be noted by a person skilled in the art that there may be ‘n’ number of records present in the size factor database 226 mapping names of software applications with corresponding benchmark sizes. More particularly, the size factor database 226 stores one reference application along with a benchmark size corresponding to each software application. The size factor database 226 may be referred to for determining the size factor for different sets of software applications to be evaluated with respect to quality parameters using the system 102. The size factor may then be utilized in order to determine the normalized score corresponding to each week (sub-time interval) of the 6 month period (predefined time). This is further explained with reference to table 3 below.
[0039] It may be observed from table 3 that the size factor for the software application being evaluated is assumed to be 2. Further, in addition to the gross score determined for the week 10, which is 16.9612, table 3 indicates gross scores for week 1, week 5, week 15, and week 20 as 16.9613, 16.9614, 16.9615, and 16.9616 respectively. It is to be understood that the gross scores for the week 1, the week 5, the week 15, and the week 20 may be determined based on the aforementioned methodology described with reference to table 1 and table 2. Thus, referring to table 3, the normalized scores for the week 10, the week 1, the week 5, the week 15, and the week 20 are 8.4806, 8.48065, 8.4807, 8.48075 and 8.4808 respectively.

Week Number | Gross Score | Size Factor | Normalized Score
10 | 16.9612 | 2 | 16.9612/2 = 8.4806
1 | 16.9613 | 2 | 16.9613/2 = 8.48065
5 | 16.9614 | 2 | 16.9614/2 = 8.4807
15 | 16.9615 | 2 | 16.9615/2 = 8.48075
20 | 16.9616 | 2 | 16.9616/2 = 8.4808
Table 3: Determining the normalized score for different weeks.
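The normalization step of paragraph [0038] can be sketched as follows; the way the size factor is derived from the two sizes here is an assumption, since the specification only states that it is proportional to their ratio.
```python
def normalized_score(gross, application_size, benchmark_size):
    """Divide the gross score by the size factor, here taken directly as the ratio of the
    size of the application being evaluated to the benchmark size of the reference application."""
    size_factor = application_size / benchmark_size
    return gross / size_factor

# With a size factor of 2, as assumed in table 3: 16.9612 / 2 = 8.4806
print(normalized_score(16.9612, application_size=2, benchmark_size=1))
```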

[0040] Similarly, the computing module 214 may be configured to determine the normalized score for the other weeks of the 6 month duration. After determining the normalized score for each week (each sub-time interval), the computing module 214 may be further configured to compute a time weighted score for each week using a predefined time-weight matrix stored in the weight matrix database 230 of the system 102. The time-weight matrix comprises a predefined time weight assigned for each sub time interval. Table 4 illustrates an example of the time-weight matrix showing assignment of the time weight for each sub time interval of a 6 month influence window having 26 sub time intervals (weeks) in total.

Week No Time Weight
0 100%
1 99.739%
2 99.013%
3 98.015%
4 96.745%
5 95.203%
6 93.389%
7 91.303%
8 88.945%
9 86.315%
10 83.413%
11 80.239%
12 76.793%
13 73.075%
14 69.085%
15 64.823%
16 60.289%
17 55.483%
18 50.405%
19 45.055%
20 39.433%
21 33.539%
22 27.373%
23 20.935%
24 14.225%
25 7.243%

Table 4: Time-weight Matrix illustrating the predefined time weights assigned to various sub time intervals.
[0041] Based upon the time-weight matrix, the computing module 214 may compute the time weighted score for a specific week by multiplying the time weight corresponding to the specific week with the normalized score of the specific week. In one example, the time weighted score for the week 10 of the predefined time is computed by referring to tables 3 and 4. Specifically, in one example, the time weighted score for the week 10 is 8.4806 * 83.413% = 7.07. Similarly, the time weighted score for each week (sub-time interval) of the 6 month period (predefined time) may be computed by the computing module 214. Table 5 illustrates the time weighted score corresponding to the week 10, the week 1, the week 5, the week 15, and the week 20, as described above.

Week Number | Normalized Score | Weight as per Table 4 | Time Weighted Score
10 | 8.4806 | 83.413% = 0.83413 | 8.4806*0.83413 = 7.07
1 | 8.48065 | 99.739% = 0.99739 | 8.48065*0.99739 = 8.46
5 | 8.4807 | 95.203% = 0.95203 | 8.4807*0.95203 = 8.07
15 | 8.48075 | 64.823% = 0.64823 | 8.48075*0.64823 = 5.79
20 | 8.4808 | 39.433% = 0.39433 | 8.4808*0.39433 = 3.34
Total Score | | | 7.07 + 8.46 + 8.07 + 5.79 + 3.34 = 32.73
Table 5: Computing the Total score for five weeks (sub-time intervals).
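The per-week multiplication of paragraph [0041] and the summation described in paragraph [0042] below reduce to two small helpers; the following Python sketch recomputes the week 10 entry of table 5 from the values in tables 3 and 4.
```python
def time_weighted_score(normalized, time_weight):
    """Multiply the normalized score of a sub-time interval by its predefined time weight."""
    return normalized * time_weight

def total_score(time_weighted_scores):
    """Total score for the predefined time: summation of the time weighted scores
    of all sub-time intervals."""
    return sum(time_weighted_scores)

# Week 10 from tables 3 and 4: 8.4806 * 83.413% is approximately 7.07
print(round(time_weighted_score(8.4806, 0.83413), 2))
```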

[0042] After computing the time weighted score for each week (each sub-time interval) of the 6 month period (predefined time), the computing module 214 may be further configured for determining a total score for the software application corresponding to the predefined time. Further, the total score is determined based on summation of the time weighted score of each week (each sub-time interval) within the predefined time. In one example, assuming that for the 6 month period ending 29th December 2013, the defects, the incidents and the outages were identified only corresponding to the five weeks (week no 1, week no 5, week no 10, week no 15 and week no 20) as illustrated in table 5, then the total score of the software application corresponding to the 6 month period is 32.73 {7.07 + 8.46 + 8.07 + 5.79 + 3.34 = 32.73}. It is to be noted by one skilled in the art that the above example is described for exemplary purposes and is not intended to limit the scope of the present disclosure. For example, in an alternative embodiment, when defects, incidents and outages are identified for all the 26 weeks in the predefined time of 6 months, then the system 102 may be enabled to compute the total score for the predefined time based upon the summation of the time weighted score of each of the 26 weeks.
[0043] After the determination of the total score, a quality scoring module 216 may be configured to determine a quality score for the software application by comparing the total score with a reference score. Further, the reference score may be predefined for the software application, and may be stored in a reference score database 228. In one example, assuming the reference score to be "1000", the quality score for the software application may be determined based upon the difference between the reference score and the total score of the software application. Therefore, considering the above example of the five weeks having the total score as 32.73, the quality score for the software application is 1000-32.73 = 967.27. Thus, based upon the quality score determined through the quality scoring module 216, the system 102 may evaluate the quality of the software application.
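A one-line sketch of the comparison performed by the quality scoring module 216, assuming, as in the example above, that the comparison is a simple difference from the reference score:
```python
def quality_score(total, reference=1000):
    """Quality score: difference between the reference score and the total score;
    a higher value indicates better quality."""
    return reference - total

print(round(quality_score(32.73), 2))  # -> 967.27
```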
[0044] Further, the reporting module 220 may be configured to generate a report depicting a trend of the quality score obtained as above at the end of each calendar week of a year. The trend may reflect the historical performance of the software application. Thus, the system 102 facilitates the evaluation of the quality of the software application based on the quality score determined. Thus, the system 102 ensures that the data from each phase of the lifecycle of the software application is considered for determining the quality of the software application. Also, by considering the size factor and historical performance of the software application, the system 102 provides a more efficient and accurate methodology for evaluating the software application.
[0045] The quality scoring mechanism thus converts a perception into an objective value. A higher value of the quality score indicates better quality, and vice versa. It is therefore possible to compare quality scores across time or across applications.
[0046] The quality score so obtained may be compared against the quality score for another application to determine the relative level of quality between these two applications. Multiple applications may be grouped together in a specific group and an average quality score may be obtained for the specific group. Further, the trend of such average scores may be observed to determine if the applications belonging to the specific group are showing an improvement in quality.
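Group-level comparison as described above amounts to averaging the individual quality scores; a minimal sketch with illustrative application names and score values:
```python
from statistics import mean

# Illustrative quality scores for applications grouped together.
group_scores = {"app_a": 967.27, "app_b": 941.10, "app_c": 955.42}

average_quality = mean(group_scores.values())
print(round(average_quality, 2))  # average quality score for the group
```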
[0047] Referring now to Figure 4, a method 400 for evaluating quality of a software application is shown, in accordance with an embodiment of the present subject matter. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0048] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400 or alternate methods. Additionally, individual blocks may be deleted from the method 400 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 400 may be considered to be implemented in the above described system 102.
[0049] At block 402, a defect data, incident data, and an outage data associated with the software application may be received corresponding to one or more sub-time intervals of a predefined time. The defect data, the incident data, and the outage data may be received from a defect management database 302, an incident management database 304, and an outage management database 306, respectively. Further, the defect data, the incident data, and the outage data may be stored in the other data 232.
[0050] At block 404, a defect weight, an incident weight, and an outage weight may be assigned to each defect, incident and outage based upon the predefined weight matrix stored in the weight matrix database 230.
[0051] At block 406, a gross score for each sub-time interval may be determined based on a sum of weighted defect count, weighted incident count and weighted outage duration.
[0052] At block 408, the gross score may be divided by a size factor for determining a normalized score for each sub-time interval. The size factor is proportional to the ratio of size of the software application (to be evaluated) and a benchmark size of a reference software application. According to embodiments of present subject matter, the size factor may be stored in a size factor database 226.
[0053] At block 410, a time weighted score may be computed for each sub-time interval based upon a predefined time weight matrix. Further, the predefined time weight matrix may be stored in the weight matrix database 230. In one implementation, the time weighted score is computed by multiplying the normalized score with a predefined time weight assigned for each sub-time interval in the predefined time weight matrix.
[0054] At block 412, a total score may be determined for the predefined time based on summation of the time weighted score of each sub-time interval.
[0055] At block 414, a quality score for the software application may be determined by comparing the total score with a reference score. The reference score may be a predefined quantitative value used for determining the quality score for the software application. Further, the reference score may be stored in the reference score database 228. According to one embodiment of present subject matter, the quality score may be a difference between the reference score and the total score. Thus, the quality score facilitates the evaluation of the quality of the software application.
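The blocks 402 to 414 can be strung together in a single routine. The sketch below is one possible arrangement under the same assumptions as the earlier snippets (events keyed by sub-time interval and category, with per-instance weights pre-computed from the weight matrix); it is illustrative rather than a definitive implementation of the method 400.
```python
from collections import defaultdict

def evaluate_quality(events, weight_matrix, time_weights, size_factor, reference_score):
    """events: iterable of (sub_interval, category_key, count_or_duration) tuples.
    weight_matrix: maps a category key to its per-instance (or per-second) weight.
    time_weights: maps a sub-time interval number to its predefined time weight."""
    # Blocks 404 and 406: weight each event and accumulate a gross score per sub-time interval.
    gross = defaultdict(float)
    for sub_interval, key, count_or_duration in events:
        gross[sub_interval] += weight_matrix[key] * count_or_duration
    # Blocks 408 to 412: normalize by the size factor, apply the time weights, and sum.
    total = sum((score / size_factor) * time_weights[sub_interval]
                for sub_interval, score in gross.items())
    # Block 414: compare the total score with the reference score.
    return reference_score - total
```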
[0056] Although implementations for methods and systems for evaluating quality of a software application have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for evaluating quality of the software application.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 487-MUM-2014-Response to office action [25-09-2023(online)].pdf 2023-09-25
2 487-MUM-2014-Correspondence to notify the Controller [31-08-2023(online)].pdf 2023-08-31
3 487-MUM-2014-FORM-26 [31-08-2023(online)].pdf 2023-08-31
4 487-MUM-2014-FORM-26 [31-08-2023(online)]-1.pdf 2023-08-31
5 487-MUM-2014-US(14)-HearingNotice-(HearingDate-15-09-2023).pdf 2023-08-14
6 487-MUM-2014-CLAIMS [04-04-2020(online)].pdf 2020-04-04
7 487-MUM-2014-COMPLETE SPECIFICATION [04-04-2020(online)].pdf 2020-04-04
8 487-MUM-2014-FER_SER_REPLY [04-04-2020(online)].pdf 2020-04-04
9 487-MUM-2014-OTHERS [04-04-2020(online)].pdf 2020-04-04
10 487-MUM-2014-FER.pdf 2019-10-04
11 Form 2.pdf 2018-08-11
12 Form 3.pdf 2018-08-11
13 Form 5.pdf 2018-08-11
14 Figure for Abstract.jpg 2018-08-11
15 ABSTRACT1.jpg 2018-08-11
16 Drawings.pdf 2018-08-11
17 487-MUM-2014-FORM 1(21-2-2014).pdf 2018-08-11
18 487-MUM-2014-FORM 26(19-3-2014).pdf 2018-08-11
19 487-MUM-2014-CORRESPONDENCE(21-2-2014).pdf 2018-08-11
20 487-MUM-2014-CORRESPONDENCE(19-3-2014).pdf 2018-08-11

Search Strategy

1 SearchStrategyMatrix_04-10-2019.pdf
2 2020-07-2716-41-43AE_27-07-2020.pdf