
Automated Assessment Of Testing Skills

Abstract: The present subject matter relates to a computer-implemented method for assessing testing skills of software testers. The computer-implemented method includes receiving at least one test case from a software tester, wherein the at least one test case identifies at least one defect in a program code. The computer-implemented method further includes determining one or more parameters indicative of a type and a severity of the at least one defect based on predefined rules. The computer-implemented method further includes calculating a score of the software tester based at least on the determined type and severity. (To be published with Fig. 3)


Patent Information

Application #:
Filing Date: 25 April 2012
Publication Number: 49/2013
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2021-07-08
Renewal Date:

Applicants

TATA CONSULTANCY SERVICES LIMITED
NIRMAL BUILDING, 9TH FLOOR, NARIMAN POINT, MUMBAI, MAHARASHTRA 400021, INDIA

Inventors

1. NANDA, MOHIT
TATA CONSULTANCY SERVICES LTD., 3A103, GATEWAY PARK, ROAD#21, MIDC, ANDHERI EAST, MUMBAI - 400093, INDIA
2. KHANAPURKAR, AMOL
TATA CONSULTANCY SERVICES LTD., TCS INNOVATION LABS-PERFORMANCE ENGINEERING, 5TH FLOOR, GATEWAY PARK, ROAD#21, MIDC, ANDHERI EAST, MUMBAI-400093, INDIA

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10, rule 13)
1. Title of the invention: AUTOMATED ASSESSMENT OF TESTING SKILLS
2. Applicant(s)
Name: TATA CONSULTANCY SERVICES LIMITED
Nationality: Indian
Address: Nirmal Building, 9th Floor, Nariman Point, Mumbai, Maharashtra 400021, India
3. Preamble to the description
COMPLETE SPECIFICATION
The following specification particularly describes the invention and the manner in which it
is to be performed.

TECHNICAL FIELD
[0001] The present subject matter, in general, relates to assessing testing skills of
software testers and, in particular, relates to automated assessment of testing skills of software testers.
BACKGROUND
[0002] Software development includes a number of stages. Before software and applications can be deployed at a client's end, the software is tested for defects or errors. Determining and addressing such defects is primarily done by one or more individuals, or testers. Furthermore, the competencies of such testers may need to be evaluated, either as part of regular or periodic assessment within an organization, or as part of a competition between a large number of testers. Such a competition is primarily intended for assessing the technical competencies of the individual testers.
[0003] Such assessments can be automated, wherein the skills of the testers are evaluated based on the defects they detect in software code written by programmers. It will also be appreciated that an efficient and productive testing phase is likely to improve the software development lifecycle as a whole. To improve efficiency and productivity within the testing phase, efficient and competent testers are to be identified and utilized.
SUMMARY
[0004] This summary is provided to introduce concepts related to automated assessment of testing skills, and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[0005] In one implementation, a computer-implemented method for assessing testing skills of software testers is provided. The computer-implemented method includes receiving at least one test case from a software tester, wherein the at least one test case identifies at least one defect in a program code. The computer-implemented method further includes determining one or more parameters indicative of a type and a severity of the at least one defect based on predefined rules. The computer-implemented method further includes calculating a score of the software tester based at least on the determined type and severity.

BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.
[0007] Fig. 1 illustrates a network environment implementing a software testers assessment system, in accordance with an implementation of the present subject matter.
[0008] Fig. 2 illustrates a computing system for software testers assessment, in accordance with an implementation of the present subject matter.
[0009] Fig. 3 illustrates a method for assessing testing skills of software testers, in accordance with an implementation of the present subject matter.
DETAILED DESCRIPTION
[0010] Systems and methods for assessing software testing skills of testers are described.
Such systems and methods can be implemented in a variety of computing devices, such as
laptops, desktops, workstations, tablet-PCs, smart phones, notebooks or portable computers,
tablet computers, mainframe computers, mobile computing devices, entertainment devices,
computing platforms, internet appliances, and similar systems. However, a person skilled in the art will comprehend that the embodiments of the present subject matter are not limited to any particular computing system, architecture, or application device, as they may be adapted to take advantage of new computing systems and platforms as they become available.
[0011] As indicated previously, a software development lifecycle, among other stages, includes testing. Software testing ensures that the work product undergoing testing operates in an expected manner. Furthermore, testing also determines and ensures that the various functional requirements of the software application have been incorporated during development of the software application under consideration.
[0012] As would also be appreciated by a person skilled in the art, the efficiency of the testing phase is also dependent on the skill and competencies of a tester. Testers, as part of the testing phase, determine one or more defects or errors that are present or likely to occur when the code being developed is executed. As would also be understood, an error may arise due to the code executing in an unexpected manner, or producing results which fail to conform to one or more requirements specified by the end user. Such errors or defects, when detected, can be addressed and corrected appropriately.
[0013] It therefore becomes desirable that proficient and competent testers are identified and deployed for testing within the software development lifecycle. The testers can be assessed to determine the level of competencies that they possess. Conventional systems and mechanisms exist that can assess and evaluate the competencies of testers. Such systems involve providing the testers with one or more problem statements that may include one or more defects. The problem statements may include the code which is to be tested. In some cases, the code may also be accompanied by one or more requirement specifications to which the code under consideration should conform.
[0014] The defects arising out of the code can then be detected by the various testers. The manner in which the defects are detected or determined may depend on the skill and the competencies of the respective tester, and is beyond the scope of the present subject matter. The defects, once identified by the various testers, can be analyzed by conventional systems to determine the validity of the defects submitted by the testers. The conventional systems typically analyze the defects by comparing the defects submitted by the testers with one or more previously defined solutions. However, such an assessment does not allow for a dynamic evaluation and may be restricted to comparison with solutions which have been previously defined. Furthermore, such conventional testing systems may not be extensible to allow testing of code for different platforms.
[0015] Systems and methods for automated assessment of testing skills of software
testers are described. In one implementation, a plurality of problem statements is defined. Once
the plurality of problem statements are defined, one or more rules are defined which are
eventually utilized for assessing the previously defined problem statements. As would be understood, each of the defined rules is associated with the relevant problem statement.
[0016] Once the problem statements have been defined and the rules for assessing the
same have been associated, one or more problem statements are provided to one or more software testers. The problem statements can include program code that is to be tested. The program code to be tested can be in the form of either source code or binary code which can be executed. In one implementation, a problem statement can be further associated with a requirement specification. A requirement specification may, amongst other
things, specify one or more functional requirements that may specify behavior of the program
code for valid and invalid inputs, along with defining the valid and invalid input criteria.
[0017] As would be evident, the testers may determine one or more defects that are
inherent in the problem statements. In one implementation, the one or more defects may be functional defects present in the program code. The testers may detect such defects based on their competencies or proficiency in reviewing and testing program code. The manners in which the defects are determined or detected by the testers are, however, beyond the scope of the present subject matter.
[0018] In one implementation, the defects that are detected by the testers can be uploaded onto a central location. In such a case, the defects can be submitted as test cases that are utilized
for testing the program code under consideration. Once uploaded, the test cases can be evaluated
to ascertain whether the test cases conform to one or more specific requirements, for example,
syntactical requirements. Once the test cases are ascertained to be syntactically correct, a further
determination can be made to determine whether the submitted test cases are valid or not.
[0019] Once the test case has been validated, it is further assessed to determine the types of defects that it addresses. Based on the type of the defect that the test case addresses, the test case can be classified. On classification, the severity of the identified defects can be determined. Based on the type and the severity of the defects, the testers who submitted the corresponding test cases are assessed. In one implementation, the tester can be allotted a score based on the assessment. Furthermore, each of the type and the severity can be represented by one or more parameters.
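By way of a non-limiting illustration, the evaluation flow of paragraphs [0018] and [0019] (syntax check, validity check, classification by type, severity weighting, then scoring) can be sketched in Python as follows; every helper and rule below is an assumed stand-in, not the claimed implementation:

```python
# Illustrative sketch of the [0018]-[0019] flow with stubbed rules.
SEVERITY_WEIGHT = {"boundary value": 2, "array index out of bound": 3}  # assumed weights

def is_syntactically_correct(tc: str) -> bool:
    return "," in tc                       # stub: test case as comma-separated values

def is_valid(tc: str) -> bool:
    return True                            # stub: would execute tc against the program code

def classify(tc: str) -> str:
    return "boundary value"                # stub: would apply the predefined rules

def assess(tc: str) -> int:
    if not is_syntactically_correct(tc) or not is_valid(tc):
        return 0                           # rejected before scoring
    return SEVERITY_WEIGHT.get(classify(tc), 1)

print(sum(assess(tc) for tc in ["0,100", "5,5"]))  # the tester's aggregate score
```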
[0020] Similarly, a score for each of the testers being assessed is determined, based on
which the testers are ranked. Based on the ranking, one or more testers can be identified for further projects, or in case where the assessment was the basis of a competition, the identified testers can be suitably awarded.
[0021] As will be gathered, systems and methods for automated assessment of testing skills of software testers reduce the manual efforts that are required for purposes of testing. Furthermore, the automated assessment system is based on one or more pre-defined rules and processes which can be defined by an individual, for example, an evaluator. This allows such systems implementing automated assessment of testing skills of software testers to be extensible.

Furthermore, such systems and methods can be utilized for evaluating one or more probable solutions for addressing a particular defect in the program code that is to be tested. Furthermore, the rules can be implemented for a specific time frame for a better understanding of the defects that are inherent within the program code to be tested. Still further, the extensible assessment systems and method can also prescribe rules for evaluating the program code specific to different programming platforms.
[0022] These and other advantages of the present subject matter would be described in
greater detail in conjunction with the following figures. While aspects of described systems and methods for automated assessment of software testing skills can be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system(s).
[0023] Fig. 1 illustrates a network environment 100 implementing a tester assessment system 102, hereinafter referred to as system 102, configured for automated assessment of testing skills of software testers, according to an embodiment of the present subject matter. In one implementation, the network environment 100 can be a public network environment, including thousands of personal computers, laptops, various servers, such as blade servers, and other computing devices. In another implementation, the network environment 100 can be a private network environment with a limited number of personal computers, servers, laptops, and other computing devices.
[0024] The system 102 may be communicatively connected to a plurality of user devices
104-1, 104-2,...104-N, collectively referred to as the user devices 104 and individually referred to as a user device 104, through a network 106. The system 102 and the user devices 104 may be implemented as any of a variety of conventional computing devices, including, servers, a desktop personal computer, a notebook or portable computer, a workstation, a mainframe computer, a mobile computing device, and a laptop. Further, in one implementation, the system 102 may itself be a distributed or centralized network system in which different computing devices may host one or more of the hardware or software components of the system 102. In another implementation, the various components of the system 102 may be implemented as a part of the same computing device.
[0025] The network 106 may be a wireless network, a wired network, or a combination
thereof. The network 106 can also be an individual network or a collection of many such

individual networks, interconnected with each other and functioning as a single large network,
for example, the Internet or an Intranet. The network 106 can be implemented as one of the
different types of networks, such as intranet, local area network (LAN), wide area network
(WAN), the internet, and such. The network 106 may either be a dedicated network or a shared
network, which represents an association of the different types of networks that use a variety of
protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control
Protocol/Internet Protocol (TCP/IP), to communicate with each other. Further, the network 106 may include network devices, such as network switches, hubs, and routers, for providing a link between the system 102 and the user devices 104. The network devices within the network 106 may interact with the system 102 and the user devices 104 through the communication links.
[0026] Furthermore, the system 102 can be connected to the user devices 104 through the
network 106. In one implementation, the user devices 104 may include computing devices. Examples of the computing devices include, but are not limited to, personal computers, desktop computers, smart phones, PDAs, and laptops. In one implementation, a plurality of software testers may use the user devices 104 to communicate with the system 102 for an assessment of their software testing skills. In one example, the user devices 104 can be provided with a User Interface (UI), via which the software tester can access test code and submit defects. Further, communication links between the user devices 104 and the system 102 may be enabled through a desired form of connection, for example, via dial-up modem connection, cable link, digital subscriber line (DSL), wireless or satellite link, or any other suitable form of communication.
[0027] In one implementation, the system 102 further includes a dynamic evaluation
module 108. The dynamic evaluation module 108 assesses testers based on the defects identified by one or more testers. The testers can be assessed by the dynamic evaluation module 108 based on one or more problem statements and associated rules for assessing such problem statements. In one implementation, such problem statements and associated rules can be defined. The rules may include rules for checking syntax and file format of the defects. The rules may further include rules for determining validity of the defects, type of the defects, and severity of the defects.
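One possible, purely illustrative way to hold the rule families named in paragraph [0027] (syntax and file format, validity, defect type, and severity) alongside each problem statement is sketched below; the dataclass layout and field names are assumptions:

```python
# Assumed container for per-problem-statement rules; not the patented structure.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProblemStatement:
    code: str                                                        # source or binary under test
    syntax_rules: list[Callable[[str], bool]] = field(default_factory=list)
    validity_rules: list[Callable[[str], bool]] = field(default_factory=list)
    type_rules: dict[str, Callable[[str], bool]] = field(default_factory=dict)
    severity_rules: dict[str, int] = field(default_factory=dict)     # type -> weight

# New rule families can be registered per problem statement, which is what
# makes the assessment extensible across problem statements and platforms.
ps = ProblemStatement(code="divide.bin",
                      syntax_rules=[lambda tc: "," in tc],
                      severity_rules={"boundary value": 2})
print(all(rule("0,1") for rule in ps.syntax_rules))
```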
[0028] Subsequently, one or more testers can be assessed. The assessment of the testers can be initiated by providing the testers with a test program code, which is to be tested. The test program code can be provided to each of the testers through the user devices 104. In one implementation, the test program code can be in the form of a source code or a binary code. In yet another implementation, the test program code can be further associated with a requirement specification to which the test program code under consideration should conform. As also indicated previously, the requirement specification indicates various functional requirements of the test program code.
[0029] Once received by the testers, the test program code is analyzed by testers for
determination of defects within the test program code. The testers can determine the defects within the test program code. In one implementation, the defects are functional defects in the test program code. In one implementation, the defects in the test program code can be determined based on the requirements specification. The defects within the test program code can be determined by the testers based on their skill, competencies, prior experience, area of expertise, etc. The manners in which the defects are determined by the testers are beyond the scope of the present subject matter.
[0030] Once the defects have been determined, the defects are uploaded onto the system 102 by the user devices 104. In one implementation, the defects can be listed in a predefined format, for example, an ASCII text file. In one implementation, the defects can be identified as one or more test cases. The test cases can be considered to be one or more inputs or conditions that can be utilized for determining whether the test program code, when executed, would provide unexpected results that do not conform to the functional requirement specifications of the test program code.
[0031] The test cases, once determined, can be further uploaded onto the system 102. Once the test cases are uploaded, the dynamic evaluation module 108 may determine the validity of each of the test cases that have been uploaded by one or more testers through the user devices 104. The dynamic evaluation module 108 may determine the validity of the test cases based on one or more predefined rules. For example, a test case can be considered to be valid if it cannot be successfully executed against the test program code, or if it brings out a defect present in the test program code. For valid test cases, the output of the test program code does not conform to the functional requirement specifications. In such a case, only valid test cases are considered towards the assessment of the tester by whom the test cases under consideration were uploaded.

[0032] Once the valid test cases are determined, the dynamic evaluation module 108 may categorize the test cases based on the type of defects that have been addressed. In one implementation, the dynamic evaluation module 108 may determine the type of the defect based on one or more predefined rules. The type of the defects may vary for a white box problem statement and a black box problem statement. In one implementation, the types of defects may include syntax errors in the test program code, unreachable code, array index out of bound, stack overflow, out of memory, non-recoverable crash of the program code, boundary value check, infinite loops, and data type range check. In one implementation, the dynamic evaluation module 108 also determines the number of test cases within each type. Once the defects have been categorized, the dynamic evaluation module 108 further analyzes the test cases to determine the severity of the defects. In such a case, the dynamic evaluation module 108 may associate a weight parameter indicative of the severity of the defect that has been detected. The higher the weight parameter, the more severe and critical is the defect which has been detected.
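A minimal sketch of the bookkeeping described in paragraph [0032], counting valid test cases per defect type and attaching a severity weight; the type names and weight values are illustrative assumptions:

```python
# Illustrative per-type counting and severity weighting; weights are assumed.
from collections import Counter

WEIGHT = {"array index out of bound": 3, "boundary value": 2, "infinite loop": 4}

classified = ["boundary value", "boundary value", "infinite loop"]  # one entry per valid test case
per_type = Counter(classified)            # number of test cases within each type
print(per_type)                           # Counter({'boundary value': 2, 'infinite loop': 1})

# The higher the weight parameter, the more severe the detected defect.
weighted = {t: n * WEIGHT.get(t, 1) for t, n in per_type.items()}
print(weighted)
```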
[0033] Once, the type and the severity of the defects have been determined, the testers
are ranked based on the number of the defects of each type and the severity of the defects which were addressed through the testers' respective test cases. Based on the ranking of the testers, further actions can be taken. For example, testers who are ranked better in comparison with others can be considered to be more proficient in testing, and hence can be further shortlisted for future projects.
[0034] In one implementation, the system 102 may also be configured to determine the uniqueness of the test cases submitted by each of the testers. For example, the system 102 can compare the test cases, on receiving them from a tester, with a repository of test cases previously received from other testers. In case of a match, the testers submitting such test cases can either be allotted a lower score, or the test cases can be associated with a lower weightage. In one implementation, the system 102 can also determine whether the test cases submitted by one or more of the testers have been plagiarized, or are based on previously submitted test cases. In yet another implementation, the system 102 can further determine whether the submitted test cases adhere to a predefined format. In case the test cases are not within the prescribed format, the test cases would not be considered for the assessment of the respective tester. The working of the system 102 is further explained in detail in conjunction with Fig. 2.

[0035] Fig. 2 illustrates the system 102, in accordance with an implementation of the
present subject matter. In said implementation, the system 102 includes one or more processor(s) 202, interface(s) 204, and a memory 206 coupled to the processor(s) 202. The processor 202 can be a single processing unit or a number of units, all of which could also include multiple computing units. The processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions and data stored in the memory 206.
[0036] The interface(s) 204 may include a variety of software and hardware interfaces,
for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, and a printer. Further, the interfaces 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data repositories in the communication network (not shown in the figure). The interfaces 204 may facilitate multiple communications within a wide variety of protocols and networks, such as a network, including wired networks, for example, LAN and cable, and wireless networks, for example, WLAN, cellular, and satellite. The interfaces 204 may further include one or more ports for connecting the system 102 to a number of computing devices.
[0037] The memory 206 may include any computer-readable medium known in the art
including, for example, volatile memory, such as static random access memory (SRAM) and
dynamic random access memory (DRAM), and/or non-volatile memory, such as read only
memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and
magnetic tapes. The memory 206 further includes modules 208 and data 210.
[0038] The modules 208 include routines, programs, objects, components, data
structures, etc., which perform particular tasks or implement particular abstract data types. The modules 208 further include the dynamic evaluation module 108, a ranking module 212, and other module(s) 214. The other module(s) 214 may include other modules that supplement applications and functions of the system 102, either alone or in conjunction with the dynamic evaluation module 108 and the ranking module 212.
[0039] The data 210, amongst other things, serves as a repository for storing data
processed, received, and generated by one or more of the modules 208. The data 210 includes

test program code 216, test cases 218, syntax rules 220, dynamic evaluation rules 222, and other data 224. The other data 224 may include data generated as a result of the execution of one or more modules in the other module(s) 214.
[0040] In operation, the system 102 may be accessed by the software tester through the user device 104. In one implementation, access to the system 102 through one or more of the user devices 104 can be based on user credentials. Once the testers have been authenticated and access to the system 102 has been provided, the dynamic evaluation module 108 provides the authenticated testers with one or more test program codes. In one implementation, the test program code can be obtained from the test program code 216. In one implementation, the test program code 216 provided to the testers may be based on the user credentials. For example, based on the user credentials, the dynamic evaluation module 108 may determine that the tester being assessed is a beginner or has only recently acquired certain expertise. On determining the same, the dynamic evaluation module 108 may provide test program code 216 of a lower difficulty level. Similarly, in cases where the testers include experienced professionals, the difficulty level of the test program code 216 provided may be high. In another implementation, the user credentials can also be used to determine the software platform of the test program code 216 to be tested. For example, based on the user credentials, the dynamic evaluation module 108 may identify that the tester is proficient in Java-based programming, and so on. As will be appreciated by a person skilled in the art, the test program code 216 being provided to each of the user devices 104 may be based on other parameters, without deviating from the scope of the present subject matter.
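The credential-driven selection of test program code 216 described in paragraph [0040] might, purely for illustration, look like the following; the profile fields and catalogue entries are assumptions:

```python
# Assumed catalogue of test program code keyed by difficulty and platform.
CATALOGUE = [
    {"code": "intro_calc.c",   "difficulty": "beginner",    "platform": "C"},
    {"code": "OrderBook.java", "difficulty": "experienced", "platform": "Java"},
]

def select_code(profile: dict) -> list[dict]:
    # Match the tester's credential-derived level and platform proficiency.
    return [entry for entry in CATALOGUE
            if entry["difficulty"] == profile["level"]
            and entry["platform"] == profile["platform"]]

print(select_code({"level": "experienced", "platform": "Java"}))
```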
[0041] As also indicated previously, the test program code 216 may include one or more problem statements which are associated with a plurality of rules. In one implementation, the rules prescribe the manner in which the problem statement with which they are associated is to be evaluated. The problem statement may further include program code in the form of a source code or binary code. As will be known to persons skilled in the art, testing the former is referred to as white-box testing and testing the latter is known as black-box testing. White-box testing typically involves detecting test cases based on the internal structure of the program code, which can be determined from the source code. On the other hand, black-box testing typically involves detecting test cases based on the output of the program code being tested. The output of the program code being tested, when executed, provides an indication as to whether the program code has one or more defects.
[0042] In either case, the program code to be tested in the test program code 216 further
includes one or more requirement specifications. The requirement specifications prescribe various other requirements which the test program code 216 should conform to. These may include requirements that are directly related to the functionality of the test program code being tested.
[0043] The test program code 216 is provided to one or more testers through the respective user devices 104. The testers on the respective user devices 104 analyze the problem statement and the accompanying requirement specifications. Based on the analysis, the testers may determine one or more defects in the test program code 216 provided. The defects are detected by the testers using their respective knowledge and experience in relation to testing of different program codes. Once the defects are determined, the testers may list the defects, say in an ASCII text file. As also indicated previously, the defects in the test program code 216 provided to the testers can be indicated by way of one or more test cases. Each of such test cases may identify various conditions or scenarios which may arise due to the defects in the test program code 216.
[0044] Once all the defects or the test cases are determined by the testers, the same are
communicated to the system 102. For example, the testers can upload the test cases onto the
system 102, through the user devices 104. Once the test cases are uploaded onto the system 102,
the dynamic evaluation module 108 analyzes the uploaded test cases for each of the testers. In
one implementation, the test cases can be uploaded and stored as test cases 218.
[0045] In one implementation, the dynamic evaluation module 108 may check for syntax-based errors within the test cases 218 based upon the syntax rules 220. Furthermore, the dynamic evaluation module 108 may also determine whether the file format in which the test cases 218 have been submitted is an appropriate format. The syntax rules 220 may be provided to users along with the problem statements. The syntax rules 220 may vary for each problem statement. In one implementation, the syntax rules may include rules such as the following: submissions by the testers are to be made in the form of an ASCII text file (*.txt); the filename should be _.txt; each line in the file should contain a single test case; each test case should be specified in a prescribed format of comma-separated values; and the last line of the file should contain a single character '0' to mark the end of the file.
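A hedged sketch of the format checks listed in paragraph [0045]; the function name and exact rule set are illustrative, and the filename pattern is left open as it is in the specification:

```python
# Illustrative validator for the assumed submission format of paragraph [0045].
from pathlib import Path

def check_submission_format(path: str) -> list[str]:
    """Return format violations; an empty list means the submission conforms."""
    p = Path(path)
    if p.suffix.lower() != ".txt":
        return ["submission must be an ASCII text file (*.txt)"]
    try:
        lines = p.read_text(encoding="ascii").splitlines()
    except UnicodeDecodeError:
        return ["file contents must be plain ASCII"]
    errors: list[str] = []
    if not lines or lines[-1].strip() != "0":
        errors.append("last line must be a single character '0' marking end of file")
    for i, line in enumerate(lines[:-1], start=1):
        if "," not in line:
            errors.append(f"line {i}: each test case must be comma-separated values")
    return errors
```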
[0046] In yet another implementation, the dynamic evaluation module 108 may also determine whether the test cases 218 being submitted by one tester match the test cases 218 submitted by another tester, to detect plagiarism. Such cases, when detected, may indicate that the test cases 218 submitted by the respective tester were not originally detected by them and may be plagiarized. For example, the system 102 can compare the test cases 218 with a repository of previously received test cases (not shown in Fig. 2). In such a case, the testers submitting such test cases can either be allotted a lower score, or the test cases can be associated with a lower weightage. In one implementation, duplicate test cases 218 can be determined by evaluating a message-digest or hash of the test cases 218. A hash value of previously received test cases may be evaluated and stored in the other data 224. A hash value of the test cases 218 may be evaluated by the dynamic evaluation module 108. The hash values of the previously received test cases may be retrieved and compared with the hash value of the test cases 218 to determine whether the test cases 218 are similar to one or more of the previously submitted test cases. In case the test cases 218 are similar to one or more of the previously submitted test cases, duplicate entries have been submitted by one or more testers.
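The hash-based duplicate check of paragraph [0046] could be sketched as follows; the normalization step and the in-memory repository are assumptions for illustration:

```python
# Illustrative message-digest comparison; sha256 and normalization are assumed choices.
import hashlib

def digest(test_case: str) -> str:
    # Normalize whitespace and case so trivially reworded copies still collide.
    canonical = ",".join(field.strip().lower() for field in test_case.split(","))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_duplicate(test_case: str, seen_digests: set[str]) -> bool:
    d = digest(test_case)
    if d in seen_digests:
        return True           # matches a previously submitted test case
    seen_digests.add(d)       # record for comparison against later submissions
    return False

seen: set[str] = set()
print(is_duplicate("0, 100", seen), is_duplicate("0,100", seen))  # False True
```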
[0047] Once the syntactical and format adherence are ascertained, the dynamic evaluation module 108 further analyzes the test cases 218 based on the dynamic evaluation rules 222. In one implementation, the dynamic evaluation rules 222 may include different types of rules for analyzing the test cases 218. For example, the dynamic evaluation rules 222 may include rules based on which the validity of the submitted test cases 218 can be determined. Further, the dynamic evaluation rules 222 may also include rules for determining the type of the defects. Furthermore, the dynamic evaluation rules 222 may include rules for determining the severity of the defects. As would be appreciated by a person skilled in the art, the types of rules within the dynamic evaluation rules 222 can be predefined, which allows the system 102 to be extensible.
[0048] Returning to the evaluation, the dynamic evaluation module 108 analyzes all the test cases 218 submitted by the different testers, based on the dynamic evaluation rules 222. In one implementation, the dynamic evaluation module 108 determines whether the test cases 218 that have been submitted are valid. For example, the dynamic evaluation module 108, based on the dynamic evaluation rules 222, may determine whether the defect that has been submitted is a valid defect afflicting the test program code 216 provided to the testers. In case the test cases 218 submitted by the testers are not valid, the same can be rejected or not considered for further assessment. In one implementation, the dynamic evaluation rules 222 may require the output of a test program code 216 to lie in a certain range for inputs of a certain range. Further, in one implementation, the dynamic evaluation rules 222 may include a plurality of rules to determine the validity of the submitted test cases, to ensure that the dynamic evaluation module 108 determines the validity of test cases based on the rules applicable at that point of time.
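One kind of dynamic evaluation rule mentioned in paragraph [0048], requiring outputs to lie in a certain range for inputs of a certain range, might be encoded as below; the rule shape and runner are assumptions, not the patented format:

```python
# Illustrative range rule: for inputs in [in_lo, in_hi], the output must lie
# in [out_lo, out_hi]. A test case is valid when an observed output violates
# an applicable rule, i.e. it exposes a real defect.
from typing import Callable

def make_range_rule(in_lo: float, in_hi: float,
                    out_lo: float, out_hi: float) -> Callable[[float, float], bool]:
    def conforms(test_input: float, observed_output: float) -> bool:
        if not (in_lo <= test_input <= in_hi):
            return True                    # rule does not apply to this input
        return out_lo <= observed_output <= out_hi
    return conforms

rule = make_range_rule(0, 100, 0, 10)
print(rule(50, 42))   # False -> nonconforming output, so the test case is valid
```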
[0049] Once the test cases 218 have been ascertained to be valid, the dynamic evaluation
module 108 may determine the type of the defects that are specified by the test cases 218. In one implementation, the type of the defects can be determined based on the dynamic evaluation rules 222. The dynamic evaluation module 108, subsequently, may categorize the defects based on their type. In another implementation, the number of different types of defects that have been reported in the test cases 218 can be determined.
[0050] Subsequently, for each of the test cases 218, the severity of the defects can be
determined. In one implementation, the dynamic evaluation module 108 determines the severity of the defects based on the dynamic evaluation rules 222. Examples of such defects include, but are not limited to defects arising out of boundary-value, type mismatch, array out of bound, and so on. For each of the defects that are found to be severe, the dynamic evaluation module 108 may associate a weight parameter indicating the severity. As also indicated previously, the higher the weight parameter, the more severe can the defect be considered. It should be noted that the weight parameter is only one implementation for indicating the severity of the defects. Other mechanisms for indicating the severity would also be included within the scope of the present subject matter.
[0051] Once the validity, type, and severity of the defects are determined, the same can be used to assess the respective testers. For example, testers who have detected a greater number of defects, and defects which are more severe, can be considered more proficient in terms of testing when compared to other testers. In one implementation, the dynamic evaluation module 108 may generate a matrix indicating the defect-type versus the defect-severity for the test cases 218 submitted by each of the testers. Based on the matrix, the testers can be assessed.

The matrix may vary based on the problem statement provided to the tester. An exemplary defect-type versus the defect severity matrix is indicated below as Table 1:
Table 1

Severity / Defect Type | Boundary Value | Array Index Out Of Bound | Syntax Errors | Unreachable Code | Stack Overflow | Out Of Memory | Non-recoverable Crash | Infinite Loops
Critical               | 2              | 9                        | 5             | 1                | 4              | 3             | 1                     | 2
High                   | 4              | 7                        | 4             | 2                | 3              | 2             | 2                     | 4
Medium                 | 8              | 8                        | 7             | 1                | 5              | 1             | 4                     | 3
Low                    | 1              | 9                        | 2             | 3                | 2              | 4             | 2                     | 2
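Scoring from such a matrix could, for example, weight each severity row and sum the cells; the weights below are illustrative assumptions, since the specification only says that a higher weight parameter indicates a more severe defect:

```python
# Illustrative scoring over a defect-type versus defect-severity matrix.
SEVERITY_WEIGHT = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}  # assumed weights

# counts[severity][defect_type] = number of valid test cases, as in Table 1
counts = {
    "Critical": {"Boundary Value": 2, "Array Index Out Of Bound": 9},
    "High":     {"Boundary Value": 4, "Syntax Errors": 4},
}

def score(matrix: dict[str, dict[str, int]]) -> int:
    # Weighted sum over all cells: severity weight times count per defect type.
    return sum(SEVERITY_WEIGHT[sev] * n
               for sev, row in matrix.items()
               for n in row.values())

print(score(counts))  # 4*(2+9) + 3*(4+4) = 68
```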
[0052] In one implementation, the ranking module 212 may compare the defect-type versus defect-severity matrices of all testers to determine the respective score to be allotted to each of the testers. Once the scores have been allotted, the ranking module 212 ranks the testers in decreasing order of score. In case of a tie in the score, the time taken for submission may be used as a tie-breaker. Furthermore, based on the scores allotted by the ranking module 212, a further determination can be made as to which of the testers can be selected for a prospective project. The present subject matter has been described with respect to assessing the testers; the assessment could, however, be for different purposes or end-objectives, and the same would also be within the scope of the present subject matter.
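A minimal sketch of the ranking step of paragraph [0052], with the submission-time tie-breaker; the field names are assumptions:

```python
# Illustrative ranking: highest score first, earlier submission breaks ties.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TesterResult:
    name: str
    score: int
    submitted_at: datetime

def rank(results: list[TesterResult]) -> list[TesterResult]:
    return sorted(results, key=lambda r: (-r.score, r.submitted_at))

results = [
    TesterResult("A", 42, datetime(2012, 4, 25, 10, 30)),
    TesterResult("B", 42, datetime(2012, 4, 25, 10, 5)),
]
print([r.name for r in rank(results)])  # ['B', 'A'] -- B submitted earlier
```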
[0053] Fig. 3 illustrates a method 300 for automated assessment of testing skills of software testers, according to one embodiment of the present subject matter. The method may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices. The method may be implemented in a variety of computing systems. For example, the method 300, described herein, may be implemented by using the system 102, as described above.
[0054] The order in which the method 300 is described is not intended to be construed as
a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternative method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the methods can be implemented in any suitable hardware, software, firmware, or combination thereof. It will be understood that even though the method 300 is described with reference to the system 102, the description may be extended to other systems as well.
[0055] At block 302, one or more defects associated with test program code are received
from one or more testers. For example, the system 102 receives the defects or the test cases from
one or more testers through the user devices 104. The defects or the test cases can be received by
the dynamic evaluation module 108 and stored in the test cases 218. In one implementation, the
test cases 218 can be received from the testers in the form of an ASCII text file.
[0056] At block 304, the syntax and format of the one or more defects are checked. For example, the dynamic evaluation module 108 may check the syntax and the format of the received defects based on one or more predefined rules. The file format of the received one or more defects may be checked. In one implementation, if the file format is an ASCII text file, the file format is accepted. Otherwise, a message may be displayed on the display of the system 102 that the file format is incorrect. In one implementation, the dynamic evaluation module 108 may further determine whether the defects submitted by one or more testers are similar to previously submitted defects, by comparing the submitted defects with a repository of previously submitted defects.
[0057] At block 306, the validity of the one or more defects may be evaluated. For example, the dynamic evaluation module 108 determines the validity of the submitted test cases 218 based on one or more dynamic evaluation rules 222. As also indicated previously, the dynamic evaluation rules 222 may also include rules to determine the type and the severity of the defects. In one implementation, the dynamic evaluation module 108 may determine whether the defects are ones which afflict the test program code 216 provided to the testers for assessment. If a defect is found to be valid, the corresponding test cases 218 can be considered for further assessment.

[0058] At block 308, a type and a severity of one or more valid defects are determined.
For example, the dynamic evaluation module 108 based on the dynamic evaluation rules 222, determines the type and the severity of the defects that are reported as part of the test cases 218. Depending on the type, the defects can be further classified. Furthermore, the dynamic evaluation module 108 additionally may count the number of defects that have been reported for each type. In one implementation, the dynamic evaluation module 108 may further associate a parameter indicating the severity of the defects detected. In yet another implementation, the dynamic evaluation module 108 may generate a defect-type versus defect-severity matrix, which indicates the various types of defects, their respective severity, along with the number of defects that have been determined, for each of the testers.
[0059] At block 310, a score for each of the software testers being assessed is determined. For example, the ranking module 212, based on the number, the type, and the severity of the defects that were detected, scores each of the respective testers. In one implementation, the ranking module 212 may determine the score of the software testers based on the defect-type versus defect-severity matrix. Subsequently, the ranking module 212 ranks the testers based on the allotted score. In case of a tie, the time taken for submission may be used as a tie-breaker. The top scorers, i.e., the more proficient testers, can be shortlisted and further selected for prospective projects.
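Purely as an illustration, blocks 302 to 310 of method 300 can be chained as below; every per-block check is a stub standing in for the rules described above:

```python
# End-to-end sketch of method 300 with stubbed steps, so the control flow of
# Fig. 3 is visible in one place. Every helper expression is a stand-in.
def method_300(submissions: dict[str, list[str]]) -> list[tuple[str, int]]:
    scores = {}
    for tester, test_cases in submissions.items():           # block 302: receive test cases
        ok = [tc for tc in test_cases if "," in tc]          # block 304: syntax/format (stub)
        valid = [tc for tc in ok if tc.strip() != ""]        # block 306: validity (stub)
        weights = [2 if tc.startswith("0,") else 1           # block 308: type/severity (stub)
                   for tc in valid]
        scores[tester] = sum(weights)                        # block 310: score
    return sorted(scores.items(), key=lambda kv: -kv[1])     # rank by decreasing score

print(method_300({"tester1": ["0,10", "5,5"], "tester2": ["bad line"]}))
```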
[0060] Although implementations of software tester assessment system have been
described in language specific to structural features and/or methods, it is to be understood that the present subject matter is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as implementations for software tester assessment system.

I/We claim:
1. A computer-implemented method for assessing testing skills of software testers, the
method comprising:
receiving at least one test case from a software tester, wherein the at least one test case identifies at least one defect in a program code;
determining one or more parameters indicative of a type and a severity of the at least one defect based on predefined rules; and
calculating a score of the software tester based at least on the determined type and severity.
2. The method as claimed in claim 1 further comprising checking syntax of the at least one received test case.
3. The method as claimed in claim 1, wherein the score is calculated based at least in part on the type of the at least one defect and the severity of the at least one defect.
4. The method as claimed in claim 1, further comprising ascertaining whether the received at least one test case is similar to one or more of previously submitted test cases.
5. The method as claimed in claim 4, wherein the ascertaining further comprises:
evaluating a hash value of the at least one received test case;
retrieving hash values of one or more of the previously submitted test cases; and
comparing the hash value of the at least one received test case with the hash values of the one or more of the previously submitted test cases to determine whether the at least one received test case is similar to the one or more of previously submitted test cases.
6. The method as claimed in claim 1, further comprising ranking the software testers based at least on the score.

7. A software testers assessment system (102) comprising:
a processor (202); and
a memory (206) coupled to the processor (202), the memory (206) comprising:
a dynamic evaluation module (108) configured to determine, based on predefined
rules, one of a type and a severity of at least one defect, wherein the defect is submitted
by a software tester in a test program code, and wherein the at least one defect is
indicated in a test case; and
a ranking module (212) configured to calculate a score of the software tester
based at least on one of the type and the severity of the at least one defect.
8. The software testers assessment system (102) as claimed in claim 7, wherein the dynamic evaluation module (108) is further configured to check for syntactical conformity of the test case, based on a plurality of syntax rules.
9. The software testers assessment system (102) as claimed in claim 7, wherein the dynamic evaluation module (108) is further configured to evaluate validity of the at least one defect based at least on dynamic evaluation rules.
10. The software testers assessment system (102) as claimed in claim 7, wherein the predefined rules further includes rules for determining one of the validity and the severity of the at least one defect.
11. The software testers assessment system (102) as claimed in claim 7, wherein the ranking module (212) generates a matrix listing type of the at least one defect, severity of the at least one defect and number of defects submitted by the software tester.
12. The software testers assessment system (102) as claimed in claim 11, wherein the ranking module (212) calculates the score based on the generated matrix.

13. A computer-readable medium having embodied thereon computer executable instructions for executing a method comprising:
receiving at least one test case from a software tester, wherein the at least one test case identifies at least one defect in a program code;
determining one or more parameters indicative of a type and a severity of the at least one defect based on predefined rules; and
calculating a score of the software tester based at least on the determined type and severity.

Documents


Application Documents

# Name Date
1 1316-MUM-2012-RELEVANT DOCUMENTS [26-09-2023(online)].pdf 2023-09-26
2 1316-MUM-2012-US(14)-HearingNotice-(HearingDate-20-04-2021).pdf 2021-10-03
3 1316-MUM-2012-IntimationOfGrant08-07-2021.pdf 2021-07-08
4 1316-MUM-2012-PatentCertificate08-07-2021.pdf 2021-07-08
5 1316-MUM-2012-Written submissions and relevant documents [05-05-2021(online)].pdf 2021-05-05
6 1316-MUM-2012-Correspondence to notify the Controller [24-03-2021(online)].pdf 2021-03-24
7 1316-MUM-2012-Correspondence to notify the Controller [13-08-2020(online)].pdf 2020-08-13
8 1316-MUM-2012-US(14)-HearingNotice-(HearingDate-20-08-2020).pdf 2020-07-20
9 1316-MUM-2012-US(14)-HearingNotice-(HearingDate-16-04-2020).pdf 2020-03-16
10 1316-MUM-2012-ABSTRACT [28-09-2018(online)].pdf 2018-09-28
11 1316-MUM-2012-CLAIMS [28-09-2018(online)].pdf 2018-09-28
12 1316-MUM-2012-FER_SER_REPLY [28-09-2018(online)].pdf 2018-09-28
13 1316-MUM-2012-OTHERS [28-09-2018(online)].pdf 2018-09-28
14 1316-MUM-2012-ABSTRACT.pdf 2018-08-11
15 1316-MUM-2012-CLAIMS.pdf 2018-08-11
16 1316-MUM-2012-CORRESPONDENCE(1-5-2012).pdf 2018-08-11
17 1316-MUM-2012-CORRESPONDENCE(18-6-2012).pdf 2018-08-11
18 1316-MUM-2012-CORRESPONDENCE(25-5-2012).pdf 2018-08-11
19 1316-MUM-2012-CORRESPONDENCE.pdf 2018-08-11
20 1316-MUM-2012-DESCRIPTION(COMPLETE).pdf 2018-08-11
21 1316-MUM-2012-DRAWING.pdf 2018-08-11
22 1316-MUM-2012-FER.pdf 2018-08-11
23 1316-MUM-2012-FORM 1(25-5-2012).pdf 2018-08-11
24 1316-MUM-2012-FORM 1.pdf 2018-08-11
25 1316-MUM-2012-FORM 18(1-5-2012).pdf 2018-08-11
26 1316-MUM-2012-FORM 2(TITLE PAGE).pdf 2018-08-11
27 1316-MUM-2012-FORM 2.pdf 2018-08-11
28 1316-MUM-2012-FORM 3.pdf 2018-08-11
29 1316-MUM-2012-POWER OF ATTORNEY(18-6-2012).pdf 2018-08-11
30 ABSTRACT1.jpg 2018-08-11

Search Strategy

1 search_16-04-2018.pdf

ERegister / Renewals

3rd: 09 Jul 2021 (25/04/2014 to 25/04/2015)
4th: 09 Jul 2021 (25/04/2015 to 25/04/2016)
5th: 09 Jul 2021 (25/04/2016 to 25/04/2017)
6th: 09 Jul 2021 (25/04/2017 to 25/04/2018)
7th: 09 Jul 2021 (25/04/2018 to 25/04/2019)
8th: 09 Jul 2021 (25/04/2019 to 25/04/2020)
9th: 09 Jul 2021 (25/04/2020 to 25/04/2021)
10th: 09 Jul 2021 (25/04/2021 to 25/04/2022)
11th: 23 Mar 2022 (25/04/2022 to 25/04/2023)
12th: 12 Apr 2023 (25/04/2023 to 25/04/2024)
13th: 10 Apr 2024 (25/04/2024 to 25/04/2025)
14th: 15 Apr 2025 (25/04/2025 to 25/04/2026)