
Method And System For Analyzing Object Code During Application Runtime To Automatically Generate Test Cases

Abstract: A method and system for analyzing an object code of at least one application to generate test cases for the at least one application. The method comprises receiving at least one code execution path traversed by the object code of the at least one application during application runtime and at least one interactive element associated with the at least one code execution path. Further, analyzing the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path. Further, generating at least one test case for the at least one test scenario. Further, analyzing the generated at least one test case to predict at least one unexplored code execution path. Furthermore, creating at least one test case for the predicted at least one unexplored code execution path.


Patent Information

Application #: 201721046911
Filing Date: 27 December 2017
Publication Number: 27/2019
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status: Granted
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-07-13
Renewal Date:

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. KESARY, Shailaja
Tata Consultancy Services Limited, PLOT NO.1, JUBILEE GARDEN, KONDAPUR, CYBERBAD, Hyderabad - 500081, Telangana, India
2. SARASWATHY, Rajesh Viswambaran
Tata Consultancy Services Limited, UNIT - 1, BLOCK A, PODIUM FLOOR, 1ST TO 5TH FLOORS ELECTRONICS TECHNOLOGY PARK SEZ - 1,TECHNOPARK CAMPUS, Trivandrum - 695581, Kerala, India
3. GEORGE, Sephin
Tata Consultancy Services Limited, 6th Floor, A BLOCK, TCS CENTRE SEZ UNIT, Infopark, Kusumagiri P.O. , Kakkanad, Kochi - 682 030, Kerala, India

Specification

Claims:

1. A processor implemented method for analyzing an object code of at least one application to generate test cases for the at least one application, the method comprising:
receiving, from a plurality of monitoring probes, at least one code execution path traversed by the object code of the at least one application during application runtime of the at least one application and at least one interactive element associated with the at least one code execution path, wherein the received at least one code execution path differs from a prior-known set of code execution paths traversed by the object code of the at least one application;
analyzing the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path, wherein the at least one test scenario is submitted for acceptance;
generating at least one test case for the at least one test scenario based on the test data set, upon acceptance of the at least one test scenario;
analyzing the generated at least one test case to predict at least one unexplored code execution path; and
creating at least one test case for the predicted at least one unexplored code execution path utilizing test data extracted by backtracking the predicted at least one unexplored code execution path, wherein backtracking identifies data required to invoke the predicted at least one unexplored code execution path.

2. The method as claimed in claim 1, wherein the step of analyzing the generated at least one test case to predict the at least one unexplored code execution path, based on a prediction model, comprises:
applying deep learning on a plurality of transaction function entry points of the generated at least one test case to identify a plurality of probable transaction function entry points of the generated at least one test case that are susceptible to traversing unexplored code execution paths;
performing a static control flow analysis of a source code associated with the plurality of probable transaction function entry points of the at least one test case to identify a plurality of candidate code execution paths traversed by the at least one test case;
identifying a plurality of unexplored code execution paths from the identified plurality of candidate code execution paths; and
predicting the at least one unexplored code execution path from the identified plurality of unexplored code execution paths, wherein the predicted at least one unexplored code execution path has maximum possibility of being traversed, wherein the maximum possibility is determined based on a predefined probability threshold.

3. The processor implemented method as claimed in claim 1, further comprising analyzing previous patterns associated with acceptance of created test cases and adaptively learning from rejected test scenarios among the created test cases.

4. The processor implemented method as claimed in claim 1, wherein the plurality of monitoring probes are configured to capture and log the at least one code execution path and the corresponding plurality of interactive elements into log files.

5. A system (100) for analyzing an object code of at least one application to generate test cases for the at least one application, the system (100) comprising a plurality of monitoring probes (106-1 through 106-n) and an application quality control system (102):
the application quality control system (102) comprises:
a code module (212) configured to:
receive, from the plurality of monitoring probes, at least one code execution path traversed by the object code of the at least one application during application runtime of the at least one application and at least one interactive element associated with the at least one code execution path, wherein the received at least one code execution path differs from a prior-known set of code execution paths traversed by the object code of the at least one application;
analyze the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path, wherein the at least one test scenario is submitted for acceptance;
a test case generation module (214) configured to:
generate at least one test case for the at least one test scenario based on the test data set, upon acceptance of the at least one test scenario;
a prediction module (216) configured to:
analyze the generated at least one test case to predict at least one unexplored code execution path; and
the test case generation module configured to:
create at least one test case for the predicted at least one unexplored code execution path using test data set extracted by backtracking the predicted at least one unexplored code execution path, wherein backtracking identifies data required to invoke the predicted at least one unexplored code execution path.

6. The system (100) as claimed in claim 5, wherein the prediction module (216) is configured to analyze the generated at least one test case to predict the at least one unexplored code execution path by:
applying deep learning on a plurality of transaction function entry points of the at least one test case to identify a plurality of probable transaction function entry points of the at least one test case that are susceptible to traversing unexplored code execution paths;
performing a static control flow analysis of a source code associated with the plurality of probable transaction function entry points of the at least one test case to identify a plurality of candidate code execution paths traversed by the at least one test case;
identifying at least one unexplored code execution path from the identified plurality of candidate code execution paths; and
predicting the at least one unexplored code execution path from the identified plurality of unexplored code execution paths, wherein the predicted at least one unexplored code execution path has maximum possibility of being traversed, wherein the maximum possibility is determined based on a predefined probability threshold.

7. The system (100) as claimed in claim 5, wherein the prediction module (216) is configured to analyze previous patterns associated with acceptance of created test cases and adaptively learn from rejected test scenarios among the created test cases.

8. The system (100) as claimed in claim 5, wherein the application quality control system (102) is configured to utilize the plurality of monitoring probes (106-1 through 106-n) to capture and log the at least one code execution path and the corresponding plurality of interactive elements into log files.
Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
METHOD AND SYSTEM FOR ANALYZING OBJECT CODE DURING APPLICATION RUNTIME TO AUTOMATICALLY GENERATE TEST CASES

Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed
TECHNICAL FIELD
[0001] The disclosure herein generally relates to the field of application testing and, more particularly, to analyzing applications during runtime to automatically generate test cases for the applications.

BACKGROUND
[0002] Computerized systems are replacing traditional systems, and a multitude of applications is being developed for these computerized systems to perform tasks. The application(s) deployed on the computerized systems carry out intended functions or tasks. The applications need to be tested through a plurality of test cases for a plurality of test scenarios to ensure that runtime application behavior is as expected. Conventionally, an application is tested with the test cases prior to deployment of the application, wherein multiple test cases are created by testers for all possible test scenarios so as to reach an optimal number of test cases with the least redundancy. However, it may not be practically feasible to create or anticipate all possible execution flows or paths followed by the object code of the application during runtime. Moreover, during application runtime in a near-production environment, such as a User Acceptance Testing (UAT) environment or an actual production environment, the application may encounter different and unexpected user behavior(s). Such unexpected user behavior(s) may lead to execution of unexpected scenarios, exercising new execution paths that were never tested earlier or covered in any prior developed test cases. This usually leads to incidents being reported in the production environment, since these execution paths can expose functional defects or cause errors or exceptions in the application. Thus, the conventional approach to test cases has its own limitations.
[0003] In some existing methods, the unexpected user behavior during application runtime is captured manually: a user who identifies a defect or unexpected behavior of the application may report the same to the application development team, which may then develop a test case for it. However, this approach requires manual intervention and focuses on User Interface elements for generation of the test cases. Some existing methods generate automated tests, but these are limited to creation of unit tests. Such unit tests are generated for a set of submitted files and focus on monitoring user behavior to which an application under test provides an unexpected response; the existing method then creates unit test cases for that user behavior.
[0004] Some existing methods generate automated test cases but are limited to performing static analysis of an application under test to generate these test cases. Thus, they are limited to pre-production test case generation techniques.

SUMMARY
[0005] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for analyzing an object code of at least one application to generate test cases for the at least one application is provided. The method comprises receiving, from a plurality of monitoring probes, at least one code execution path traversed by the object code of the at least one application during application runtime of the at least one application and at least one interactive element associated with the at least one code execution path, wherein the received at least one code execution path differs from a prior-known set of code execution paths traversed by the object code of the at least one application. Further, the method comprises analyzing the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path, wherein the at least one test scenario is submitted for acceptance. Further, the method comprises generating at least one test case for the at least one test scenario based on the test data set, upon acceptance of the at least one test scenario. Further, the method comprises predicting at least one unexplored code execution path by analyzing the generated at least one test case. Furthermore, the method comprises creating at least one test case for the predicted at least one unexplored code execution path by backtracking the predicted at least one unexplored code execution path to identify data required to invoke the predicted at least one unexplored code execution path.
[0006] In another aspect, a system for analyzing an object code of at least one application to generate test cases for the at least one application is provided. The system comprises a code module configured to receive, from a plurality of monitoring probes, at least one code execution path traversed by the object code of the at least one application during application runtime and at least one interactive element associated with the at least one code execution path, wherein the received at least one code execution path differs from a prior-known set of code execution paths traversed by the object code of the at least one application. Further, the code module is configured to analyze the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path, wherein the at least one test scenario is submitted for acceptance. Further, the system comprises a test case generation module configured to generate at least one test case for the at least one test scenario based on the test data set, upon acceptance of the at least one test scenario. Further, the system comprises a prediction module configured to predict at least one unexplored code execution path by analyzing the generated at least one test case. Furthermore, the test case generation module is configured to create at least one test case for the predicted at least one unexplored code execution path by backtracking the predicted at least one unexplored code execution path to identify data required to invoke the predicted at least one unexplored code execution path.
[0007] In yet another aspect, a non-transitory computer readable medium is provided. The non-transitory computer-readable medium stores instructions which, when executed by a hardware processor, cause the hardware processor to perform acts comprising analyzing an object code of at least one application to generate test cases for the at least one application. The acts comprise receiving, from a plurality of monitoring probes, at least one code execution path traversed by the object code of the at least one application during application runtime of the at least one application and at least one interactive element associated with the at least one code execution path, wherein the received at least one code execution path differs from a prior-known set of code execution paths traversed by the object code of the at least one application. Further, the acts comprise analyzing the at least one code execution path and the at least one interactive element to generate at least one test scenario and test data set for the at least one code execution path, wherein the at least one test scenario is submitted for acceptance. Further, the acts comprise generating at least one test case for the at least one test scenario based on the test data set, upon acceptance of the at least one test scenario. Further, the acts comprise predicting at least one unexplored code execution path by analyzing the generated at least one test case. Furthermore, the acts comprise creating at least one test case for the predicted at least one unexplored code execution path by backtracking the predicted at least one unexplored code execution path to identify data required to invoke the predicted at least one unexplored code execution path.
[0008] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[0010] FIG. 1 illustrates an exemplary system implementing an application quality control system for analyzing an object code of an application during application runtime to automatically generate test cases for the application, according to some embodiments of the present disclosure;
[0011] FIG. 2 illustrates a functional block diagram of the application quality control system of FIG. 1, according to some embodiments of the present disclosure; and
[0012] FIG. 3 is a flow diagram illustrating a method for analyzing an object code of the application during the application runtime to automatically generate the test cases for the application, in accordance with some embodiments of the present disclosure.
[0013] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION OF EMBODIMENTS
[0014] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[0015] The embodiments herein provide a method and system for analyzing an object code of one or more applications during application runtime to automatically generate test cases for the one or more applications. The one or more applications may be running in a near-production environment, such as a User Acceptance Testing (UAT) environment, or in an actual production environment, such as a client application environment. Further, the method can also be utilized in a testing environment of an application, to monitor the application during the testing phase.
[0016] Each application among the one or more applications is monitored during application runtime using a plurality of monitoring probes. The monitoring probes can be configured to detect whether any code execution path traversed by the object code of the application being monitored exhibits an unexpected behavior. The unexpected behavior refers to a code execution path traversed by the object code that differs from a prior-known set of code execution paths traversed by the object code of the application being monitored. Upon detection of an unexpected behavior, the method includes capturing and logging the data of interest corresponding to the application into log files. The data of interest refers to the detected at least one code execution path and a corresponding plurality of interactive elements. The interactive elements can include a User Interface (UI) element or a system element (for example, data generated during machine-to-machine communication) associated with or identified for the detected code execution path. Further, for each application being monitored, the data of interest is analyzed to automatically generate one or more test scenarios for the one or more code execution paths of the application that exhibited unexpected behavior. The test scenarios so generated are submitted for acceptance. For example, the acceptance may refer to review and go-ahead from a Subject Matter Expert (SME). Upon acceptance, a test case is generated for each of the test scenarios, and all the generated test cases are stored in a test case library. Each newly generated test case for each corresponding application is further analyzed using deep learning techniques to predict one or more unexplored code execution paths. Furthermore, the method includes generating test cases for each of the predicted one or more unexplored code execution paths. Upon acceptance, these generated test cases for the predicted unexplored code execution paths can then be added to the test case library. The method further provides an adaptive learning capability, wherein the adaptive learning restricts re-generation of rejected test scenarios.
[0017] The generated test cases updated in the test case library may later be used to improve the quality of the application, for example, during a version upgrade. Herein, quality refers to reduced defects in the application and an enhanced capability of the application to perform as expected when implemented, effectively enhancing the user experience while working with the application.
[0018] Inclusion of the SME review and acceptance before a test case is added to the test case library improves the reliability of the generated test cases. The SME intervention reduces unwanted or noisy data, as the SME review rejects false-positive test cases; such identified false positives are further learnt from and ignored while generating future test cases.
[0019] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[0020] FIG. 1 illustrates an exemplary system 100 implementing an application quality control system 102 for analyzing the object code of one or more applications during application runtime to automatically generate test cases for the one or more applications, according to some embodiments of the present disclosure. The system 100 depicts an application environment 112, wherein applications, for example an application 110a and an application 110b, running or executed in the application environment 112, are being monitored. The application environment 112 may be the UAT environment, the production environment, the test environment or the like, where the applications 110a and 110b are executed. A plurality of monitoring probes are created and inserted in the applications 110a-b to collect the data of interest corresponding to the applications 110a and 110b respectively. One or more data sources among data sources 106-1 through 106-n in the system 100 may be the one or more monitoring probes that monitor and collect the data of interest for one or more applications during application runtime. The monitoring probes can be configured to monitor and detect any unexpected behavior of each of the applications 110a and 110b respectively. The unexpected behavior refers to a code execution path traversed by the object code of the applications 110a and 110b respectively that differs from the prior-known set of code execution paths traversed by the object code of each application. The monitoring probes 106-1 through 106-n capture the object code of the applications 110a and 110b generated during runtime and the corresponding one or more interactive elements. The monitoring probes 106-1 through 106-n are configured to collect the data of interest, namely the object code and interactive elements, for each transaction of every application being monitored. Typically, a transaction is demarcated by the SMEs 116 as a particular landing page in the web application and a corresponding back-end method call that may be invoked from that front-end page. For example, the transaction can be one set of activities that starts with visiting a particular Uniform Resource Locator (URL) or webpage, then performing a certain activity on the webpage, and executing a final click or submit action to finish the logical activity on that webpage. Such a transaction, performed by a user working on the application during application runtime, in turn executes a function/process in the back-end code. The function processes these actions and finishes the logical work on that page(s). Such a function may be referred to as a transaction function. A minimal sketch of how such a captured transaction might be represented is given below.
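For concreteness, the following Java sketch shows one way a captured transaction might be represented: the entry URL, the UI event breadcrumb, the backend code execution path, and the invoked transaction function. The class and field names are illustrative assumptions and do not appear in the specification.

import java.util.List;

// Hypothetical representation of one captured transaction: the UI event
// breadcrumb (page visit, activities, final submit) plus the backend code
// execution path recorded by a monitoring probe. All names are illustrative.
public final class CapturedTransaction {
    private final String entryUrl;                 // landing page that starts the transaction
    private final List<String> uiEventBreadcrumb;  // ordered UI activities on that page
    private final List<String> codeExecutionPath;  // fully qualified methods traversed
    private final String transactionFunction;      // backend function invoked by the final submit

    public CapturedTransaction(String entryUrl, List<String> uiEventBreadcrumb,
                               List<String> codeExecutionPath, String transactionFunction) {
        this.entryUrl = entryUrl;
        this.uiEventBreadcrumb = List.copyOf(uiEventBreadcrumb);
        this.codeExecutionPath = List.copyOf(codeExecutionPath);
        this.transactionFunction = transactionFunction;
    }

    public String entryUrl() { return entryUrl; }
    public List<String> uiEventBreadcrumb() { return uiEventBreadcrumb; }
    public List<String> codeExecutionPath() { return codeExecutionPath; }
    public String transactionFunction() { return transactionFunction; }
}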
[0021] Further, the data of interest collected for each transaction is analyzed. For example, analysis of the data of interest includes analyzing the captured object code and one or more interactive elements for each transaction and then generating a hash code for each such transaction. Further, the monitoring probes 106-1 through 106-n respectively are configured to compare the generated hash code with the hash codes of already captured transaction paths from previous runs or executions of the applications (say application 110a or 110b or both respectively) in the same or any other environment, such as the production environment, the UAT environment or the testing environment. The generated hash code represents the current execution path traversed by the application, say 110a, while the hash codes of already captured transaction paths represent the prior-known set of code execution paths for the applications (say application 110a or 110b or both respectively). Thus, the hash code comparison enables detecting any unexpected behavior, wherein the object code of the application (say 110a) being monitored traverses an unknown path, as sketched below.
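The hash-code comparison could be realized along the following lines, reusing the hypothetical CapturedTransaction type from the previous sketch. The specification does not fix a hash function; SHA-256 over the breadcrumb and code path is an assumption.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Set;

// Illustrative detector: hash the UI event breadcrumb plus the captured code
// execution path and check the result against hashes recorded in previous runs.
public final class PathHashDetector {

    // Returns true when the transaction traverses a previously unseen path,
    // i.e. exhibits the "unexpected behavior" that triggers test generation.
    public static boolean isUnexpected(CapturedTransaction tx, Set<String> knownHashes)
            throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(tx.entryUrl().getBytes(StandardCharsets.UTF_8));
        for (String event : tx.uiEventBreadcrumb()) {
            digest.update(event.getBytes(StandardCharsets.UTF_8));
        }
        for (String method : tx.codeExecutionPath()) {
            digest.update(method.getBytes(StandardCharsets.UTF_8));
        }
        String hash = HexFormat.of().formatHex(digest.digest());
        return !knownHashes.contains(hash);
    }
}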
[0022] Unlike existing methods, where such monitoring probes are often used to generate static code coverage information for unit test cases, the application quality control system 102 can be configured to utilize the monitoring probes to collect data of interest that can be utilized for automatically generating functional test cases. The number of applications that can be monitored and analyzed for automatically generating test cases can be scaled up by using additional monitoring probes. For different types of applications, such as a JAVA based web application, an ASP.NET application or the like, correspondingly different types of monitoring probes can be written. The monitoring probes are technology dependent, so based on the technologies used by the client application, the monitoring probes developed to capture the code can differ. Thus, the system 100 is not limited to any specific type of application for automatically generating test cases; rather, it enables creating customized monitoring probes based on the type of application to be monitored.
[0023] The data sources 106-1 to 106-n, alternatively referred to as the plurality of monitoring probes, may be connected to a computing device 104 through a network 108. The computing device 104 can include the application quality control system 102. Since the volume of data generated by the plurality of monitoring probes, which can monitor a plurality of applications, is very high, big data solutions such as Hadoop can be used as the computing device 104 to store and process the data of interest to generate and predict test cases for the application automatically. In an example embodiment, the application quality control system 102 may be embodied in the computing device 104 (not shown). In another example embodiment, the application quality control system 102 may be in direct communication with the computing device 104, as depicted in FIG. 1. Upon detection of the unexpected behavior, one or more monitoring probes (monitoring probes 106-1 through 106-n) can be configured to log the captured data of interest into a log environment 114, creating log files for each transaction. The log environment refers to a location where the application-generated logs can be stored. It may be in a different Storage Area Network (SAN) than the core application deployment file system. The monitoring probes 106-1 through 106-n do not store the captured data; rather, the data of interest, as and when captured, is written to the log files using an application logging mechanism to allow fast and concurrent writes, as in the sketch below. Further, regulatory compliance expected from a monitoring tool perspective is followed, protecting Personally Identifiable Information while monitoring the applications (for example, applications 110a and 110b respectively).
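A minimal sketch of the probe-side logging, under the assumption that each transaction is appended to a log file as a single delimited record; the file layout and record format are illustrative only, and the encryption and PII protection mentioned above are omitted.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical writer: the probe holds no captured data in memory; each
// record is appended to a per-application log file as soon as it is captured.
public final class ProbeLogWriter {
    private final Path logFile;

    public ProbeLogWriter(Path logEnvironmentDir, String applicationId) throws IOException {
        Files.createDirectories(logEnvironmentDir);
        this.logFile = logEnvironmentDir.resolve(applicationId + "-transactions.log");
    }

    // Appends one captured transaction as a single line; CREATE + APPEND lets
    // the probe write as soon as data is captured, without pre-creating the file.
    public void append(CapturedTransaction tx) throws IOException {
        String record = tx.entryUrl() + "|" + String.join(">", tx.uiEventBreadcrumb())
                + "|" + String.join(">", tx.codeExecutionPath()) + System.lineSeparator();
        Files.write(logFile, record.getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}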
[0024] All the logged data of interest in the log environment 114, which is collected from the plurality of monitoring probes (106-1 through 106-n), can then be forwarded to the application quality control system 102. The application quality control system 102 can be configured to receive the logged data of interest for each application being monitored (herein, application 110a and application 110b). The logged data provides the detected one or more code execution paths (those exhibiting unexpected behavior) and the corresponding one or more interactive elements from the corresponding log files. The interactive elements may include the UI activity or the machine-to-machine communication associated with the detected code execution path. Furthermore, the application quality control system 102 can be configured to automatically generate one or more test scenarios for the one or more code execution paths of each application received as log files. The code execution paths, so received, are those that exhibit unexpected behavior. The generated test scenarios can be submitted for acceptance; for example, the acceptance may refer to review and go-ahead from the SMEs 116. Upon acceptance (or post acceptance), the application quality control system 102 can be configured to generate the test case for each of the test scenarios. The test cases can be maintained in the test case library and stored in a repository 118. Furthermore, the application quality control system 102 can be configured to analyze each newly generated test case using deep learning techniques to predict one or more unexplored code execution paths. For each of the predicted one or more unexplored code execution paths, the application quality control system 102 can be configured to create one or more test cases utilizing test data extracted by backtracking the predicted one or more unexplored code execution paths. The backtracking identifies data required to invoke the predicted at least one unexplored code execution path. Upon acceptance, for example from the SMEs 116, these created test cases are then added to the test case library (repository 118). The application quality control system 102 is further configured with an adaptive learning capability, wherein the adaptive learning restricts re-generation of rejected test scenarios. The method enables adaptive learning from the rejected test scenarios through the analysis of previous patterns associated with acceptance of the created test cases by the SMEs. The adaptive learning capability improves the efficiency of prediction.
[0025] In an embodiment, the network 108 connecting the computing device 104 and the monitoring probes 106-1 through 106-n may be a wireless or a wired network, or a combination thereof. In an example, the network 108 can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 108 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 108 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices. The network devices within the network 108 may interact with the application quality control system 102 through communication links.
[0026] In an embodiment, the computing device 104, which implements the application quality control system 102, can be a big data server such as a Hadoop server or the like. The application quality control system 102 may also be implemented in a workstation, a mainframe computer, a general purpose server, or a network server. Further, the repository 118, coupled to the application quality control system 102, may also store other data, such as the intermediate data generated during automatic generation of the test cases.
[0027] In an alternate embodiment, the data repository 118 may be internal to the application quality control system 102. The components or modules and functionalities of application quality control system 102 are described further in detail with reference to FIG. 2 and FIG. 3.
[0028] FIG. 2 is a functional block diagram of the application quality control system 102, according to some embodiments of the present disclosure. The application quality control system 102 includes or is otherwise in communication with one or more hardware processors such as a processor 202, at least one memory such as a memory 204, and an I/O interface 206. The processor 202 (hardware processor), the memory 204, and the I/O interface(s) 206 may be coupled by a system bus such as a system bus 208 or a similar mechanism. The memory 204 may further include modules 210.
[0029] In an embodiment, the modules 210 include a code module 212, a test case generation module 214, a prediction module 216 and other modules (not shown) for implementing functions of the application quality control system 102. In an embodiment, the code module 212, the test case generation module 214, and the prediction module 216 may be integrated into a single module. In an embodiment, the modules 210 can be an Integrated Circuit (IC), external to the memory 204 (not shown), implemented using a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The names of the functional blocks in the modules 210 referred to herein are used for explanation and are not a limitation. Further, the memory 204 can also include a repository 218.
[0030] The hardware processor 202 may be implemented as one or more multicore processors, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the hardware processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 and communicate with the modules 210, external to the memory 204, for triggering execution of functions to be implemented by the modules 210. In an embodiment, the code module 212 is configured to receive the data of interest through log files from the log environment 114. The data of interest includes one or more code execution paths traversed by the object code of each application (110a and 110b) being monitored during application runtime and the associated interactive elements. As explained, the received code execution paths differ from the prior-known code execution paths.
[0031] Further, the code module 212 can be configured to analyze the one or more code execution paths and the associated interactive elements for each application (application 110a and 110b) from the received log files to generate one or more test scenarios for the code execution paths and associated test data sets. The generated one or more test scenarios are submitted for acceptance, such as review by the SMEs 116. The generated test data set provides information about the at least one code execution path in a format required for test case generation. Upon acceptance, the test case generation module 214 can be configured to generate the one or more test cases for the corresponding one or more test scenarios based on the test data set identified for the corresponding test scenario. The generated one or more test cases are updated in the test case library, maintained in the repository 118. Known test automation frameworks such as Selenium or the like may be used by the test case generation module 214; an illustrative generated test is sketched below.
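A hedged sketch of what a generated Selenium test might look like for one accepted scenario. The URL, locators, input value and success check are placeholders; in the described system they would come from the captured interactive elements and the extracted test data set.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative generated test: replay the captured UI breadcrumb and assert
// the expected outcome. All concrete values below are hypothetical.
public class GeneratedScenarioTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/landing-page");       // transaction entry URL (placeholder)
            driver.findElement(By.id("amount")).sendKeys("1500");  // captured UI input (placeholder)
            driver.findElement(By.id("submit")).click();           // final submit invoking the transaction function
            // Assertion derived from the expected scenario outcome (placeholder check).
            if (!driver.getPageSource().contains("Success")) {
                throw new AssertionError("Scenario did not complete as expected");
            }
        } finally {
            driver.quit();
        }
    }
}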
[0032] Further, the prediction module 216 can be configured to analyze the generated one or more test cases to predict one or more unexplored code execution paths. The analysis performed by the prediction module 216 comprises applying deep learning on a plurality of transaction function entry points of each of the generated test cases to identify a plurality of probable transaction function entry points of the test case that are susceptible to traversing unexplored code execution paths.
[0033] Further, the analysis performed by the prediction module 216 comprises performing a static control flow analysis of the source code associated with the plurality of probable transaction function entry points of the one or more test cases. The static control flow analysis enables identifying a plurality of candidate code execution paths traversed by the one or more test cases. A candidate code execution path refers to a possible code execution path. The analysis performed by the prediction module 216 further comprises identifying at least one unexplored code execution path from the identified plurality of candidate code execution paths, and thereafter predicting one or more unexplored code execution paths from the identified plurality of unexplored code execution paths such that the predicted at least one unexplored code execution path has the maximum possibility of being traversed. The maximum possibility may be determined based on a predefined probability threshold or any similar possibility measure, as in the selection sketch below.
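The final selection step might look like the following sketch. The probability scores for the candidate unexplored paths are assumed to come from the deep learning model applied to the transaction function entry points; a path is predicted only if it clears the predefined threshold, and the highest-scoring one wins. Path names and scores are hypothetical.

import java.util.Map;
import java.util.Optional;

// Illustrative selector: among unexplored candidate paths, pick the one with
// the maximum possibility of being traversed, subject to a probability threshold.
public final class UnexploredPathSelector {

    public static Optional<String> predict(Map<String, Double> pathProbabilities,
                                           double probabilityThreshold) {
        return pathProbabilities.entrySet().stream()
                .filter(e -> e.getValue() >= probabilityThreshold) // keep paths above the threshold
                .max(Map.Entry.comparingByValue())                 // maximum possibility of being traversed
                .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        Map<String, Double> candidates = Map.of(
                "PaymentService.refund/errorBranch", 0.82,
                "PaymentService.refund/timeoutBranch", 0.35);
        predict(candidates, 0.5).ifPresent(path ->
                System.out.println("Predicted unexplored path: " + path));
    }
}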
[0034] Further, the prediction module 216 can be configured to backtrack the predicted one or more unexecuted code paths to identify the data required to invoke each of the predicted unexecuted paths, as sketched below. Further, based on the output of the prediction module 216, the test case generation module 214 can be configured to create one or more test cases for each of the one or more predicted unexplored code execution paths using the corresponding interactive elements and the identified data. Further, post SME acceptance, the created one or more test cases for the predicted code paths are updated in the test case library for the respective application.
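A hedged sketch of the backtracking idea: the predicted path is walked from its deepest branch back toward the transaction entry point, collecting at each branch the input value needed to steer execution down that branch. The node and condition representations are assumptions; the specification does not prescribe them.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative backtracker over a predicted-but-unexecuted path.
public final class PathBacktracker {

    // A branch node on the predicted path: which input variable gates it and
    // what value that variable must take for this branch to be entered.
    public record BranchNode(String inputVariable, String requiredValue) {}

    // Walks the path (given in entry-to-deepest order) backward from the
    // deepest branch, recording the input data required to invoke the path.
    public static Map<String, String> extractRequiredData(List<BranchNode> predictedPath) {
        Deque<BranchNode> stack = new ArrayDeque<>();
        for (BranchNode node : predictedPath) {
            stack.push(node);               // deepest node ends up on top
        }
        Map<String, String> requiredData = new LinkedHashMap<>();
        while (!stack.isEmpty()) {
            BranchNode node = stack.pop();  // pops deepest-first: the backtracking direction
            // For duplicated variables, the deepest branch's constraint wins here;
            // a real implementation would solve the combined constraints instead.
            requiredData.putIfAbsent(node.inputVariable(), node.requiredValue());
        }
        return requiredData;
    }
}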
[0035] The prediction module 216 is further configured with an adaptive learning capability, wherein the adaptive learning restricts re-generation of rejected test scenarios. The prediction module 216 is configured to keep track of all the predicted scenarios that the SMEs 116 rejected, so that the pattern can be used to restrict future generation of similar test scenarios, as well as to use the accepted scenario patterns to predict with better confidence. Thus, as the SMEs 116 handle more interactions (acceptance and rejection of test cases), the prediction module 216 gradually learns and matures; a minimal sketch follows.
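A minimal sketch of this feedback tracking, assuming an exact-match scenario signature; a real system would learn richer patterns from the accepted and rejected scenarios rather than matching signatures literally.

import java.util.HashSet;
import java.util.Set;

// Illustrative tracker: record every SME decision and suppress regeneration
// of scenarios matching a rejected signature.
public final class ScenarioFeedbackTracker {
    private final Set<String> rejectedSignatures = new HashSet<>();
    // Accepted signatures are retained so their patterns can later be used
    // to predict with better confidence, as described above.
    private final Set<String> acceptedSignatures = new HashSet<>();

    public void recordDecision(String scenarioSignature, boolean accepted) {
        if (accepted) {
            acceptedSignatures.add(scenarioSignature);
        } else {
            rejectedSignatures.add(scenarioSignature);
        }
    }

    // Generation is restricted for scenarios matching a rejected signature.
    public boolean shouldGenerate(String scenarioSignature) {
        return !rejectedSignatures.contains(scenarioSignature);
    }
}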
[0036] The I/O interface 206 in the system 102 may include a variety of software and hardware interfaces, for example, a web interface and a graphical user interface for SME interaction. The interface(s) 206 may include interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a camera device, a printer and a display. The interface(s) 206 may enable the application quality control system 102 to communicate with other devices, such as the computing device 104, web servers and external databases (repository 118). The interface(s) 206 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the interface(s) 206 may include one or more ports for connecting a number of computing systems or devices with one another or to another server. The memory 204 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the modules 210 may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types. The modules 210 may include computer-readable instructions that supplement applications or functions performed by the application quality control system 102. The repository 218 may store data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 210.
[0037] Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0038] FIG. 3 is a flow diagram illustrating a method 300 for analyzing the object code of one or more applications to automatically generate test cases for the one or more applications, in accordance with some embodiments of the present disclosure. In an example scenario, one or more applications running in a client environment are monitored utilizing the plurality of monitoring probes to capture and log, into log files, the one or more code execution paths and the corresponding plurality of interactive elements for the code execution paths that exhibit unexpected behavior.
[0039] In an embodiment, at step 302, the method 300 includes allowing the code module 212 to receive the one or more code execution paths traversed by the object code of each application among the one or more applications being monitored during application runtime. The code module 212 can also be configured to receive one or more interactive elements (such as a UI element or system element) associated with the at least one code execution path. The received one or more code execution paths differ from the prior-known set of code execution paths traversed by the object code of the one or more applications. At step 304, the method 300 includes allowing the code module 212 to analyze the one or more code execution paths and the associated one or more interactive elements to generate one or more test scenarios and a test data set for the one or more code execution paths. The generated one or more test scenarios are submitted for acceptance; the review for the acceptance may be by the SMEs 116. At step 306, the method 300 allows the test case generation module 214 to generate at least one test case for the at least one test scenario based on the test data set, upon acceptance. The generated test data set provides information about the one or more code execution paths in a format required for test case generation. The generated one or more test cases are updated in the test case library, maintained in the repository 118. Known test automation frameworks such as Selenium or the like may be used by the test case generation module 214.
[0040] At step 308, the method 300 allows the prediction module 216 to analyze the generated one or more test cases to predict one or more unexplored code execution paths. The analysis performed by the prediction module 216 comprises applying deep learning on a plurality of transaction function entry points of each of the generated test cases to identify the plurality of probable transaction function entry points of the test case that are susceptible to traversing unexplored code execution paths. Further, the analysis performed by the prediction module 216 comprises performing the static control flow analysis of the source code associated with the plurality of probable transaction function entry points of the one or more test cases. The static control flow analysis enables identifying the plurality of candidate code execution paths traversed by the one or more test cases. The analysis performed by the prediction module 216 further comprises identifying at least one unexplored code execution path from the identified plurality of candidate code execution paths. Thereafter, the analysis comprises predicting one or more unexplored code execution paths from the identified plurality of unexplored code execution paths such that the predicted at least one unexplored code execution path has the maximum possibility of being traversed. The maximum possibility may be determined based on the predefined probability threshold or any similar possibility measure. Further, the analysis performed by the prediction module 216 includes backtracking the predicted one or more unexecuted code paths to identify the data required to invoke each of the predicted unexecuted paths. At step 310, the method 300 includes allowing the test case generation module to create one or more test cases for each of the one or more predicted unexplored code execution paths using the corresponding interactive elements and the identified data. Further, upon acceptance, the created one or more test cases for the predicted code paths are updated in the test case library for the respective application. The prediction module 216 is further configured with an adaptive learning capability, wherein the adaptive learning restricts re-generation of rejected test scenarios. The prediction module 216 is configured to keep track of all the predicted scenarios that the SMEs 116 rejected, so that the pattern can be used to restrict future generation of similar test scenarios, as well as to use the accepted scenario patterns to predict with better confidence. Thus, as the SMEs 116 handle more interactions, the prediction module 216 gradually learns and matures.
[0041] A typical use case for a JAVA web application is provided below. The Java web application may be deployed in the client environment, wherein a customized JVM probe (for example, monitoring probe 106-1) can be attached to the application server where the Java web application is deployed. The JVM probe can capture Java code execution for the application (also referred to as the target client application) during application runtime in the client environment. Further, the target client application web pages can be modified to include a UI event capture probe (monitoring probe). The JVM probe can write the captured execution code information in an encrypted format to a log file. The UI event capture probe can capture all UI activities performed by end users and send this information securely to the back-end server, where it is first stored in an encrypted format in a log file. The data from the log files can be normalized and streamed into a big data environment for storage and analysis. Analysis jobs on the big data server can find transactions within the already configured transaction boundaries and can compute a hash code for a UI event breadcrumb and the corresponding Java code captured for each such transaction. This hash can be compared against all previously recorded hash codes in the application quality control system 102. If the application quality control system 102 detects a new hash code, it indicates that a new code execution path has been discovered and is a candidate for automated test case generation. Such scenarios arise due to the different ways in which users use the application. Moreover, the new code execution path (unexpected code execution path) may be due to a system element, such as different variations of the data that the application is fed with. Using the UI events already captured for this transaction, a test scenario is created and presented to the SME for vetting. If the SME decides that it is a good test scenario, automated tests are created using a test automation framework such as Selenium, fed with the data captured by the UI event capture probe for that transaction. The test is stored in the automation test case repository of choice that the enterprise is using. A sketch of the JVM probe attachment mechanism follows. The illustrated steps of method 300 are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development may change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation.
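The sketch below shows only the standard JVM attachment mechanism (a java.lang.instrument agent) that such a JVM probe would typically use; the actual probe logic described above is not reproduced, and the package prefix is a hypothetical example. Packaging the agent requires a Premain-Class entry in the jar manifest.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Illustrative agent bootstrap: registers a transformer that sees every
// application class as it is loaded. A real probe would rewrite the bytecode
// here (e.g. with a bytecode library) to log each method entry.
public final class RuntimeProbeAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Hypothetical application package; only its classes are observed.
                if (className != null && className.startsWith("com/example/app/")) {
                    System.out.println("Probe observed class load: " + className);
                }
                return null; // null = class left unmodified in this sketch
            }
        });
    }
}

// Launch (illustrative): java -javaagent:probe-agent.jar -jar target-app.jar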
[0042] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0043] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0044] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0045] These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0046] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0047] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

Documents

Application Documents

# Name Date
1 201721046911-STATEMENT OF UNDERTAKING (FORM 3) [27-12-2017(online)].pdf 2017-12-27
2 201721046911-REQUEST FOR EXAMINATION (FORM-18) [27-12-2017(online)].pdf 2017-12-27
3 201721046911-FORM 18 [27-12-2017(online)].pdf 2017-12-27
4 201721046911-FORM 1 [27-12-2017(online)].pdf 2017-12-27
5 201721046911-FIGURE OF ABSTRACT [27-12-2017(online)].jpg 2017-12-27
6 201721046911-DRAWINGS [27-12-2017(online)].pdf 2017-12-27
7 201721046911-COMPLETE SPECIFICATION [27-12-2017(online)].pdf 2017-12-27
8 201721046911-FORM-26 [23-01-2018(online)].pdf 2018-01-23
9 201721046911-Proof of Right (MANDATORY) [24-01-2018(online)].pdf 2018-01-24
10 abstract1.jpg 2018-08-11
11 201721046911-ORIGINAL UNDER RULE 6 (1A)-310118.pdf 2018-08-11
12 201721046911-OTHERS [22-04-2021(online)].pdf 2021-04-22
13 201721046911-FER_SER_REPLY [22-04-2021(online)].pdf 2021-04-22
14 201721046911-COMPLETE SPECIFICATION [22-04-2021(online)].pdf 2021-04-22
15 201721046911-CLAIMS [22-04-2021(online)].pdf 2021-04-22
16 201721046911-FER.pdf 2021-10-18
17 201721046911-US(14)-HearingNotice-(HearingDate-01-02-2023).pdf 2023-01-04
18 201721046911-Correspondence to notify the Controller [27-01-2023(online)].pdf 2023-01-27
19 201721046911-FORM-26 [31-01-2023(online)].pdf 2023-01-31
20 201721046911-FORM-26 [31-01-2023(online)]-1.pdf 2023-01-31
21 201721046911-Written submissions and relevant documents [14-02-2023(online)].pdf 2023-02-14
22 201721046911-PatentCertificate13-07-2023.pdf 2023-07-13
23 201721046911-IntimationOfGrant13-07-2023.pdf 2023-07-13

Search Strategy

1 2020-09-1013-43-13E_10-09-2020.pdf

ERegister / Renewals

3rd: 09 Oct 2023 (From 27/12/2019 - To 27/12/2020)
4th: 09 Oct 2023 (From 27/12/2020 - To 27/12/2021)
5th: 09 Oct 2023 (From 27/12/2021 - To 27/12/2022)
6th: 09 Oct 2023 (From 27/12/2022 - To 27/12/2023)
7th: 09 Oct 2023 (From 27/12/2023 - To 27/12/2024)
8th: 26 Dec 2024 (From 27/12/2024 - To 27/12/2025)