
Determining An Impact Of Code And Predicting Areas Of Failures Thereof

Abstract: Conventionally, in many complex programs and DevOps methodologies, the functional impacts of code are almost always underestimated due to the lack of time and the complexity of applications, so a significant portion of testing related failures is attributed to the code, leaving complex programs to depend on subject matter experts to identify and flag such impacts. Embodiments of the present disclosure provide a system and method that perform code assurance to assess code impact and predict failure areas by assessing and evaluating the code through established execution paths, bringing into perspective a) functional depths of impacts of execution paths, b) feedback from testing (test cases and defects) on stability and business impacts, and c) historical assessment of system behavior and impacts thereof, by employing code parsers, metrics extractors and machine-learning algorithms to perform historical data analysis to classify and predict possible areas of failures.


Patent Information

Application #
Filing Date
07 August 2020
Publication Number
06/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ip@legasis.in
Parent Application
Patent Number
Legal Status
Grant Date
2024-12-02
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. KESARY, Shailaja
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India
2. RAGUKUMAR, Preethi
Tata Consultancy Services Limited, Brigade Bhuwalka Icon, Whitefield Main Road, Pattandur Agrahara, Whitefield, Bangalore - 560067, Karnataka, India
3. TUMMALA, Amulya Venkata Sai
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India
4. SILIVERI, Tejaswi
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India

Specification

Claims:
1. A processor implemented method, comprising:
obtaining, via one or more hardware processors, a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths (202);
determining an established traceability across the set of inputs (204);
determining, using the determined established traceability across the set of inputs, via the one or more hardware processors, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements (206);
evaluating, via a trained classifier executed by the one or more hardware processors, stability of the one or more associated execution paths based on at least one of (i) the first set of characteristics associated with the one or more associated execution paths, (ii) the second set of characteristics associated with the one or more test cases, and (iii) the third set of characteristics associated with the one or more defects (208);
determining a fifth set of characteristics associated with (i) the one or more associated execution paths, and (ii) the one or more requirements, and evaluating, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof (210); and
predicting probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity and (c) the fourth set of characteristics associated with the one or more requirements (212).

2. The processor implemented method of claim 1, further comprising:
obtaining an input comprising (a) an updated code from a user, (b) the one or more requirements, (c) the predicted probability of failure of the code, and (d) one or more results of one or more unit tests being performed;
determining the first set, the second set, the third set, the fourth set and the fifth set of characteristics based on the input;
evaluating the stability using the first set, the second set and the third set of characteristics;
determining the functional complexity using the fifth set of characteristics; and
predicting probability of failure of the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements and (d) the one or more results of one or more unit tests being performed.

3. The processor implemented method of claim 1, wherein the first set of characteristics comprises at least one of (a) number of methods per execution path, (b) number of attributes per execution path, (c) an inheritance factor, (d) a coupling factor, (e) time complexity, (f) space complexity, (g) number of lines in the one or more associated execution paths, and (h) number of decisions and nested statements in the one or more associated execution paths.

4. The processor implemented method of claim 1, wherein the second set of characteristics comprises at least one of (a) number of failures, (b) a test case severity score, (c) number of steps in the one or more test cases, (d) number of execution paths, (e) number of impacted entities, and (f) number of impacted operations.

5. The processor implemented method of claim 1, wherein the third set of characteristics comprises at least one of (a) number of defects, (b) root cause and magnitude, (c) a defect severity, (d) a defect priority, (e) number of impacted requirements amongst the one or more requirements, and (f) number of impacted execution paths amongst the one or more associated execution paths.

6. The processor implemented method of claim 1, wherein the fourth set of characteristics associated with the one or more requirements comprises at least one of (a) number of impacted entities, (b) number of impacted operations, (c) severity impact score, and (d) number of previously impacted requirements.

7. The processor implemented method of claim 1, wherein the fifth set of characteristics comprises at least one of (a) a business criticality of one or more functions comprised in the code and the one or more associated execution paths, and (b) a requirements severity score.

8. The processor implemented method of claim 1, wherein the stability of each of the one or more associated execution paths is classified as at least one stability level type amongst two or more stability level types.

9. The processor implemented method of claim 1, wherein the functional complexity of each of the one or more associated execution paths is classified as at least one complexity level type amongst two or more complexity level types.

10. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths;
determine an established traceability across the set of inputs;
determine, using the determined established traceability across the set of inputs, via the one or more hardware processors, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements;
evaluate, via a trained classifier executed by the one or more hardware processors, stability of the one or more associated execution paths based on at least one of (i) the first set of characteristics associated with the one or more associated execution paths, (ii) the second set of characteristics associated with the one or more test cases, and (iii) the third set of characteristics associated with the one or more defects;
determine a fifth set of characteristics associated with (a) the one or more associated execution paths, and (b) the one or more requirements, and evaluate, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof; and
predict probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity and (c) the fourth set of characteristics associated with the one or more requirements.

11. The system of claim 10, wherein the one or more hardware processors are further configured by the instructions to:
obtain an input comprising (a) an updated code from a user, (b) the one or more requirements, (c) the predicted probability of failure of the code, and (d) one or more results of one or more unit tests being performed;
determine the first set, the second set, the third set, the fourth set and the fifth set of characteristics based on the input;
evaluate the stability using the first set, the second set and the third set of characteristics;
determine the functional complexity using the fifth set of characteristics; and
predict probability of failure of the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements, and (d) the one or more results of one or more unit tests being performed.

12. The system of claim 10, wherein the first set of characteristics comprises at least one of (a) number of methods per execution path, (b) number of attributes per execution path, (c) an inheritance factor, (d) a coupling factor, (e) time complexity, (f) space complexity, (g) number of lines in the one or more associated execution paths, and (h) number of decisions and nested statements in the one or more associated execution paths.

13. The system of claim 10, wherein the second set of characteristics comprises at least one of (a) number of failures, (b) a test case severity score, (c) number of steps in the test case, (d) number of execution paths, (e) number of impacted entities, and (f) number of impacted operations.

14. The system of claim 10, wherein the third set of characteristics comprises at least one of (a) number of defects, (b) root cause and magnitude, (c) a defect severity, (d) a defect priority, (e) number of impacted requirements amongst the one or more requirements, and (f) number of impacted execution paths amongst the one or more associated execution paths.

15. The system of claim 10, wherein the fourth set of characteristics associated with the one or more requirements comprises at least one of (a) number of impacted entities, (b) number of impacted operations, (c) a severity impact score, and (d) number of previously impacted requirements.

16. The system of claim 10, wherein the fifth set of characteristics comprises at least one of (a) a business criticality of one or more functions comprised in the code and the one or more associated execution paths, and (b) a requirements severity score.

17. The system of claim 10, wherein the stability of each of the one or more associated execution paths is classified as at least one stability level type amongst two or more stability level types.

18. The system of claim 10, wherein the functional complexity of each of the one or more associated execution paths is classified as at least one complexity level type amongst two or more complexity level types.
Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
DETERMINING AN IMPACT OF CODE AND PREDICTING AREAS OF FAILURES THEREOF

Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to programming language code assurance, and, more particularly, to determining an impact of code and predicting areas of failures thereof.

BACKGROUND
The main issue seen in many complex programs and DevOps methodologies is that, due to the lack of time and the complexity of applications, the functional impacts of the code are almost always underestimated, leading to a significant portion of the testing related failures being attributed to the code. There is a lack of a sophisticated, intelligent, and automated mechanism to perform these impact assessments, leaving such complex programs to depend on the competency of subject matter experts (SMEs) to identify and flag the same. Another perspective that programs overlook is the impact of testing and defects on the code and the resulting behavior of the same, which would identify stability issues beforehand so that necessary actions can be taken before moving ahead. These issues cause serious consequences and high costs. Some of the observations on the code and its impacts are: 1) 60% of identified defects in testing are coding related defects, wherein primary causes include a) low quality of code and b) poor impact assessment and the like; 2) as products/applications increase in size and complexity, there is a need to identify, predict and control the potential effects of requirement volatility on the architecture and design; 3) inadequate assessment of this volatility and the lack of focus are the two main issues plaguing assessment of impacts, leading to huge project costs and re-work efforts; 4) critical functionalities break in production, causing incidents due to inadequate assessment and testing and leading to business loss, and currently, with the available processes and methods, there are very few insights indicating the extent of impact on existing functionality and hence the associated pieces of vulnerable code; 5) adding to the code quality and impact assessment issues, the current testing done by developers is extremely modularized and confined to testing lines of code, in which the essence of testing the entire functionality of the application or product itself is lost; and 6) these gaps result in a huge number of defects in the subsequent phases, resulting in higher costs and lower quality.

SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for determining an impact of code and predicting areas of failures thereof. The method comprises: obtaining, via one or more hardware processors, a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths; determining an established traceability across the set of inputs; determining, using the determined established traceability across the set of inputs, via the one or more hardware processors, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements; evaluating, via a trained classifier executed by the one or more hardware processors, stability of the one or more associated execution paths based on at least one of (a) the first set of characteristics associated with the one or more associated execution paths, (b) the second set of characteristics associated with the one or more test cases, and (c) the third set of characteristics associated with the one or more defects; determining (v) a fifth set of characteristics associated with (a) the one or more associated execution paths, and (b) the one or more requirements, and evaluating, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof; and predicting probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity and (c) the fourth set of characteristics associated with the one or more requirements.
In an embodiment, the method further comprises: obtaining an input comprising (i) an updated code from a user based on (a) the one or more requirements and (b) the predicted probability of failure of the code, and (ii) one or more results of one or more unit tests being performed; determining the first set, the second set, the third set, the fourth set and the fifth set of characteristics based on the input; evaluating the stability using the first set, the second set, and the third set of characteristics; determining the functional complexity using the fifth set of characteristics; and predicting probability of failure of the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements and (d) the one or more results of one or more unit tests being performed.
In an embodiment, the first set of characteristics comprises at least one of (i) number of methods per execution path, (ii) number of attributes per execution path, (iii) an inheritance factor, (iv) a coupling factor, (v) time complexity, (vi) space complexity, (vii) number of lines in the one or more associated execution paths, and (viii) number of decisions and nested statements in the one or more associated execution paths.
In an embodiment, the second set of characteristics comprises at least one of (i) number of test steps of the one or more test cases, (ii) number of failures, (iii) a test case severity score, (iv) a priority score, (v) number of functions of test cases, (vi) number of impacted entities, and (vii) number of impacted operations.
In an embodiment, the third set of characteristics comprises at least one of (i) number of defects, (ii) root cause and magnitude, (iii) a defect severity, (iv) a defect priority, (v) number of impacted requirements amongst the one or more requirements, and (vi) number of impacted execution paths amongst the one or more associated execution paths.
In an embodiment, the fourth set of characteristics associated with the one or more requirements comprises at least one of (i) number of impacted entities, (ii) number of impacted operations, (iii) a severity impact score, and (iv) number of previously impacted requirements.
In an embodiment, the fifth set of characteristics comprises at least one of (i) a business criticality of one or more functions comprised in the code and the one or more associated execution paths, and (ii) a requirement severity score.
In an embodiment, the stability of each of the one or more associated execution paths is classified as at least one stability level type amongst two or more stability level types.
In an embodiment, the functional complexity of each of the one or more associated execution paths is classified as at least one complexity level type amongst two or more complexity level types.
In another aspect, there is provided a system for determining an impact of code and predicting areas of failures thereof. The system comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths; determine an established traceability across the set of inputs; determine, using the determined established traceability across the set of inputs, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements; evaluate, via a trained classifier executed by the one or more hardware processors, stability of the one or more associated execution paths based on at least one of (a) the first set of characteristics associated with the one or more associated execution paths, (b) the second set of characteristics associated with the one or more test cases, and (c) the third set of characteristics associated with the one or more defects; determine a fifth set of characteristics associated with (a) the one or more associated execution paths, and (b) the one or more requirements, and evaluate, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof; and predict probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity and (c) the fourth set of characteristics associated with the one or more requirements.
In an embodiment, the one or more hardware processors are further configured by the instructions to: obtain an input comprising (i) an updated code from a user based on (a) the one or more requirements and (b) the predicted probability of failure of the code, and (ii) one or more results of one or more unit tests being performed; determine the first set, the second set, the third set, the fourth set and the fifth set of characteristics based on the input; evaluate the stability using the first set, the second set and the third set of characteristics; determine the functional complexity using the fifth set of characteristics; and predict probability of failure of the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements and (d) the one or more results of one or more unit tests being performed.
In an embodiment, the first set of characteristics comprises at least one of (i) number of methods per execution path, (ii) number of attributes per execution path, (iii) an inheritance factor, (iv) a coupling factor, (v) time complexity, (vi) space complexity, (vii) number of lines in the one or more associated execution paths, and (viii) number of decisions and nested statements in the one or more associated execution paths.
In an embodiment, the second set of characteristics comprises at least one of (i) number of test steps of the one or more test cases, (ii) number of failures, (iii) a test case severity score, (iv) a priority score, (v) number of functions of test cases, (vi) number of impacted entities, and (vii) number of impacted operations.
In an embodiment, the third set of characteristics comprises at least one of (i) number of defects, (ii) root cause and magnitude, (iii) a defect severity, (iv) a defect priority, (v) number of impacted requirements amongst the one or more requirements, and (vi) number of impacted execution paths amongst the one or more associated execution paths.
In an embodiment, the fourth set of characteristics associated with the one or more requirements comprises at least one of (i) number of impacted entities, (ii) number of impacted operations, (iii) severity impact score, and (iv) number of previously impacted requirements.
In an embodiment, the fifth set of characteristics comprises at least one of (i) a business criticality of one or more functions comprised in the code and the one or more associated execution paths, and (ii) a requirement severity score.
In an embodiment, the stability of each of the one or more associated execution paths is classified as at least one stability level type amongst two or more stability level types.
In an embodiment, the functional complexity of each of the one or more associated execution paths is classified as at least one complexity level type amongst two or more complexity level types.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: obtain a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths; determine an established traceability across the set of inputs; determine, using the determined established traceability across the set of inputs, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements; evaluate, via a trained classifier executed by the one or more hardware processors, stability of the one or more associated execution paths based on at least one of (a) the first set of characteristics associated with the one or more associated execution paths, (b) the second set of characteristics associated with the one or more test cases, and (c) the third set of characteristics associated with the one or more defects; determine a fifth set of characteristics associated with (a) the one or more associated execution paths, and (b) the one or more requirements, and evaluate, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof; and predict probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity and (c) the fourth set of characteristics associated with the one or more requirements.
In an embodiment, the computer readable program, when executed on the computing device, further causes the computing device to: obtain an input comprising (i) an updated code from a user based on (a) the one or more requirements and (b) the predicted probability of failure of the code, and (ii) one or more results of one or more unit tests being performed; determine the first set, the second set, the third set, the fourth set and the fifth set of characteristics based on the input; evaluate the stability using the first set, the second set and the third set of characteristics; determine the functional complexity using the fifth set of characteristics; and predict probability of failure of the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements, and (d) the one or more results of one or more unit tests being performed.
In an embodiment, the first set of characteristics comprises at least one of (i) number of methods per execution path, (ii) number of attributes per execution path, (iii) an inheritance factor, (iv) a coupling factor, (v) time complexity, (vi) space complexity, (vii) number of lines in the one or more associated execution paths, and (viii) number of decisions and nested statements in the one or more associated execution paths.
In an embodiment, the second set of characteristics comprises at least one of (i) number of test steps of the one or more test cases, (ii) number of failures, (iii) a test case severity score, (iv) a priority score, (v) number of functions of test cases, (vi) number of impacted entities, and (vii) number of impacted operations.
In an embodiment, the third set of characteristics comprises at least one of (i) number of defects, (ii) root cause and magnitude, (iii) a defect severity, (iv) a defect priority, (v) number of impacted requirements amongst the one or more requirements, and (vi) number of impacted execution paths amongst the one or more associated execution paths.
In an embodiment, the fourth set of characteristics associated with the one or more requirements comprises at least one of (i) number of impacted entities, (ii) number of impacted operations, (iii) severity impact score, and (iv) number of previously impacted requirements.
In an embodiment, the fifth set of characteristics comprises at least one of (i) a business criticality of one or more functions comprised in the code and the one or more associated execution paths, and (ii) a requirements severity score.
In an embodiment, the stability of each of the one or more associated execution paths is classified as at least one stability level type amongst two or more stability level types.
In an embodiment, the functional complexity of each of the one or more associated execution paths is classified as at least one complexity level type amongst two or more complexity level types.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 depicts a system for determining an impact of code and predicting areas of failures thereof, in accordance with an embodiment of the present disclosure.
FIG. 2 depicts a flow-diagram illustrating a method for determining an impact of code and predicting areas of failures thereof, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 3 depicts a knowledge graph for various test cases to determine impacted entities, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 4 depicts a knowledge graph for various test cases to determine impacted operations, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 5 depicts a flow-diagram illustrating a method for determining stability for various code paths, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 6 depicts an exemplary flow-diagram illustrating a method for determining/evaluating functional complexity for the various code paths, using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 7 depicts prediction of probability of failure of the code in one or more associated execution paths before code check-in, using a trained classifier comprised in the system of FIG. 1, in accordance with an embodiment of the present disclosure.
FIG. 8 depicts prediction of probability of failure of the code in the one or more associated execution paths after code check-in, using the trained classifier comprised in the system of FIG. 1, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
Embodiments of the present disclosure provide systems and methods to perform code assurance to assess the impact of code and predict areas of failures. The method of the present disclosure entails a holistic approach of assessing and evaluating the code through one or more established execution paths by bringing into perspective a) the functional depths of impacts of the execution paths, b) feedback from testing (test cases and defects) on stability and business impacts, and c) historical assessment of system behavior and the impacts of the same. The method of the present disclosure employs code parsing, extraction of metrics, and machine-learning algorithms, including but not limited to neural networks, to perform historical data analysis to classify and predict possible areas of failures, in one example embodiment.
In one example, the system and method of the present disclosure address the technical problem of poor code/ technical debt from two perspectives to aid the Continuous Integration/Continuous Deployment (CI/CD) pipeline at the development review gate and the Quality Assurance (QA) review gate:
Before code check-in: Involves assessment of the code based on the code complexity, function complexity, code stability and requirements leveraging previous history to provide insights and recommendations to the development team on execution paths that are critical and prone to failures.
After code check-in: Involves assessment of the code complexity, function complexity, code stability and unit test results after the code is updated to provide insights and recommendations to both the development team and the test team on a) critical execution paths prone to failure, b) actions the development team needs to take in case of code shortcomings, and c) a check-point for entry into the quality assurance (QA) phase.
The system and method of the present disclosure further help understand the differences in the testing between the development and test environments by providing insights on the differences in the tests on the execution paths.
Further, the system and method of the present disclosure enable assessment of structural and functional complexity associated with the execution through multiple features such as:
a) FUNCTIONAL COMPLEXITY: i) business criticality of the function/feature associated with the execution path, ii) requirements severity score of the feature/function associated with the execution path, iii) number of functions/features associated with the execution path, and iv) complexity score of the features based on the code associated with the execution path.
b) STRUCTURAL COMPLEXITY: i) methods per execution path, ii) attributes per execution path, iii) execution path inheritance factor, iv) number of lines of code in the execution path, v) time and space complexity, and vi) number of decision and nested statements per path.
Further, the system and method of the present disclosure enable classification of the execution paths, based on the complexity of the functions/features and code, into levels as needed for the solution, and leverage various classification models such as support vector machines (SVM), random forest, logistic regression, and neural networks, with the classifier having the highest accuracy chosen for processing.
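By way of illustration only, this candidate-classifier comparison could be sketched with the open-source Weka library; the disclosure does not name a toolkit, and the ARFF file name and its feature layout below are assumptions:

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.classifiers.functions.SMO;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClassifierSelector {
    public static void main(String[] args) throws Exception {
        // Each instance: execution-path features; class attribute: level label.
        Instances data = DataSource.read("execution_paths.arff"); // hypothetical dataset
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] candidates = { new SMO(), new RandomForest(), new Logistic() };
        Classifier best = null;
        double bestAccuracy = -1.0;
        for (Classifier candidate : candidates) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(candidate, data, 10, new Random(1)); // 10-fold CV
            if (eval.pctCorrect() > bestAccuracy) {
                bestAccuracy = eval.pctCorrect();
                best = candidate;
            }
        }
        best.buildClassifier(data); // retain the highest-accuracy classifier
        System.out.printf("Selected %s (accuracy %.1f%%)%n",
                best.getClass().getSimpleName(), bestAccuracy);
    }
}

A neural network candidate (e.g., weka.classifiers.functions.MultilayerPerceptron) could be appended to the same candidate array.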
In one embodiment, a system and method to perform code assurance to assess the impact of code (before code check-in) and predict areas of failures is disclosed. In one example, the method leverages the execution path complexity, execution path stability, and related requirement features to predict possible execution path failures and the recommendations for the same. Further, the method assesses the impact of the code (after code check-in) and predicts areas of failure leveraging execution path related requirement features, execution path complexity, execution path stability, execution paths changed, and execution path unit test results.
Referring now to the drawings, and more particularly to FIGS. 1 through 8, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 depicts a system 100 for determining an impact of code and predicting areas of failures thereof, in accordance with an embodiment of the present disclosure. The system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices (e.g., mobile communication device such as smart phones, tablet/tablet computer), workstations, mainframe computers, and the like.
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises data pertaining to one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code, an established traceability between the one or more test cases and one or more associated execution paths, and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
FIG. 2, with reference to FIG. 1, depicts a flow-diagram illustrating a method for determining an impact of code and predicting areas of failures thereof, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors and are configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, the flow diagram as depicted in FIG. 2 and other FIGS. 3-8 depicting various embodiments of the present disclosure. In an embodiment, at step 202 of the present disclosure, the one or more hardware processors 104 obtain a set of inputs comprising one or more test cases, one or more defects observed during an execution of the one or more test cases, an associated domain knowledge, one or more requirements, programming language code through one or more established execution paths, and an established traceability between the one or more test cases and one or more associated execution paths. The set of inputs is provided in below Table 1 by way of illustrative example:
Table 1

Row 1. Test case: Module-A sends information to Module-B; Defect: Configuration; Code path 1:
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc = new Abc();
abc.name = "hello";}}

Row 2. Test case: File2 is sent to Module-E; Defect: Requirement; Code path 2:
class Def{
public static int count;
public static void Example(){
Abc abc = new Abc();
abc.name = "hi";}
public static void main(String[] args){
Example();}}

Row 3. Test case: Module-E sends information to Module-A; Defect: No Defect; Code path 1 (same code as Row 1).

Row 4. Test case: Module-F processes 3 files; Defect: Performance; Code path 3:
public class Test{
public static void main(String[] args){
int x = 30;
int y = 10;
if(x == 30){
if(y == 10){
System.out.print("x=30 and y=10");}}}}

Row 5. Test case: Processed data is sent to Module-C; Defect: No Defect; Code path 4:
class GFG{
public static String name;
public static void getName(){
name = "hello";
System.out.println(name);}
public static void main(String[] args){
getName();}}

An example of the associated domain knowledge is: "Knowledge Base: File1 is taken as input into Module-A. Module-A processes the file and sends information to Module-B, Module-C and Module-D. Module-C takes as input Module-D's output and creates a File2, which is sent to Module-E. Module-E processes the file and sends information to Module-A and to Module-F, which processes 3 files (File3, File4 and File5) and sends the processed data to Module-C."
In an embodiment of the present disclosure, at step 204, the one or more hardware processors 104 determine an established traceability across the set of inputs. Examples of the determined established traceability are depicted in below Table 2:
Table 2

Code path | Function
Code path 1 (class Abc / class childclass) | Login
Code path 2 (class Def) | Payment

As can be seen from Table 2 above, functions are derived for defects and test cases based on entity extraction, which is indicative of the established traceability being determined, in one example embodiment of the present disclosure.
Another example of the determined established traceability is depicted in below Table 3:
Table 3

Test case | Function
Module-A sends information to Module-B | Login
File2 is sent to Module-E | Login
Module-E sends information to Module-A | Payments

As can be seen from Table 3 above, using the function, the test cases are mapped back to the requirements, indicating which test cases/defects are impacted for a particular requirement, in one example embodiment of the present disclosure.
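A minimal sketch of this entity-based mapping is given below; the keyword-containment matching is a simplified stand-in for the NLP entity extraction described later in this disclosure, and the function-to-entities dictionary passed in is hypothetical:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TraceabilityMapper {
    // Maps each test case to a function by matching entities mentioned in the
    // test-case text against a function-to-entities dictionary; a later
    // dictionary entry overrides an earlier match.
    static Map<String, String> mapTestCases(List<String> testCases,
                                            Map<String, List<String>> functionEntities) {
        Map<String, String> trace = new LinkedHashMap<>();
        for (String testCase : testCases) {
            String text = testCase.toLowerCase();
            for (Map.Entry<String, List<String>> fn : functionEntities.entrySet()) {
                for (String entity : fn.getValue()) {
                    if (text.contains(entity.toLowerCase())) {
                        trace.put(testCase, fn.getKey());
                    }
                }
            }
        }
        return trace;
    }
}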
In an embodiment of the present disclosure, at step 206, the one or more hardware processors 104 determine, using the determined established traceability across the set of inputs, via the one or more hardware processors, (i) a first set of characteristics associated with the one or more associated execution paths, (ii) a second set of characteristics associated with the one or more test cases, (iii) a third set of characteristics associated with the one or more defects, and (iv) a fourth set of characteristics associated with the one or more requirements. The first set of characteristics comprises at least one of (a) number of methods per execution path, (b) number of attributes per execution path, (c) an inheritance factor, (d) a coupling factor, (e) time complexity, (f) space complexity, (g) number of lines of code in the one or more associated execution paths, and (h) number of decisions and nested statements in the one or more associated execution paths, in one example embodiment of the present disclosure.
The number of methods per execution path (also referred to as Methods-Per-Class Factor (MPCF)) is determined as below:
MPCF = # Public Methods / (# Public Methods + # Non-Public Methods)
A large value of MPCF shows that:
The class may have too much functionality (a higher degree or number of functionalities)
The reusability of the class will increase
The implementation of methods is good.
The number of attributes per execution path (also referred to as Attributes-Per-Class Factor (APCF)) is determined as below:
APCF = # Private or protected attributes / (# Private or protected attributes + # Non-private, non-protected attributes)
A large value of APCF shows that:
The object has more properties in it
There is a high potential impact on children (subclasses)
The time and effort of constructing the class increase.
Inheritance factor or Method Inheritance Factor (MIF):
MIF = # Inherited methods / (# Inherited methods + # Defined methods)
A large value of MIF indicates that:
It is more difficult to predict the behavior of the class
The abstraction implied by the super class may be violated
A designing difficulty is generally formed.
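The disclosure defines MPCF, APCF and MIF only as ratios; as one possible (assumed) realization, they can be computed for a loaded class via Java reflection. Counting every superclass-declared method as "inherited" is an approximation, since private and static methods are not truly inherited:

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;

public class ClassMetrics {
    // MPCF: share of declared methods that are public.
    static double mpcf(Class<?> cls) {
        Method[] declared = cls.getDeclaredMethods();
        if (declared.length == 0) return 0.0;
        long pub = Arrays.stream(declared)
                .filter(m -> Modifier.isPublic(m.getModifiers())).count();
        return (double) pub / declared.length;
    }

    // APCF: share of declared attributes that are private or protected.
    static double apcf(Class<?> cls) {
        Field[] fields = cls.getDeclaredFields();
        if (fields.length == 0) return 0.0;
        long hidden = Arrays.stream(fields)
                .filter(f -> Modifier.isPrivate(f.getModifiers())
                        || Modifier.isProtected(f.getModifiers())).count();
        return (double) hidden / fields.length;
    }

    // MIF: inherited methods over inherited plus locally defined methods.
    static double mif(Class<?> cls) {
        int defined = cls.getDeclaredMethods().length;
        int inherited = 0;
        for (Class<?> p = cls.getSuperclass(); p != null; p = p.getSuperclass()) {
            inherited += p.getDeclaredMethods().length;
        }
        return (inherited + defined) == 0 ? 0.0 : (double) inherited / (inherited + defined);
    }
}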
Number of lines of code in the one or more associated execution paths refers to a count of the lines of code after removal of empty spaces and comments.
Number of nested statements refers to a count of the number of nested statements within the execution path.
Time complexity is computed at execution path run time, indicating the time taken for the execution path to complete.
Space complexity is computed at execution path run time, indicating the total space taken by a program, including auxiliary space and input size. Auxiliary space is the temporary space used by the program during execution.
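For the line-count and decision/nested-statement metrics, a simple text-based extraction is one possible reading (illustrative only; a production code parser would more likely operate on an abstract syntax tree):

import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathSizeMetrics {
    private static final Pattern DECISION =
            Pattern.compile("\\b(if|for|while|switch|case)\\b");

    // Counts lines after dropping blank lines and comment-only lines.
    static int linesOfCode(List<String> sourceLines) {
        int count = 0;
        boolean inBlockComment = false;
        for (String raw : sourceLines) {
            String line = raw.trim();
            if (inBlockComment) {
                if (line.contains("*/")) inBlockComment = false;
                continue;
            }
            if (line.isEmpty() || line.startsWith("//")) continue;
            if (line.startsWith("/*")) {
                if (!line.contains("*/")) inBlockComment = true;
                continue;
            }
            count++;
        }
        return count;
    }

    // Approximates decision/nested statements by counting branching keywords.
    static int decisionStatements(List<String> sourceLines) {
        int count = 0;
        for (String line : sourceLines) {
            Matcher m = DECISION.matcher(line);
            while (m.find()) count++;
        }
        return count;
    }
}

Applied to Code path 3 from Table 1, linesOfCode returns 7 and decisionStatements returns 2, matching the values shown for that path in Table 4 below.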
Below table 4 is an example of the first set of characteristics being determined:
Table 4

Code path | Methods | Attributes | Inheritance factor | Coupling factor | Time complexity | Space complexity | Lines of code | Decision and nested statements
Code path 1 | 1 | 1 | 1 | 0 | 1 | 1 | 6 | 0
Code path 2 | 1 | 1 | 0 | 1 | 0 | 1 | 7 | 0
Code path 1 | 1 | 1 | 1 | 0 | 1 | 1 | 6 | 0
Code path 3 | 0 | 0 | 0 | 0 | 1 | 0 | 7 | 2

Similarly, the second set of characteristics comprises at least one of (a) number of test steps of the one or more test cases, (b) number of failures, (c) a test case severity score, (d) a priority score, (e) number of functions of test cases, (f) number of impacted entities, and (g) number of impacted operations, in one example embodiment.
A score for each impacted entity is computed by way of the following illustrative example. The Impacted Entities Score (TIE) is TIE = V + E (where V is the node (entity/function) and E are the edges from the node) if there is only one function/entity; otherwise:

$$IE = (V_0 + E_0) + \sum_{n=1}^{\infty}\left(w + \tfrac{n-1}{n}\,x\right)^{y}\,(V_n + E_n)$$

where $V_0$ is the primary node, $V_1$ is the secondary node, $V_n$ is the nth-layer node relative to node $V_0$, and $\left(w + \tfrac{n-1}{n}\,x\right)^{y}$ is the weightage given to the subsequent node relative to the primary node to calculate the reduced impact.

The Impacted Operations Score (TIO) is TIO = E (where E are the edges (verbs) from the node (function)) if there is only one function/entity; otherwise:

$$IO = E_0 + \sum_{n=1}^{\infty}\left(w + \tfrac{n-1}{n}\,x\right)^{y}\,E_n$$

where $E_0$ is the primary edge, $E_1$ is the secondary edge, $E_n$ is the nth-layer edge relative to $E_0$, and $\left(w + \tfrac{n-1}{n}\,x\right)^{y}$ is the weightage given to the subsequent edges relative to the primary edge to calculate the reduced impact.

A Test Case Severity Score (TCS) is defined as the severity of the function impacted by the test case and is calculated from the number of shortest paths that pass through the given node/functional feature:

$$g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$

where $v$ is the given node/functional feature, $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(v)$ is the number of shortest paths from node $s$ to node $t$ that pass through $v$.
Number of steps in the test case (TS) = Direct score based on the number of steps in the test case.
Number of times of execution failure (TEF) = Direct score on the number of times a test case has failed in the past.
Number of execution paths (NEP) = Direct score of the number of execution paths related to the test case.
Priority of the test case = Direct score of the priority of the test case as indicated in the source of record.
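A direct transcription of the TIE/TIO scores into code is sketched below, under the assumption that the layer weight is the reconstructed term (w + ((n-1)/n)·x)^y and that the sum is truncated at the deepest populated layer rather than carried to infinity; the array layout (index = layer) is likewise an assumption:

public class ImpactScores {
    // v[n] and e[n] hold the node and edge counts at layer n (index 0 = primary).
    static double impactedEntitiesScore(int[] v, int[] e, double w, double x, double y) {
        double score = v[0] + e[0];
        for (int n = 1; n < v.length; n++) {
            double weight = Math.pow(w + ((n - 1.0) / n) * x, y); // decaying layer weight
            score += weight * (v[n] + e[n]);
        }
        return score;
    }

    // Impacted-operations score considers only edges (verbs).
    static double impactedOperationsScore(int[] e, double w, double x, double y) {
        double score = e[0];
        for (int n = 1; n < e.length; n++) {
            score += Math.pow(w + ((n - 1.0) / n) * x, y) * e[n];
        }
        return score;
    }
}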
Below table 5 is an example of the second set of characteristics being determined:
Table 5

Code path | Number of associated test cases | Number of failures | Test case severity score | Priority score | Functions associated
Code path 1 | 2 | 1 | 0.3 | 1 | 1
Code path 2 | 1 | 0 | 0.2 | 2 | 1
Code path 1 | 1 | 1 | 0.3 | 1 | 1
Code path 3 | 1 | 0 | 0.5 | 1 | 0

The third set of characteristics comprises at least one of (a) number of defects, (b) root cause and magnitude, (c) a defect severity, (d) a defect priority, (e) number of impacted requirements amongst the one or more requirements, and (f) number of impacted execution paths amongst the one or more associated execution paths.
Defect Severity (DS) refers to the severity of the functional feature impacted by the defects and is calculated from the number of shortest paths that pass through a given node/functional feature:

$$g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$

where $v$ is the given node/functional feature, $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(v)$ is the number of shortest paths from node $s$ to node $t$ that pass through $v$.
The number of impacted requirements (DIR) is computed by extracting entity vectors for both older requirements (Vector A) and defects (Vector B) and performing cosine similarity between the vectors:

$$\text{sim}(A,B) = \cos(\theta) = \frac{A \cdot B}{\lVert A \rVert\,\lVert B \rVert}$$
Number of impacted execution paths (IEP) = a count of the number of execution paths a defect is impacting.
Number of defects per execution path (DPE) = a count of the defects associated with each execution path.
Defect Priority and Defect Severity are directly read from the source system where the defect data resides.
Defect RCA (root cause analysis) and Magnitude (DRM): the root causes of the defects are systematically arrived at using a bag-of-words (BOW) model, and the RCA is converted into one-hot encodings. The magnitude, which is pre-determined, is then mapped. There is a pre-defined set of root causes configured along with a score for the magnitude of the problem based on that root cause. For example, a defect with "requirements" as the RCA results in a magnitude of 10, whereas a defect with "test data" as the root cause results in a magnitude of 1.
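The cosine-similarity step used for DIR can be sketched as follows; how the entity vectors are built (e.g., entity counts per requirement or defect) and the impact threshold are left open by the disclosure and are assumptions here:

import java.util.List;

public class RequirementSimilarity {
    // Cosine similarity between two equal-length entity-frequency vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        if (normA == 0 || normB == 0) return 0.0;
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Counts requirements whose similarity to the defect vector crosses a
    // threshold; the threshold value is an assumption, the disclosure states none.
    static int countImpacted(double[] defectVector,
                             List<double[]> requirementVectors, double threshold) {
        int impacted = 0;
        for (double[] requirement : requirementVectors) {
            if (cosine(defectVector, requirement) >= threshold) impacted++;
        }
        return impacted;
    }
}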
Below table 6 is an example of the third set of characteristics being determined:
Table 6

Code path | Number of defects | Root cause and magnitude | Defect severity | Defect priority
Code path 1 | 1 | 1 | 2 | 1
Code path 2 | 1 | 2 | 1 | 1
Code path 1 | 1 | 2 | 3 | 2
Code path 3 | 1 | 1 | 1 | 1

Defect severity and defect priority are directly read from a source system where defect data resides, in one example embodiment of the present disclosure.
The fourth set of characteristics associated with the one or more requirements comprises at least one of (a) number of impacted entities, (b) number of impacted operations, (c) severity impact score, and (d) number of previously impacted requirements.
A score is computed for each of the impacted entities being determined. The Impacted Entities Score (IE) is IE = V + E (where V is the node (entity/function) and E are the edges from the node) if there is only one function/entity; otherwise:

$$IE = (V_0 + E_0) + \sum_{n=1}^{\infty}\left(w + \tfrac{n-1}{n}\,x\right)^{y}\,(V_n + E_n)$$

where $V_0$ is the primary node, $V_1$ is the secondary node, $V_n$ is the nth-layer node relative to node $V_0$, and $\left(w + \tfrac{n-1}{n}\,x\right)^{y}$ is the weightage given to the subsequent node relative to the primary node to calculate the reduced impact.

The severity impact score, also referred to as the Requirement Severity Impact Score (RS), is calculated based on the importance of a function in the requirement within an application/product, currently calculated as the number of shortest paths that pass through a given node:

$$g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$

where $v$ is the given node/functional feature, $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(v)$ is the number of shortest paths from node $s$ to node $t$ that pass through $v$.

The number of previously impacted requirements, also referred to as the number of impacted older requirements (IR), is computed by extracting entity vectors for both older requirements (Vector A) and defects (Vector B) and performing cosine similarity between the vectors:

$$\text{sim}(A,B) = \cos(\theta) = \frac{A \cdot B}{\lVert A \rVert\,\lVert B \rVert}$$
Below table 7 is an example of the fourth set of characteristics being determined:
Table 7

Code path | Number of impacted operations | Number of impacted entities | Severity impact score | Number of previously impacted requirements
Code path 1 | 1 | 1 | 2 | 1
Code path 2 | 1 | 2 | 1 | 1
Code path 1 | 1 | 2 | 3 | 2
Code path 3 | 1 | 1 | 1 | 1

In relation to above Table 7, FIG. 3, with reference to FIGS. 1 through 2, depicts a knowledge graph for various test cases to determine impacted entities, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure.
Number of impacted entities: The associated functionality of the execution path is already established. There is an underlying knowledge graph created for the entire system as depicted in FIG. 3. From the knowledge graph, the function in question is evaluated for number of impacted entities as per equation below:
IE = (V_0 + E_0) + \sum_{n=1}^{\infty}\left(w + \frac{n-1}{n}\,x\right)^{y}(V_n + E_n)
Computation of the number of impacted entities for module-f is shown below by way of an example:
V0 = 4, E0 = 4,
V1 = 1, E1 = 1,
V2 = 1, E2 = 1,
V3 = 1, E3 = 1,
w = 0.8, x = 0.1, y = 5; solving the above equation gives the value of IE as:
IE = 8 + 1.6 + 1.2 + 0.8 = 11.6
(the per-layer weightage applied to (V_n + E_n) thus works out to 0.8, 0.6 and 0.4 for n = 1, 2 and 3 respectively)
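For concreteness, a small sketch of this computation follows. As printed, the summation's weightage term does not exactly reproduce the worked example's per-layer weights of 0.8, 0.6 and 0.4, so this sketch (a minimal illustration, not the claimed method) takes the per-layer weightage as an explicit input rather than hard-coding one closed form.

public class ImpactedEntitiesScore {
    // IE = (V0 + E0) + sum over layers n of weight(n) * (Vn + En)
    static double impactedEntitiesScore(int v0, int e0, int[] vn, int[] en, double[] weight) {
        double ie = v0 + e0;
        for (int n = 0; n < vn.length; n++)
            ie += weight[n] * (vn[n] + en[n]);
        return ie;
    }

    public static void main(String[] args) {
        // Values from the module-f example: V0 = 4, E0 = 4; (Vn + En) = 2 per layer.
        int[] vn = {1, 1, 1}, en = {1, 1, 1};
        double[] weight = {0.8, 0.6, 0.4}; // per-layer weightage observed in the worked example
        System.out.println("IE = " + impactedEntitiesScore(4, 4, vn, en, weight)); // IE ≈ 11.6
    }
}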
In relation to above Table 7, FIG. 4, with reference to FIGS. 1 through 3, depicts a knowledge graph for various test cases to determine impacted operations, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure.
Number of impacted operations: Similarly, after extracting the entity in question from the requirements, the system 100 traverses the knowledge graph of the documentation until it reaches the entity for which the impact assessment is being performed. Once the entity is identified, the system identifies the number of branches connecting to actions or verbs, which indicate the number of operations the entity performs and the number of operations performed on it, incrementing an impact counter along the way. All the actions or verbs are identified by leveraging Natural Language Processing (NLP) techniques as known in the art. Below is an example of determining the number of impacted operations for a module:
Number of primary edges = 3
Number of secondary edges = 3
w = 0.8, x = 0.1, y = 5
Applying the formula yields IO = 3.06.
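The disclosure does not give code for this traversal; the sketch below is a hypothetical, simplified illustration of the edge-counting step only, with a hand-built map in place of the NLP-derived knowledge graph (the verbEdges map and its verb labels are invented for illustration). It reproduces the primary/secondary edge counts of the example above.

import java.util.*;

public class ImpactedOperations {
    // Knowledge-graph edges from an entity to action/verb nodes; contents are hypothetical.
    static Map<String, List<String>> verbEdges = Map.of(
        "module-f", List.of("sends", "loads", "converts"),
        "sends", List.of("notifies"),
        "loads", List.of("parses"),
        "converts", List.of("writes"));

    public static void main(String[] args) {
        String entity = "module-f";
        List<String> primary = verbEdges.getOrDefault(entity, List.of());
        int primaryEdges = primary.size();   // edges from the entity itself
        int secondaryEdges = 0;              // edges one layer further out
        for (String v : primary)
            secondaryEdges += verbEdges.getOrDefault(v, List.<String>of()).size();
        System.out.println("Primary edges = " + primaryEdges + ", secondary edges = " + secondaryEdges);
        // These counts, together with w, x and y, feed the weighted IO formula above.
    }
}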
The severity impact score is calculated based on the formula documented above. A sample computation of g(v) for each node is shown below:
{'file1': 0.0, 'module-a': 0.6666666666666667, 'module-b': 0.0, 'module-c': 0.08888888888888889, 'module-d': 0.11111111111111112, 'file2': 0.011111111111111112, 'module-e': 0.06666666666666667, 'module-f': 0.4222222222222222, 'file3': 0.0, 'file4': 0.03333333333333333, 'file5': 0.0}
Number of other requirements impacted:
New Requirement: Manipulator process: The manipulator should, in addition to the existing process, send out a message to the dynamic module.
Existing Requirements: Below table 8 depicts example requirements:
Table 8
S. No Requirement Name Requirement Description Priority
1 Manipulator Module The manipulator module should take as argument a process file which is a batch file that contains a description of the actions to be carried out by network manipulator. An action consists of one input part and one or more output parts High
2 Manipulator Input In the input part, network manipulator should either load a network file (already produced by new or another generator) or generate a network from a set of parameters given in a specification file. High
3 Input converter module The input converter module should be controlled by the network filename extension (if known by nem). If the extension is .specif, the generator module is called, otherwise the loader module is called by the converter if the network file format is recognized. High
4 Manipulator output In the output part, network manipulator should be able to chain any of these tasks: analyse the graph, convert it into another format available for output, and run a static simulation upon it Medium
5 Static simulator module The static simulator module should only allow the fast simulation of simple protocol mechanisms. As it is not based on a discrete event engine and thus does not take time into account (hence the name ‘static’), it is quite limited and not included in the public distribution of nem. Medium

Output: Requirements 1, 2 and 4 are affected by the new requirement. The score against the new requirement is 3.
Referring to steps of FIG. 2, in an embodiment of the present disclosure, at step 208, the one or more hardware processors 104 evaluate, via a trained classifier, stability of the one or more associated execution paths based on at least one of (i) the first set of characteristics associated with the one or more associated execution paths, (ii) the second set of characteristics associated with the one or more test cases, and (iii) the third set of characteristics associated with the one or more defects. FIG. 5, with reference to FIGS. 1-4, depicts a flow-diagram illustrating a method for determining stability for various code paths, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. The evaluated stability for the various code paths is illustrated below by way of example in Table 9:
Table 9
Code Paths Stability
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc=new Abc();
abc.name="hello";}} Critical
class Def{
public static int num;
public static void Example(){
Abc abc=new Abc();
abc.name="hi";}
public static void main(String []args){
Example();}} High
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc=new Abc();
abc.name="hello";}} Low
public class Test{
public static void main(String args[]){
int x=30;
int y=10;
if(x==30){
if(y==10){
System.out.print("x=30 and y=10");}}}} Low
class GFG{
public static String name;
public static void getName(){
name="hello";
System.out.println(name);}
public static void main(String []args){
getName();}} Medium
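By way of a hedged illustration only: the disclosure does not specify a particular library for the trained classifier, but it does name SVM and Random Forest as candidate models, so the sketch below uses Weka's RandomForest as one plausible choice. The ARFF file name, the attribute layout (the first, second and third sets of characteristics as features, stability as the class label) and the class encoding are all assumptions for the sake of the example.

import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class StabilityClassifier {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: one row per execution path, with the path, test-case
        // and defect characteristics as attributes and "stability" as the class label.
        Instances data = new DataSource("execution_paths.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute = stability class
        RandomForest rf = new RandomForest();
        rf.buildClassifier(data);
        // Predict the stability class of the first path (e.g., Critical/High/Medium/Low).
        double label = rf.classifyInstance(data.instance(0));
        System.out.println("Predicted stability: " + data.classAttribute().value((int) label));
    }
}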
In an embodiment of the present disclosure, at step 210, the one or more hardware processors 104 determine a fifth set of characteristics associated with (i) the one or more associated execution paths, and (ii) the one or more requirements, and evaluate, via the trained classifier executed by the one or more hardware processors, a functional complexity thereof. The fifth set of characteristics comprises at least one of (a) a business criticality of one or more functions in the code comprised in the one or more associated execution paths, and (b) a requirement severity score. Table 10 below depicts functions comprised in the programming language code, provided by way of example:
Table 10
Code Paths Function
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc=new Abc();
abc.name="hello";}} Module F

The business criticality of the associated functions is obtained from the business (or from the nature of engagement between different parties/entities), in one example embodiment of the present disclosure.
FIG. 6, with reference to FIGS. 1-5, depicts an exemplary flow-diagram illustrating a method for determining/evaluating functional complexity for the various code paths, using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. The evaluated functional complexity for the various code paths is illustrated below by way of example in Table 11:
Table 11
Code Paths Functional complexity
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc=new Abc();
abc.name="hello";}} Critical
class Def{
public static int num;
public static void Example(){
Abc abc=new Abc();
abc.name="hi";}
public static void main(String []args){
Example();}} Critical
class Abc{
public String name;}
class childclass extends Abc{
public static void Xyz(){
Abc abc=new Abc();
abc.name="hello";}} Low
public class Test{
public static void main(String args[]){
int x=30;
int y=10;
if(x==30){
if(y==10){
System.out.print("x=30 and y=10");}}}} Critical
class GFG{
public static String name;
public static void getName(){
name="hello";
System.out.println(name);}
public static void main(String []args){
getName();}} Low

In an embodiment of the present disclosure, at step 212, the one or more hardware processors 104 predict the probability of failure of the code in the one or more associated execution paths based on (a) the evaluated stability, (b) the determined functional complexity, and (c) the fourth set of characteristics associated with the one or more requirements. Steps 202 through 212 depict the flow for prediction of the probability of failure of the code in the one or more associated execution paths before code check-in, in one example embodiment.
FIG. 7, with reference to FIGS. 1 through 6, depicts prediction of probability of failure of the code in the one or more associated execution paths before code check-in, using the trained classifier comprised in the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure.
Based on the probability of failure of the code in the associated execution paths (e.g., an execution path with a low, medium, or high probability of failure), users work on modifying the code, and an input is obtained comprising (a) the updated code from the users, (b) the one or more requirements, (c) the predicted probability of failure of the code, and (d) one or more results of one or more unit tests being performed. Once the updated code is obtained, the first, second, third, fourth and fifth sets of characteristics are determined once again for the modified/updated code. Further, the stability is re-evaluated after receipt of the updated code, using the first, second and third sets of characteristics. The functional complexity is then determined/evaluated using the fifth set of characteristics, and the probability of failure is predicted for the updated code based on (a) the evaluated stability, (b) the determined functional complexity, (c) the fourth set of characteristics associated with the one or more requirements, and (d) the one or more results of the one or more unit tests being performed. Unit testing, as known in the art, refers to a software testing method by which individual units of source code (sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures) are tested to determine whether they are fit for use.
FIG. 8, with reference to FIGS. 1 through 7, depicts prediction of probability of failure of the code in the one or more associated execution paths after code check-in, using the trained classifier comprised in the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure.
Conventionally, in many complex programs and DevOps methodologies, due to the lack of time and the complexity of applications, the functional impacts of code are almost always underestimated, leading to a significant portion of testing-related failures being attributed to the code and leaving complex programs dependent on subject matter experts to identify and flag the same. Embodiments of the present disclosure provide systems and methods that implement machine learning models (e.g., SVM, Random Forest and the like) for code assurance and automated impact analysis, wherein traceability between the requirements, code and test cases is established by use of artificial intelligence techniques (comprised in the system 100) and code execution paths are captured. Further, the present disclosure enables the systems and methods described herein to assess the code complexity based on key feature attributes to identify key attributes for a given functionality. Furthermore, historical data stored in the memory 102 of the system 100 is analyzed to perform correlation and identify patterns of failures. In other words, embodiments of the present disclosure provide a system and method that perform code assurance to assess code impact and predict failure areas by assessing and evaluating the code through one or more established execution paths, bringing into perspective (a) functional depths of impacts of execution paths, (b) feedback from testing (test cases and defects) on stability and business impacts, and (c) historical assessment of system behavior and impacts thereof, by employing code parsers, metrics extractors and machine-learning algorithms to perform historical data analysis to classify and predict possible areas of failures. Moreover, the system 100 predicts possible types of failures based on the data analyzed and processed by the above components.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 202021034020-FORM 1 [07-08-2020(online)].pdf 2020-08-07
2 202021034020-FORM 18 [07-08-2020(online)].pdf 2020-08-07
3 202021034020-REQUEST FOR EXAMINATION (FORM-18) [07-08-2020(online)].pdf 2020-08-07
4 202021034020-STATEMENT OF UNDERTAKING (FORM 3) [07-08-2020(online)].pdf 2020-08-07
5 202021034020-DECLARATION OF INVENTORSHIP (FORM 5) [07-08-2020(online)].pdf 2020-08-07
6 202021034020-COMPLETE SPECIFICATION [07-08-2020(online)].pdf 2020-08-07
7 202021034020-DRAWINGS [07-08-2020(online)].pdf 2020-08-07
8 202021034020-FIGURE OF ABSTRACT [07-08-2020(online)].jpg 2020-08-07
9 202021034020-FORM-26 [12-11-2020(online)].pdf 2020-11-12
10 202021034020-Proof of Right [03-02-2021(online)].pdf 2021-02-03
11 Abstract1.jpg 2021-10-19
12 202021034020-FER.pdf 2022-02-18
13 202021034020-FER_SER_REPLY [27-06-2022(online)].pdf 2022-06-27
14 202021034020-CLAIMS [27-06-2022(online)].pdf 2022-06-27
15 202021034020-COMPLETE SPECIFICATION [27-06-2022(online)].pdf 2022-06-27
16 202021034020-OTHERS [27-06-2022(online)].pdf 2022-06-27
17 202021034020-US(14)-HearingNotice-(HearingDate-19-11-2024).pdf 2024-11-05
18 202021034020-Correspondence to notify the Controller [14-11-2024(online)].pdf 2024-11-14
19 202021034020-FORM-26 [18-11-2024(online)].pdf 2024-11-18
20 202021034020-FORM-26 [18-11-2024(online)]-1.pdf 2024-11-18
21 202021034020-Written submissions and relevant documents [29-11-2024(online)].pdf 2024-11-29
22 202021034020-PatentCertificate02-12-2024.pdf 2024-12-02
23 202021034020-IntimationOfGrant02-12-2024.pdf 2024-12-02

Search Strategy

1 202021034020E_18-02-2022.pdf

ERegister / Renewals

3rd: 27 Feb 2025

From 07/08/2022 - To 07/08/2023

4th: 27 Feb 2025

From 07/08/2023 - To 07/08/2024

5th: 27 Feb 2025

From 07/08/2024 - To 07/08/2025

6th: 31 Jul 2025

From 07/08/2025 - To 07/08/2026