Method And System For Assessing Quality Of Requirements Through Prediction Of Failures

Abstract: Requirements play a key role in the software development process. A method and system for assessing the quality of requirements through predicting failures in software development is provided. The system is configured to assess, evaluate, and predict software development requirement-related failures. The method entails a holistic approach of assessing and evaluating requirements by bringing into perspective a) the functional depth of the requirement's impacts, b) feedback from testing (test cases and defects) on those impacts, and c) a historical assessment of system behavior and its impacts. The system employs natural language processing, dependency trees and maximum spanning graphs to access the requirements, establish the functional context and extract their characteristics; in addition, it leverages machine-learning algorithms to perform historical data analysis to classify and predict possible areas of failure.

Patent Information

Application #
Filing Date
07 August 2020
Publication Number
06/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ip@legasis.in
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai - 400021, Maharashtra, India

Inventors

1. KESARY, Shailaja
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India
2. RAGUKUMAR, Preethi
Tata Consultancy Services Limited, Brigade Bhuwalka Icon, Whitefield Main Road, Pattandur Agrahara, Whitefield, Bangalore - 560067, Karnataka, India
3. TUMMALA, Amulya Venkata Sai
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India
4. SILIVERI, Tejaswi
Tata Consultancy Services Limited, Synergy Park Unit 1 - Phase I, Premises No.2-56/1/36, Survey No.26, Gachibowli, Serilingampally Mandal, R R District, Hyderabad - 500019, Telangana, India

Specification

Claims:

1. A processor implemented method (400) for assessing and improving quality of requirements through predicting requirement related failures in software development, the method comprising:

obtaining, via one or more hardware processors, a plurality of input data, wherein the plurality of input data is stored in a database, wherein the plurality of input data comprises one or more of:
the requirements across all releases of the software,
a domain knowledge acquired from the application or the software,
a plurality of test cases, wherein the plurality of test cases are generated for the requirements, and
a plurality of defects that occurred during the running of the plurality of test cases, and a plurality of incidents that occurred during production use (402);
preprocessing, via the one or more hardware processors, the plurality of input data, wherein the preprocessing results in the generation of functional features (404);
establishing, via the one or more hardware processors, a traceability between the different plurality of input data (406);
establishing a functional structure, via the one or more hardware processors, using a plurality of dependency trees, a plurality of maximum spanning graphs and a plurality of natural language processing (NLP) techniques, wherein the functional structure along with the established traceability outlines associations and inter-dependencies between the extracted functional features (408);
performing, via the one or more hardware processors, functional impact assessment of the requirements by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of requirement features (410);
performing, via the one or more hardware processors, functional impact assessment of the plurality of defects by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of defect features (412);
performing, via the one or more hardware processors, functional impact assessment of the plurality of test cases by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of test case features (414);
applying, via the one or more hardware processors, a classification model generated using the plurality of defect features, to classify the plurality of defects into two or more classes of defects based on their impact using neural network techniques (416);
applying, via the one or more hardware processors, a classification model generated using the plurality of test case features to classify the plurality of test cases into two or more classes of test cases based on their impact using the neural network techniques (418);
identifying, via the one or more hardware processors, a set of business critical requirements using a classification model generated using the plurality of requirement features, the plurality of defect features and the plurality of test case features (420);
predicting, via the one or more hardware processors, a set of possible requirement failures using the plurality of requirement features, the classified plurality of defects and the classified plurality of test cases (422); and
assessing, via the one or more hardware processors, by comparing the generated plurality of test case features with the generated plurality of requirement features to ensure predefined testing coverage of the software product (424).

2. The method of claim 1 further comprising establishing traceability of the plurality of test cases and the plurality of defects to the requirements based on the established functional structure.

3. The method of claim 1, wherein the established functional structure is rendered in one or more formats for further utilization in the software development.

4. The method of claim 1, wherein the plurality of requirement features comprises:
a score for incompleteness, a score for ambiguity, a score for volatility, a score for complexity, a number of impacted entities/ functions, a number of operations performed on and by the entity/ function, a centrality score of the entities/ functions, and a number of impacted old requirements.

5. The method of claim 1, wherein the plurality of defect features comprises:
a defect centrality score, a number of entities impacted by the plurality of defects, root cause, and impact magnitude score, a defect severity score, and a defect priority score.

6. The method of claim 1, wherein the plurality of test case features comprises:
a test case centrality score, a number of entities acted on, a number of operations performed, a number of steps in the test case, a priority of the test case, and a number of execution failures.

7. The method of claim 1, wherein the set of possible requirement failures are classified into two or more classes of requirements based on their probability of failure.

8. A system (100) for assessing and improving quality of requirements through predicting requirement related failures in software development, the system comprises:
an input/output interface (104) for obtaining a plurality of input data, wherein the plurality of input data is stored in a database, wherein the plurality of input data comprises one or more of:
the requirements across all releases of the software,
a domain knowledge acquired from the application or the software,
a plurality of test cases, wherein the plurality of test cases are generated for the requirements, and
a plurality of defects that occurred during the running of the plurality of test cases, and a plurality of incidents that occurred during production use;
one or more hardware processors (108);
a memory (110) in communication with the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the memory (110), to:
preprocess the plurality of input data, wherein the preprocessing results in the generation of functional features;
establish a traceability between the different plurality of input data;
establish a functional structure using a plurality of dependency trees, a plurality of maximum spanning graphs and a plurality of natural language processing (NLP) techniques, wherein the functional structure along with the established traceability outlines associations and inter-dependencies between the extracted functional features;
perform functional impact assessment of the requirements by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of requirement features;
perform functional impact assessment of the plurality of defects by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of defect features;
perform functional impact assessment of the plurality of test cases by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of test case features;
apply a classification model generated using the plurality of defect features, to classify the plurality of defects into two or more classes of defects based on their impact using neural network techniques;
apply a classification model generated using the plurality of test case features to classify the plurality of test cases into two or more classes of test cases based on their impact using the neural network techniques;
identify a set of business critical requirements using a classification model generated using the plurality of requirement features, the plurality of defect features and the plurality of test case features;
predict a set of possible requirement failures using the plurality of requirement features, the classified plurality of defects and the classified plurality of test cases; and
assess by comparing the generated plurality of test case features with the generated plurality of requirement features to ensure predefined testing coverage of the software product.
Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
METHOD AND SYSTEM FOR ASSESSING QUALITY OF REQUIREMENTS THROUGH PREDICTION OF FAILURES

Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to field of software development, and, more particularly, to a method and a system for assessing and improving quality of requirements through predicting requirement related failures in the software development.

BACKGROUND

With the adoption of DevOps and continuous integration/continuous delivery models, new versions/releases occur as fast as they are written. Identifying and flagging real-time issues becomes a major challenge as too much is crunched into too little time. Defects in products, software and services that remain undetected manifest into production disasters, leading to loss of revenue and reputation, and are sometimes detrimental to business.
Requirements that are not well documented (inconsistent, incomplete, ambiguous, complex and volatile) lead to a major chunk of requirement-related defects being found later in the project life cycle. Articulating dependencies, the span of direct and indirect impact, and cross-references is very cumbersome, especially when the application is complex. This obscures the level to which impact analysis can be done effectively to convert the requirements into a functioning application based on impact and historical performance and problem areas.
Today, this level of analysis and representation of requirements is not done because of the effort and complexity involved in the activity. One needs to scan through all the documented information to be able to do this. Hence, project teams rely on subject matter experts' (SME) knowledge to perform the impact analysis, which is also done very superficially. A more agile and intelligent way of assurance needs to be adopted.
Current methods do not look at the contents of the requirements to understand and determine potential problems. These activities are typically done by SMEs based on functional knowledge. This area is not explored due to the complexity of the information and the forms in which the information is present. Most teams rely on the traceability that is established by SMEs in the life cycle management systems.
Quality of requirements, along with exhaustive impact analysis and assessment, has a high impact on the project outcome. Well-articulated requirements along with detailed impact analysis is a project battle that is half won. The impacts of poorly documented and assessed requirements are primarily two-fold: project impacts and business impacts. Poor requirements lead to poor design, inadequate specifications, poor solution integration and costly code re-work. This has a direct bearing on testing, as the requirements are directly translated into test cases. It affects test effectiveness and coverage and leads to defects/incidents post release, overall yielding low-quality solutions at very high cost.
Thus, requirement gathering is a key phase in software development, and yet, till date, poor quality continues to hamper the success of many projects, negatively impacting budgets, timelines and the business as a whole. Regardless of the methodology followed, bringing into perspective a) the context in which the business operates, b) an understanding of the business problem, c) the gaps to be bridged, and d) insights leveraged from historical data will help deliver quality requirements and hence better value to the organization.

SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for assessing and improving quality of requirements through predicting requirement related failures in software development is provided. The system comprises an input/output interface, one or more hardware processors and a memory. The input/output interface obtains a plurality of input data, wherein the plurality of input data is stored in a database, wherein the plurality of input data comprises one or more of: the requirements across all releases of the software, a domain knowledge acquired from the application or the software, a plurality of test cases, wherein the plurality of test cases are generated for the requirements, and a plurality of defects that occurred during the running of the plurality of test cases and a plurality of incidents that occurred during production use. The memory is in communication with the one or more hardware processors. The one or more hardware processors are configured to execute programmed instructions stored in the memory, to: preprocess the plurality of input data, wherein the preprocessing results in the generation of functional features; establish a traceability between the different plurality of input data; establish a functional structure using a plurality of dependency trees, a plurality of maximum spanning graphs and a plurality of natural language processing (NLP) techniques, wherein the functional structure along with the established traceability outlines associations and inter-dependencies between the extracted functional features; perform functional impact assessment of the requirements by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of requirement features; perform functional impact assessment of the plurality of defects by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of defect features; perform functional impact assessment of the plurality of test cases by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of test case features; apply a classification model generated using the plurality of defect features, to classify the plurality of defects into two or more classes of defects based on their impact using neural network techniques; apply a classification model generated using the plurality of test case features to classify the plurality of test cases into two or more classes of test cases based on their impact using the neural network techniques; identify a set of business critical requirements using a classification model generated using the plurality of requirement features, the plurality of defect features and the plurality of test case features; predict a set of possible requirement failures using the plurality of requirement features, the classified plurality of defects and the classified plurality of test cases; and assess by comparing the generated plurality of test case features with the generated plurality of requirement features to ensure predefined testing coverage of the software product.
In another aspect, a method for assessing and improving quality of requirements through predicting requirement related failures in software development is provided. Initially, a plurality of input data is obtained, wherein the plurality of input data is stored in a database, wherein the plurality of input data comprises one or more of: the requirements across all releases of the software, a domain knowledge acquired from the application or the software, a plurality of test cases, wherein the plurality of test cases are generated for the requirements, and a plurality of defects that occurred during the running of the plurality of test cases and a plurality of incidents that occurred during production use. Further, the plurality of input data is preprocessed, wherein the preprocessing results in the generation of functional features. Later, a traceability is established between the different plurality of input data. Further, a functional structure is established using a plurality of dependency trees, a plurality of maximum spanning graphs and a plurality of natural language processing (NLP) techniques, wherein the functional structure along with the established traceability outlines associations and inter-dependencies between the extracted functional features. In the next step, functional impact assessment of the requirements is performed by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of requirement features. Similarly, functional impact assessment of the plurality of defects is performed by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of defect features. A functional impact assessment of the plurality of test cases is performed by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs, wherein the assessment results in generation of a plurality of test case features. Further, a classification model generated using the plurality of defect features is applied to classify the plurality of defects into two or more classes of defects based on their impact using neural network techniques. Further, a classification model generated using the plurality of test case features is applied to classify the plurality of test cases into two or more classes of test cases based on their impact using the neural network techniques. In the next step, a set of business critical requirements is identified using a classification model generated using the plurality of requirement features, the plurality of defect features and the plurality of test case features. Further, a set of possible requirement failures is predicted using the plurality of requirement features, the classified plurality of defects and the classified plurality of test cases. And finally, the generated plurality of test case features is assessed by comparing with the generated plurality of requirement features to ensure predefined testing coverage of the software product.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates a network diagram of a system for assessing and improving quality of requirements through predicting requirement related failures in software development according to some embodiments of the present disclosure.
FIG. 2 illustrates a schematic diagram of the system of FIG. 1 for assessing and improving quality of requirements through predicting requirement related failures in software development according to some embodiments of the present disclosure.
FIG. 3 shows a graphical representation of the assessment of requirements and their evaluation for complexity and volatility, according to some embodiments of the present disclosure.
FIGS. 4A-4B illustrate a flowchart of a method for assessing and improving quality of requirements through predicting requirement related failures in software development, in accordance with some embodiments of the present disclosure.
FIG. 5 shows classification of a plurality of defects based on impact according to an embodiment of the present disclosure.
FIG. 6 shows classification of a plurality of test cases based on impact according to an embodiment of the present disclosure.
FIG. 7 shows classification of a plurality of requirements based on business impact according to an embodiment of the present disclosure.
FIG. 8 shows classification of a plurality of requirements based on the probability of failure according to an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 8, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates a network diagram and FIG. 2 is a schematic diagram of a system 100 for assessing and improving quality of requirements through predicting requirement related failures in software development, in accordance with an example embodiment. Although the present disclosure is explained considering that the system 100 is implemented on a server, it may also be present elsewhere, such as on a local machine. It may be understood that the system 100 comprises one or more computing devices 102, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. It will be understood that the system 100 may be accessed through one or more input/output interfaces 104-1, 104-2... 104-N, collectively referred to as I/O interface 104. Examples of the I/O interface 104 may include, but are not limited to, a user interface, a portable computer, a personal digital assistant, a handheld device, a smartphone, a tablet computer, a workstation and the like. The I/O interfaces 104 are communicatively coupled to the system 100 through a network 106.
In an embodiment, the network 106 may be a wireless or a wired network, or a combination thereof. In an example, the network 106 can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 106 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices. The network devices within the network 106 may interact with the system 100 through communication links.
The system 100 may be implemented in a workstation, a mainframe computer, a server, and a network server. In an embodiment, the computing device 102 further comprises one or more hardware processors 108, one or more memories 110, hereinafter referred to as the memory 110, and a data repository 112, for example, a repository 112. The memory 110 is in communication with the one or more hardware processors 108, wherein the one or more hardware processors 108 are configured to execute programmed instructions stored in the memory 110, to perform various functions as explained in the later part of the disclosure. The repository 112 may store data processed, received, and generated by the system 100.
The system 100 supports various connectivity options such as BLUETOOTH®, USB, ZigBee and other cellular services. The network environment enables connection of various components of the system 100 using any communication link including Internet, WAN, MAN, and so on. In an exemplary embodiment, the system 100 is implemented to operate as a stand-alone device. In another embodiment, the system 100 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 100 are described further in detail.
According to an embodiment of the disclosure, the system 100 is configured to assess, evaluate, and predict software development requirement-related failures. The method entails a holistic approach of assessing and evaluating requirements by bringing into perspective a) the functional depth of the requirement's impacts, b) feedback from testing (test cases and defects) on those impacts, and c) a historical assessment of system behavior and its impacts. The system 100 employs natural language processing, dependency trees and maximum spanning graphs to access the requirements, establish the functional context and extract their characteristics; in addition, it leverages machine-learning algorithms, including but not limited to neural networks, to perform historical data analysis to classify and predict possible areas of failures.
According to an embodiment of the disclosure, the system 100 is configured to perform requirements assurance to assess the impact of requirements and predict areas of failures. In one example, the system 100 accesses, reads and stores requirements across all releases and the relevant domain/application/product related documentation. The system 100 further applies Natural Language Processing (NLP) techniques such as tokenization and parts-of-speech tagging to perform an initial validation of requirements to ensure successful processing, which includes validating the same for ambiguity and completeness.
The system 100 further performs an assessment of the requirements and evaluates them for complexity and volatility as shown in FIG. 3. In a four-quadrant approach, plotting the validation parameters against the assessment parameters indicates a high-risk zone for a requirement falling in quadrant 1, medium risk in quadrants 2 and 3, and low risk in quadrant 4. The system further applies dependency trees and maximum spanning graphs over the underlying natural language processing to extract a functional assessment/evaluation of
a) The number of impacted entities
b) The number of operations performed on and by the entity
c) The centrality score of the entities in question and
d) The number of impacted old requirements.
According to an embodiment of the disclosure, in operation, a flowchart 400 for assessing and improving quality of requirements through predicting requirement related failures in software development is shown in FIG. 4A-4B. Initially, at step 402, a plurality of input data is obtained, wherein the plurality of input data is stored in the data repository 112. The plurality of input data comprises one or more of: the requirements across all releases of the software, a domain knowledge acquired from the application or the software, a plurality of test cases which are generated for the requirements, and a plurality of defects occurred during the running of the plurality of test cases and a plurality of incidents occurred during production use.
Further at step 404 the plurality of input data is preprocessed. The preprocessing results in the generation of functional features. At step 406, a traceability is established between the different plurality of input data.
At step 408, a functional structure is established using a plurality of dependency trees, a plurality of maximum spanning graphs and a plurality of natural language processing (NLP) techniques. The functional structure along with the established traceability outlines associations and inter-dependencies between the extracted functional features;
At step 410, functional impact assessment of the requirements is performed by applying the plurality of natural language processing techniques along with dependency tree and maximum spanning graphs. The assessment results in generation of a plurality of requirement features.
The plurality of requirement features comprises, but is not limited to, a requirement validation score, a requirement assessment score, an impacted entities score, an impacted operations score, a number of impacted old requirements, an impacted severity of functional features, a business criticality and requirement priority score, a score for incompleteness, a score for ambiguity, a score for volatility, and a score for complexity. It should be appreciated, though, that a person skilled in the art may calculate any other requirement features.
According to an embodiment of the disclosure, the impacted entities score (IE) is calculated as follows:
In case of only one function/entity,
Impacted Entities Score (IE) = V+E ………… (1)
where V is a node (entity / function) and E are the edges from the node, or
In case of more than one entity,
IE = (V0 + E0) + ((w + (n-1)/n) * x)^y * (Vn + En) …………. (2)
where V0 is the primary node, V1 is the secondary node, and Vn is the nth layer node to node V0, and ((w + (n-1)/n) * x)^y is the weightage given to the subsequent node relative to the primary node to calculate the reduced impact.
According to an embodiment of the disclosure, the impacted operations Score (IO) is,
IO = E ……………… (3)
where E are the edges (verbs) from the node (function) if there is only 1 function/entity, or
IO = E0 + ((w + (n-1)/n) * x)^y * En …………….. (4)
where E0 is the primary edge, E1 is the secondary edge, and En is the nth layer edge to E0, and ((w + (n-1)/n) * x)^y is the weightage given to the subsequent edges relative to the primary edge to calculate the reduced impact.
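By way of a non-limiting illustration, equations (1) to (4) may be realized as the following Python sketch, wherein w, x and y are assumed to be pre-configured weightage constants and each layer of the graph is supplied as node and edge counts (the function names and sample values are illustrative only, not fixed by this disclosure):

def impacted_entities_score(layers, w, x, y):
    # layers: list of (V_n, E_n) counts, with layer 0 being the primary node
    v0, e0 = layers[0]
    score = v0 + e0  # equation (1): single function/entity case
    for n, (vn, en) in enumerate(layers[1:], start=1):
        weightage = ((w + (n - 1) / n) * x) ** y  # reduced impact for deeper layers
        score += weightage * (vn + en)  # equation (2)
    return score

def impacted_operations_score(edges, w, x, y):
    # edges: list of edge counts E_n per layer, with E0 being the primary edge
    score = edges[0]  # equation (3): single function/entity case
    for n, en in enumerate(edges[1:], start=1):
        weightage = ((w + (n - 1) / n) * x) ** y
        score += weightage * en  # equation (4)
    return score

# Example: one primary node with 3 edges and one second-layer node with 2 edges
print(impacted_entities_score([(1, 3), (1, 2)], w=0.5, x=0.8, y=2))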
According to an embodiment of the disclosure, the requirement severity impact score (RS), or the centrality score, is calculated based on the importance of a function in the requirement within an application/product. It is currently calculated as the number of shortest paths that pass through a given node:
g(v) = Σ_(s≠v≠t) σ_st(v) / σ_st ………………. (5)
where v is the given node, σ_st is the total number of shortest paths from node s to node t, and σ_st(v) is the number of those shortest paths that pass through v.
According to an embodiment of the disclosure, the number of impacted older requirements (IR) is obtained by extracting entity vectors for both the older requirements (vector A) and the newer requirements (vector B) and computing the cosine similarity between them:
sim(A, B) = cos(θ) = (A·B) / (|A||B|) …………… (6)
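For illustration, equation (6) may be computed with a short Python/numpy sketch, assuming the entity vectors have already been extracted as numeric arrays (the sample vectors are hypothetical):

import numpy as np

def cosine_sim(a, b):
    # Equation (6): sim(A, B) = cos(theta) = (A . B) / (|A| |B|)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical entity-count vectors for an older and a newer requirement
print(cosine_sim([1, 0, 2, 1], [1, 1, 2, 0]))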
At step 412, functional impact assessment of the plurality of defects is performed by applying the plurality of natural language processing algorithms along with dependency tree and maximum spanning graphs. The assessment results in generation of a plurality of defect features.
According to an embodiment of the disclosure, the system 100 is configured to access, read and store defects across all releases along with requirements and the relevant domain/application/product related documentation. The system 100 is further configured to apply NLP to extract the plurality of defect features.
The plurality of defect features comprises, but is not limited to, a defect functional feature severity score, an impacted functional feature score, a root cause and magnitude score, a defect severity score, a defect priority score, a defect centrality score (to assess functional magnitude and impact), and a number of entities impacted by the plurality of defects. It should be appreciated, though, that a person skilled in the art may calculate any other defect features.
According to an embodiment of the disclosure, the defect severity (DS) is defined as the severity of the functional feature impacted by the defects and is calculated as the number of shortest paths that pass through a given node/functional feature:
g(v) = Σ_(s≠v≠t) σ_st(v) / σ_st ……………… (7)
where v is the given node/functional feature, σ_st is the total number of shortest paths from node s to node t, and σ_st(v) is the number of those shortest paths that pass through v.
According to an embodiment of the disclosure, the number of impacted requirements (DIR) is obtained by extracting entity vectors for both the older requirements (vector A) and the defects (vector B) and computing the cosine similarity between the vectors:
sim(A, B) = cos(θ) = (A·B) / (|A||B|) ……………… (8)
According to an embodiment of the disclosure, the defect root cause score (RCA) and magnitude score are derived as follows: the root causes of the defects are systematically arrived at using a bag of words (BOW) model, and the RCA is converted into one-hot encodings; a magnitude, which is pre-determined, is mapped and used for normalization. The use of any other method is also well within the scope of this disclosure.
Further, at step 414, functional impact assessment of the plurality of test cases is performed by applying the plurality of natural language processing techniques along with dependency tree and maximum spanning graphs. The assessment results in generation of a plurality of test case features.
According to an embodiment of the disclosure, the system 100 is configured to access, read and store test cases across all releases along with requirements and the relevant domain/application/product related documentation. The method further comprises applying NLP along with dependency trees and maximum spanning graphs to extract the plurality of test case features.
The plurality of test case features comprises, but is not limited to, a test case functional feature severity, a number of execution failures, an impacted functional feature score, an impacted operations score, a number of steps in the test case, a test case priority score, a test case centrality score, a number of entities acted on, and a number of operations performed. It should be appreciated, though, that a person skilled in the art may calculate any other test case features.
According to an embodiment of the disclosure, the impacted entities score (TIE), if there is only one function/entity, is
TIE = V+E ………………….. (9)
where V is the node (entity/function) and E are the edges from the node, or
if there is more than one function/entity,
TIE = (V0 + E0) + ((w + (n-1)/n) * x)^y * (Vn + En) ………………….. (10)
where V0 is the primary node, V1 is the secondary node, and Vn is the nth layer node to node V0, and ((w + (n-1)/n) * x)^y is the weightage given to the subsequent node relative to the primary node to calculate the reduced impact.
According to an embodiment of the disclosure, the impacted operations score (TIO) is E,
where E are the edges (verbs) from the node (function) if there is only one function/entity, or
TIO = E0 + ((w + (n-1)/n) * x)^y * En …………… (11)
where E0 is the primary edge, E1 is the secondary edge, and En is the nth layer edge to E0, and ((w + (n-1)/n) * x)^y is the weightage given to the subsequent edges relative to the primary edge to calculate the reduced impact.
According to an embodiment of the disclosure, the test case severity score (TCS) is defined as the severity of the function impacted by the test case and is calculated as the number of shortest paths that pass through the given node/functional feature:
g(v) = Σ_(s≠v≠t) σ_st(v) / σ_st ……………………….. (12)
where v is the given node/functional feature, σ_st is the total number of shortest paths from node s to node t, and σ_st(v) is the number of those shortest paths that pass through v.
According to an embodiment of the disclosure, the test case number of steps (TS) is a direct score based on the number of steps in the test case. The features extracted from the above-said data types are derived for the historical data (defects, requirements, test cases) and for the latest set of requirements, and are fed into the neural network. The neural network is then trained to understand the correlation between the various characteristics of the data types determined above; the training is performed using a large quantity of historical data and the failures associated with it. Through the training, the system learns the weights between the layers, and the same are updated as feedback through the multiple epochs performed by the neural network.
Further, at step 416, a classification model is applied to classify the plurality of defects into two or more classes of defects based on their impact using neural network techniques. In an example, the plurality of defects are classified as “critical impact”, “high impact”, “medium impact” and “low impact” using neural network techniques, as shown in FIG. 5. It should be appreciated, though, that the classification is not limited to only four classes. In another example, the classification model can be designed to classify the defects into any other number (two or more) of classes. The classification model is generated using the plurality of defect features. The system 100 is configured to employ a neural network to classify the defects based on the above-captured plurality of defect features, leveraging various classification models like support vector machine (SVM), random forest, logistic regression and neural networks, with the classifier with the highest accuracy chosen for processing. The method also establishes traceability of the plurality of defects to the requirements.
Similarly, at step 418, another classification model is applied to classify the plurality of test cases into two or more classes of test cases based on their impact using neural network techniques. In an example, the plurality of test cases are classified as “critical impact”, “high impact”, “medium impact” and “low impact” using the neural network techniques, as shown in FIG. 6. It should be appreciated, though, that the classification is not limited to only four classes. In another example, the classification model can be designed to classify the test cases into any other number (two or more) of classes. In this case, the classification model is generated using the plurality of test case features. The system 100 is configured to employ a neural network to classify the test cases based on the above-captured plurality of test case features, leveraging various classification models like SVM, random forest, logistic regression and neural networks, with the classifier with the highest accuracy chosen for processing. The method also establishes traceability of the plurality of test cases to the requirements.
Further, at step 420, a set of business critical requirements is identified using a classification model generated using the plurality of requirement features, the plurality of defect features and the plurality of test case features. The requirements can be classified as having “critical business impact”, “high business impact”, “medium business impact” or “low business impact”, as shown in FIG. 7. It should be appreciated, though, that the classification of requirements is not limited to only four classes. In another example, the classification model can be designed to classify the requirements into any other number (two or more) of classes, based on the needs of the customer. According to an embodiment of the disclosure, the system 100 also leverages classification models like SVM, random forest, logistic regression and neural networks, with the classifier with the highest accuracy, to classify requirements as critical, high impact, medium impact and low impact based on business criticality.
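For illustration, the model selection described for steps 416, 418 and 420 may be sketched with scikit-learn as below; the feature matrix X and labels y are assumed to be built from the requirement, defect and test case feature scores, and the function name is illustrative only:

from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def best_classifier(X, y):
    # Train each candidate model named in the disclosure and keep the one
    # with the highest hold-out accuracy
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    candidates = {
        'svm': SVC(),
        'random forest': RandomForestClassifier(),
        'logistic regression': LogisticRegression(max_iter=1000),
        'neural network': MLPClassifier(max_iter=1000),
    }
    accuracies = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        accuracies[name] = accuracy_score(y_te, model.predict(X_te))
    best = max(accuracies, key=accuracies.get)
    return candidates[best], best, accuracies[best]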
Further, at step 422, a set of possible requirement failures is predicted using the plurality of requirement features, the classified plurality of defects and the classified plurality of test cases. The set of possible requirement failures is classified among requirements with “highest probability of failure”, “medium probability of failure”, and “low probability of failure”, as shown in FIG. 8. It should be appreciated, though, that the classification of requirement failures is not limited to only these classes. In another example, the classification model can be designed to classify the requirement failures into any other number (two or more) of classes, based on the needs of the customer. According to an embodiment of the disclosure, the system 100 is configured to perform requirements assurance to assess the impact of requirements and predict areas of failures. In one example, the system 100 is configured to take the outcome of each individual neural network (generated from the requirements, the plurality of defects and the plurality of test cases) and, based on all these perspectives listed as features of each individual requirement, predict the possible requirement failures based on the historical data assessment.
And finally, at step 424, the test cases are assessed by comparing the generated plurality of test case features with the generated plurality of requirement features to ensure predefined testing coverage of the software product. The predefined testing coverage is chosen in such a way that adequate testing of the software product is ensured.
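The disclosure does not fix a particular coverage metric; one simple, hypothetical reading of step 424 compares the entities appearing in the test case features against those appearing in the requirement features, as in the sketch below (the 0.9 threshold stands in for the predefined testing coverage):

def testing_coverage(requirement_entities, test_case_entities, threshold=0.9):
    # Fraction of entities impacted by the requirements that are exercised
    # by at least one test case; 'threshold' is the predefined coverage level
    required = set(requirement_entities)
    covered = required & set(test_case_entities)
    coverage = len(covered) / len(required) if required else 1.0
    return coverage, coverage >= threshold

print(testing_coverage({'module-a', 'module-b', 'file3'}, {'module-a', 'file3'}))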
According to an embodiment of the disclosure, the system 100 can also be explained with the help of various examples as follows:
Assessment of the number of impacted entities and impacted operations
As mentioned above, the system 100 reads the requirements input and natural language processing is performed to identify nouns (entities) in the requirements. The system also reads the associated knowledge documentation related to the requirements and dependency trees and maximum spanning graphs are generated for the same. For example: If the requirements state "Module-F needs to now send the processed information from File3, File4 and File5 to Module-A and Module-B as well" and if the related knowledge documentation reads "File1 is taken as input into Module-A. Module-A processes the file and sends-information to Module-B, Module-C and Module-D. Module-C takes as input Module-D output and create a File2 which is sent to Module-E. Module-E processes the file and send information to Module-A, Module-F which processes 3 files File3 , FIle4 and File5 , sends the processed data to Module-C".
Natural language processing is performed to preprocess the input data (requirements and knowledge base). (a) Stopwords removal: stopwords removal is performed by importing a module from NLTK (natural language tool kit), which is "from nltk.corpus import stopwords", and using the function "stop_words = set(stopwords.words('english'))". (b) Tokenization: tokenization is performed by importing a module from NLTK, which is "from nltk import word_tokenize", to tokenize or split the input into a list of tokens. Thus, the output after applying tokenization and removing stopwords will be:
['Module-F', 'needs', 'send', 'processed', 'information', 'File3', 'File4', 'File5', 'Module-A', 'Module-B', 'well']
Further, dependency trees and maximum spanning graphs can also be generated for the knowledge documentation using the module 'from nltk import Tree' present in NLTK (natural language tool kit) and a function such as the following, where the node argument is a token of a dependency parse (for example, a spaCy token exposing n_lefts, n_rights, orth_ and children):
def to_nltk_tree(node):
    # Recursively convert a dependency-parse node into an NLTK Tree
    if node.n_lefts + node.n_rights > 0:
        return Tree(node.orth_, [to_nltk_tree(child) for child in node.children])
    else:
        return node.orth_
The generated tree follows the depth-first, in-order tree traversal (Left-Root-Right).
Further, maximum spanning graph is also generated for the entities using the function 'add_edge()' present in the module 'networkx' which is imported as:
import networkx as nx
G=nx.Graph()
As the system executes the 'add_edge()' function, it also increments the counter for the entity for which an edge is being added and stores the value in the database. This count indicates the number of entities impacted by the entity in question.
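A minimal sketch of this step is shown below; storing counts in the database is simplified to an in-memory dictionary, the edges are hypothetical, and counting both endpoints of each edge is one plausible reading of the description:

import networkx as nx

G = nx.Graph()
impact_count = {}  # stands in for the database storing per-entity counts

def add_edge_with_count(u, v):
    # Add the edge to the maximum spanning graph and increment the impact
    # counter of the entities for which the edge is being added
    G.add_edge(u, v)
    for entity in (u, v):
        impact_count[entity] = impact_count.get(entity, 0) + 1

# Hypothetical edges extracted from the knowledge documentation example
for u, v in [('module-a', 'module-b'), ('module-a', 'module-c'),
             ('module-f', 'file3'), ('module-f', 'module-c')]:
    add_edge_with_count(u, v)
print(impact_count)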
The system then extracts the entities in question from the requirements and traverses the knowledge documentation graph till it reaches the entity for which the impact assessment is being performed; once identified, it identifies the adjacent entities (since there is a tree structure and the branches of the tree link associated entities) and marks them as impacted, along with incrementing an impact counter. All the entities are nouns and hence are identified leveraging natural language processing.
For the above-stated example: initially, nouns (entities) are extracted from the requirements using a method from natural language processing, POS (parts of speech) tagging, which is present in NLTK (natural language tool kit), imported as 'from nltk import pos_tag' and performed as follows:
is_noun = lambda pos: pos[:2] == 'NN'
nouns = [word for (word, pos) in pos_tag(word_tokens) if is_noun(pos)]
The nouns (entities) generated from the requirement are as follows:
['information', 'File3', 'File4', 'File5', 'Module-A', 'Module-B']
The above noun array is passed to the below function one noun at a time to calculate the impacted-entities count. The impact counter is implemented using the function below, where 'n' in the function definition refers to the noun that is passed as a parameter. The function returns a dictionary (a key-value pair structure) where the keys are nouns (entities) and the values are the impact counts from the database, where the impact count was stored when the maximum spanning graph was being generated.
impact_dict = {}  # entity -> impact count, populated while the maximum spanning graph is generated

def frequency(n):
    # Increment (or initialize) the impact counter for the noun (entity) n
    if n in impact_dict:
        impact_dict[n] += 1
    else:
        impact_dict[n] = 1
    return impact_dict
The above would establish the number of entities that are impacted by the requirement.
Similarly, after extracting the entity in question from the requirements, the system traverses the knowledge documentation graph till it reaches the entity for which the impact assessment is being performed; once identified, it identifies the number of branches connecting to actions or verbs, which indicate the number of operations the entity performs and the number of operations performed on it, along with incrementing an impact counter. All the actions or verbs are identified leveraging NLP.
Assessment of the criticality of the impacted entities
Once the system extracts and identifies the impacted entities, leveraging maximum spanning graphs, the system also establishes the criticality of each impacted entity in the entire eco-system. This is established by computing the centrality score of the entity, which is a concept from graph theory. If there are multiple impacted entities, the average of the centrality scores is computed to assess the criticality of impacts.
The betweenness centrality algorithm, which is present in the module 'networkx', is applied on the knowledge documentation graph, which results in a centrality score for each entity. The result is stored in a dictionary (key-value pairs) where the keys are nouns (entities) and the values are centrality scores. The system then iterates through the nouns (entities) generated from the requirements and the above-generated dictionary, and if any noun (entity) from the requirement is present as a key in the dictionary, the corresponding value (centrality score) associated with the noun is extracted.
Sample output: betweenness centrality on the knowledge documentation:
{'file1': 0.0, 'module-a': 0.6666666666666667, 'module-b': 0.0, 'module-c': 0.08888888888888889, 'module-d': 0.11111111111111112, 'file2': 0.011111111111111112, 'module-e': 0.06666666666666667, 'module-f': 0.4222222222222222, 'file3': 0.0, 'file4': 0.03333333333333333, 'file5': 0.0}
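A sample output of this kind can be reproduced (up to the exact shape of the underlying graph, which is illustrative here) with the 'betweenness_centrality' function of 'networkx', averaging the scores over the impacted entities as described above:

import networkx as nx

# Illustrative knowledge-documentation graph; not the exact graph
# behind the sample output above
G = nx.Graph([('file1', 'module-a'), ('module-a', 'module-b'),
              ('module-a', 'module-c'), ('module-c', 'module-d'),
              ('module-c', 'file2'), ('file2', 'module-e'),
              ('module-e', 'module-f'), ('module-f', 'file3'),
              ('module-f', 'file4'), ('module-f', 'file5')])

centrality = nx.betweenness_centrality(G)  # entity -> centrality score

# Average centrality of the entities impacted by a requirement
impacted = ['module-a', 'module-b', 'file3']
print(sum(centrality[e] for e in impacted) / len(impacted))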
Assessment of impacted older requirements
Once the system extracts and identifies the impacted entities, the system traverses through the older requirements, and if the entity in question is listed in a requirement, the impact counter is incremented, indicating an impacted older requirement. The system reads the older requirements into the database. Entities are extracted from the older requirements using POS (parts of speech) tagging present in NLTK (natural language tool kit). The system then iterates through the entities of the older requirements, and if any entity from an older requirement is present in the current requirement, the impact counter is incremented, indicating an impacted older requirement corresponding to the current requirement.
Defect root cause analysis (RCA) and Magnitude
Data pre-processing is done to map custom root causes to the standard set of root causes in the back-end. Each of the standard root causes is assigned a magnitude score to indicate the level of impact/re-work. The dictionary is shown below:
{'Requirements': 1, 'Design': 2, 'Code Errors': 3, 'Application': 4, 'Batch Jobs': 5, 'Database/Backend': 6, 'Configuration': 7, 'Performance': 8, 'Infrastructure': 9, 'Incorrect testing': 10, 'Data': 11, 'User Error': 12, 'Invalid Defect': 13, 'Others': 14}
The system then reads all the defect data and converts each root cause into one of the standard set of root causes. The system then applies one-hot encoding, converts each of the RCAs into a feature and, based on that defect's RCA, assigns the relevant score (indicating the magnitude of the problem) to the appropriate root cause.
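For illustration, this mapping may be sketched as below; the custom-to-standard mapping is hypothetical, and the one-hot position of the standardized root cause carries its magnitude score, per the description above:

MAGNITUDE = {'Requirements': 1, 'Design': 2, 'Code Errors': 3, 'Application': 4,
             'Batch Jobs': 5, 'Database/Backend': 6, 'Configuration': 7,
             'Performance': 8, 'Infrastructure': 9, 'Incorrect testing': 10,
             'Data': 11, 'User Error': 12, 'Invalid Defect': 13, 'Others': 14}

# Hypothetical mapping of project-specific root causes to the standard set
CUSTOM_TO_STANDARD = {'coding bug': 'Code Errors', 'db issue': 'Database/Backend'}

def rca_features(custom_rca):
    # One-hot vector over the standard root causes, with the hot position
    # carrying the pre-determined magnitude score
    standard = CUSTOM_TO_STANDARD.get(custom_rca, 'Others')
    return [MAGNITUDE[rc] if rc == standard else 0 for rc in MAGNITUDE]

print(rca_features('db issue'))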
Number of impacted requirements
Natural language processing is performed to preprocess the defect data to extract the entities that are impacted by the defect. (a) Stopwords removal: stopwords removal is performed by importing a module from NLTK (natural language tool kit), which is "from nltk.corpus import stopwords", and using the function "stop_words = set(stopwords.words('english'))". (b) Tokenization: tokenization is performed by importing a module from NLTK, which is "from nltk import word_tokenize", to tokenize or split the input into a list of tokens. Similarly, the same process is implemented on the requirements to understand the entities that are impacted by the requirements.
The system then takes each defect and, based on the entity impacted by that defect, iterates through every requirement to analyze whether the same impacted entity exists and, if so, increments a counter which is finally assigned to the defect.
Assessment of the number of impacted entities and impacted operations
The system 100 reads the test cases as input into the system. Natural language processing is performed to identify noun phrases (entities) in the test cases. The system also reads the associated knowledge documentation related to the test cases, and dependency trees and maximum spanning graphs are generated for the same. For example, assume the test cases to be as follows:
a. Send a batch file to the manipulator module
b. Validate the actions mentioned in the batch file
c. Generate a network file from nem
d. Delete any generated network file
e. Validate the input on the manipulator
f. Load a network filename with extension as .specif
g. Validate the input on the manipulator
h. Validate if the generator module is called by the input converter module
i. Load a network filename with extension not .specif
j. Validate if the loader module is called by the input converter module
k. In the output run the task analyse the graph
l. In the output run the task convert graph into another format
m. In the output run the task run static simulation
n. In the output run the task analyse the graph and convert into another format
o. Perform all 3 tasks in the output
Natural language processing is performed to preprocess the input data. (a) Stopwords removal: stopwords removal is performed by importing a module from NLTK (natural language tool kit), which is "from nltk.corpus import stopwords", and using the function "stop_words = set(stopwords.words('english'))". (b) Tokenization: tokenization is performed by importing a module from NLTK, which is "from nltk import word_tokenize", to tokenize or split the input into a list of tokens. The output after applying tokenization and removing stopwords will be:
['Send', 'batch', 'file', 'manipulator', 'module']
['Validate', 'actions', 'mentioned', 'batch', 'file']
['Generate', 'network', 'file', 'nem']
['Validate', 'input', 'manipulator']
['Delete', 'generated', 'network', 'file']
['Validate', 'input', 'manipulator']
['Load', 'network', 'filename', 'extension', '.specif']
['Validate', 'generator', 'module', 'called', 'input', 'converter', 'module']
['Load', 'network', 'filename', 'extension', '.specif']
['Validate', 'loader', 'module', 'called', 'input', 'converter', 'module']
['In', 'output', 'run', 'task', 'analyse', 'graph']
['In', 'output', 'run', 'task', 'convert', 'graph', 'another', 'format']
['In', 'output', 'run', 'task', 'run', 'static', 'simulation']
['In', 'output', 'run', 'task', 'analyse', 'graph', 'convert', 'another', 'format']
['Perform', '3', 'tasks', 'output']
Initially, nouns (entities) and verbs (operations) are extracted from the test cases using a method from natural language processing, POS (parts of speech) tagging, which is present in NLTK (natural language tool kit), imported as 'from nltk import pos_tag' and performed as follows:
tagged = pos_tag(word_tokens)
[word for word, pos in tagged
 if pos in ('NN', 'VB', 'NNP', 'NNS', 'NNPS', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ')]
Maximum spanning graph is also generated for the entities using the function 'add_edge()' present in the module 'networkx' which is imported as:
import networkx as nx
G=nx.Graph()
The system then traverses the graph using a top-down approach till it reaches the entity for which the impact assessment is being performed; once identified, it identifies the adjacent entities and marks them as impacted, along with incrementing an impact counter. If there are multiple impacted entities, the average of the impact counts is computed to get the score for impacted entities. The output for the impacted entities score after taking the average of the impact counts is:
Nouns (entities):
[0.2, 0.3333333333333333, 0.4666666666666667, 0.5333333333333333, 0.6666666666666666, 0.7333333333333333, 1.0, 1.2666666666666666, 1.4666666666666666, 1.8, 1.9333333333333333, 2.2, 2.3333333333333335, 2.6, 2.6666666666666665]
Verbs (operations):
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06666666666666667, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.4, 0.4]
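A minimal sketch of the traversal described above, assuming a breadth-first (top-down) walk of the graph from a given root and a precomputed per-entity impact counter (both hypothetical names; the original does not give this code):

from collections import deque

def impacted_entity_score(G, root, target, impact_counts):
    # impact_counts: hypothetical dict of entity -> impact count.
    visited, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        if node == target:
            impacted = list(G.neighbors(node))  # adjacent entities
            if not impacted:
                return 0.0
            # Average the impact counts of the impacted entities.
            return sum(impact_counts.get(n, 0) for n in impacted) / len(impacted)
        for n in G.neighbors(node):
            if n not in visited:
                visited.add(n)
                queue.append(n)
    return 0.0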
Assessment of the number of steps in a test case
The system reads the test steps from the database, iterates through them, and computes the number of test steps for each test case. If the number of test steps is less than or equal to 8, the system assigns a complexity level of '1'; if it is greater than 8 and less than or equal to 15, it assigns a complexity level of '2'; otherwise, it assigns a complexity level of '3'.
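A minimal sketch of this rule, assuming the step count per test case is already available:

def complexity_level(num_steps):
    # Thresholds as described above: <=8 -> 1, 9-15 -> 2, >15 -> 3.
    if num_steps <= 8:
        return 1
    if num_steps <= 15:
        return 2
    return 3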
Assessment of centrality score
Once the system extracts and identifies the impacted entities from the test cases, leveraging maximum spanning graphs, it also establishes the criticality of each impacted entity in the entire eco-system. This is established by computing the centrality score of the entity, a concept from graph theory.
The betweenness centrality algorithm, available in the module 'networkx', is applied to the knowledge documentation graph, resulting in a centrality score for each entity. The result is stored in a dictionary (key-value pairs) where the keys are nouns (entities) and the values are centrality scores. The system then iterates through the nouns (entities) generated from the requirements and the dictionary above; if any noun (entity) from a requirement is present as a key in the dictionary, the corresponding value (centrality score) associated with that noun is extracted. If there are multiple impacted entities, the average of the centrality scores is computed to assess the criticality of impacts.
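A minimal sketch of this step, assuming the knowledge-documentation graph G built earlier and a list of requirement entities (hypothetical values shown):

import networkx as nx

centrality = nx.betweenness_centrality(G)       # {entity: centrality score}
requirement_nouns = ['manipulator', 'module']   # hypothetical requirement entities
scores = [centrality[n] for n in requirement_nouns if n in centrality]
avg_criticality = sum(scores) / len(scores) if scores else 0.0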
Assessment of Impacted Requirements
Noun phrases are extracted from the test cases and a cosine similarity algorithm is applied between each test case and all the requirements. This results in a similarity score for each requirement associated with that test case. The requirement with the maximum similarity score is retrieved, and the corresponding requirement number is marked against the test case.
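The original does not specify how the texts are vectorized before computing cosine similarity; the following minimal sketch assumes TF-IDF vectors via scikit-learn, with hypothetical requirement texts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [  # hypothetical requirement texts
    "The system shall accept batch files for the manipulator module",
    "The system shall generate and validate network files",
]
test_case = "Send a batch file to the manipulator module"

matrix = TfidfVectorizer().fit_transform(requirements + [test_case])
sims = cosine_similarity(matrix[-1], matrix[:-1])[0]  # test case vs. each requirement
best = sims.argmax()  # index of the requirement with maximum similarity
print(f"Test case maps to requirement #{best + 1} (score {sims[best]:.2f})")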
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of the present disclosure herein address unresolved problems that arise at the time of requirements gathering. The existing methods do not consider the context in which the business operates, do not develop an understanding of the business problem, do not identify the gaps to be bridged, and do not leverage historical data for insights. The embodiment thus provides the method and system for assessing and improving the quality of requirements through predicting requirement-related failures in software development.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

Documents

Application Documents

# Name Date
1 202021034012-CLAIMS [16-06-2022(online)].pdf 2022-06-16
2 202021034012-COMPLETE SPECIFICATION [16-06-2022(online)].pdf 2022-06-16
3 202021034012-FER_SER_REPLY [16-06-2022(online)].pdf 2022-06-16
4 202021034012-OTHERS [16-06-2022(online)].pdf 2022-06-16
5 202021034012-FER.pdf 2022-02-22
6 Abstract1.jpg 2021-10-19
7 202021034012-Proof of Right [03-02-2021(online)].pdf 2021-02-03
8 202021034012-FORM-26 [12-11-2020(online)].pdf 2020-11-12
9 202021034012-STATEMENT OF UNDERTAKING (FORM 3) [07-08-2020(online)].pdf 2020-08-07
10 202021034012-REQUEST FOR EXAMINATION (FORM-18) [07-08-2020(online)].pdf 2020-08-07
11 202021034012-FORM 18 [07-08-2020(online)].pdf 2020-08-07
12 202021034012-FORM 1 [07-08-2020(online)].pdf 2020-08-07
13 202021034012-FIGURE OF ABSTRACT [07-08-2020(online)].jpg 2020-08-07
14 202021034012-DRAWINGS [07-08-2020(online)].pdf 2020-08-07
15 202021034012-DECLARATION OF INVENTORSHIP (FORM 5) [07-08-2020(online)].pdf 2020-08-07
16 202021034012-COMPLETE SPECIFICATION [07-08-2020(online)].pdf 2020-08-07

Search Strategy

1 SearchStrategyE_21-02-2022.pdf