Abstract: Existing techniques for systematic intelligent change validation are completely manual and highly dependent on human intervention, which is prone to errors. The present disclosure provides a system and method which extracts a plurality of parameters from one or more change requests using one or more natural language processing (NLP) techniques. A change risk score and a change record gap analysis are obtained by comparing the extracted parameters with historical parameters. The change risk score is obtained by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score. The one or more change requests are automatically reviewed based on the calculated change risk score and the change record gap analysis. Further, an overall review outcome for each of the one or more change requests is determined based on the automatic review. Feedback and action items are further generated based on the review outcome. [To be published with FIG. 2]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
ENHANCING INFORMATION TECHNOLOGY (IT) CHANGE PROCESS EFFICIENCY WITH MACHINE LEARNING POWERED AUTOMATED CHANGE RECORD REVIEW
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001]
The disclosure herein generally relates to change request management, and, more particularly, to a system and method for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review.
BACKGROUND
[002]
Digital transformation has drastically changed the mindset of customers, where dependency on the IT (Information Technology) enterprise has grown manifold and has become the pillar for extending essential services, including healthcare, food supply, transportation, banking, and the like, to a larger number of people in need. In this scenario it is important to keep enterprise-wide IT systems live 24x7 with zero outages. Higher availability ensures timely and efficient delivery of products and services, helps launch satellites, and even saves lives. If industries are to continually create value for customers, they must be able to adapt to an ever-changing environment. “Change” being the primary source of outages in enterprises, controlling ill-formed changes and proactively improving the quality of change remains the key focus across the globe.
[003]
Having a structured approach to evaluate changes for underlying risks and issues is critical to avoid any catastrophic impact for the enterprise. One of the existing works explained the foundations of impact analysis and defined the term impact analysis as identifying the potential consequences of a change, or estimating what needs to be modified to accomplish a change. An impact is defined as a part determined to be affected, and therefore worthy of inspection. Traceability is the ability to determine what parts are related to what other parts according to specific relationships. Another existing work defined a side effect as an error or other undesirable behavior that occurs as a result of a modification. Another existing work defined stability as “the resistance to the potential ripple effect which a program would have when it is modified” ([Yau1980], p. 28). Ripple effect is the effect caused by making a small change to a system which affects many other parts of the system.
[004]
Change Management is a crucial process to keep the business upgraded while ensuring stability of ongoing systems. The total workflow of the change lifecycle involves inter-dependent decisions of different stakeholders. Reviewing change requests manually is a cumbersome activity, a matter of significant manual effort, and prone to potential human error. Over the years, many companies have focused on improving the efficiency of change management. Organizations recognize that better change processes can drive top-line benefits, and thus organizations are developing change management with an eye towards improving speed to market.
[005]
However, critical decision-making points in the change life cycle related to existing works are completely manual and highly dependent on human intervention, which is prone to errors. Lack of systematic intelligent change validation results in major production outages. There is a need for a tool which provides a systematic change record review based on multiple decisioning factors and historical data to avoid impact in the production environment.
SUMMARY
[006]
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review is provided. The method includes receiving, via one or more hardware processors, a continuous feed of one or more change requests from one or more sources, wherein the one or more change requests are associated with an organization; obtaining, via the one or more hardware processors, a plurality of historical parameters from one or more applications, wherein the plurality of historical parameters comprises historical data specific to (i) one or more changes pertaining to one or more components of the one or more applications and (ii) one or more impacts pertaining to the one or more components of the one or more applications; extracting, via the one or more hardware processors, a plurality of parameters from the received one or more change requests using one or more natural language processing (NLP) techniques; comparing, via the one or more hardware processors, the extracted plurality of parameters and the plurality of historical parameters to obtain at least one of a change risk score and a change record gap analysis by (i) calculating the change risk score of the one or more applications using a linear regression model by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests, wherein (a) the application instability score is calculated by dividing the total number of the one or more impacts, obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications; (b) the change proneness score is calculated for the one or more components pertaining to the one or more applications as a ratio of the one or more changes corresponding to the one or more components and the total number of the one or more changes pertaining to the one or more applications; (c) the changeability score is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications; (d) the impact score is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction, wherein the expected transaction is measured from historical transaction data derived from application performance data using a time series model; and (e) the likelihood score is calculated from the historical data specific to the one or more changes pertaining to the one or more components of the one or more applications as an empirical probability; and (ii) performing the change record gap analysis on the one or more change requests by evaluating a set of rules against the extracted plurality of parameters, wherein the set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters; reviewing, via the one or more hardware processors, each of the one or more change requests automatically based on the calculated change risk score and the change record gap analysis; determining, via the one or more hardware processors, an overall review outcome for each of the one or more change requests based on the automatic review; generating, by the one or more hardware processors, a feedback and one or more action items based on the overall review outcome; and refining, by the one or more hardware processors, the set of rules continuously based on the feedback.
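As a concrete illustration of the weighted combination described above, the sketch below combines the five sub-scores into a single change risk score. The sub-score values and the equal weights are hypothetical assumptions; per the disclosure, the weightages would be derived dynamically (e.g., via a linear regression model over historical change results) rather than fixed as shown here.

```python
# Hypothetical sketch of the change risk score as a weighted combination of
# the five sub-scores. All values and the equal weights are assumptions for
# illustration; the disclosure derives the weightages dynamically via a
# linear regression model over historical change results.

def change_risk_score(scores: dict, weights: dict) -> float:
    """Weighted linear combination of sub-scores (each assumed in [0, 1])."""
    assert scores.keys() == weights.keys()
    return sum(weights[k] * scores[k] for k in scores)

scores = {
    "application_instability": 0.30,  # impacts / changes executed
    "change_proneness": 0.50,         # component changes / application changes
    "changeability": 0.20,            # successful changes / total changes
    "impact": 0.40,                   # deviation from expected transaction
    "likelihood": 0.25,               # empirical probability from history
}
weights = {k: 0.2 for k in scores}    # assumed equal weightages

risk = change_risk_score(scores, weights)
```

A higher combined score would flag the change request for closer review; the rule-based gap analysis runs alongside this score.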
[007]
In another aspect, there is provided a system for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a continuous feed of one or more change requests from one or more sources, wherein the one or more change requests are associated with an organization; obtain a plurality of historical parameters from one or more applications, wherein the plurality of historical parameters comprises historical data specific to (i) one or more changes pertaining to one or more components of the one or more applications and (ii) one or more impacts pertaining to the one or more components of the one or more applications; extract a plurality of parameters from the received one or more change requests using one or more natural language processing (NLP) techniques; compare the extracted plurality of parameters and the plurality of historical parameters to obtain at least one of a change risk score and a change record gap analysis by (i) calculating the change risk score of the one or more applications using a linear regression model by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests, wherein (a) the application instability score is calculated by dividing the total number of the one or more impacts, obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications; (b) the change proneness score is calculated for the one or more components pertaining to the one or more applications as a ratio of the one or more changes corresponding to the one or more components and the total number of the one or more changes pertaining to the one or more applications; (c) the changeability score is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications; (d) the impact score is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction, wherein the expected transaction is measured from historical transaction data derived from application performance data using a time series model; and (e) the likelihood score is calculated from the historical data specific to the one or more changes pertaining to the one or more components of the one or more applications as an empirical probability; and (ii) performing the change record gap analysis on the one or more change requests by evaluating a set of rules against the extracted plurality of parameters, wherein the set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters; review each of the one or more change requests automatically based on the calculated change risk score and the change record gap analysis; determine an overall review outcome for each of the one or more change requests based on the automatic review; generate a feedback and one or more action items based on the overall review outcome; and refine the set of rules continuously based on the feedback.
[008]
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause receiving a continuous feed of one or more change requests from one or more sources, wherein the one or more change requests are associated with an organization; obtaining a plurality of historical parameters from one or more applications, wherein the plurality of historical parameters comprises historical data specific to (i) one or more changes pertaining to one or more components of the one or more applications and (ii) one or more impacts pertaining to the one or more components of the one or more applications; extracting a plurality of parameters from the received one or more change requests using one or more natural language processing (NLP) techniques; comparing the extracted plurality of parameters and the plurality of historical parameters to obtain at least one of a change risk score and a change record gap analysis by (i) calculating the change risk score of the one or more applications using a linear regression model by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests, wherein (a) the application instability score is calculated by dividing the total number of the one or more impacts, obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications; (b) the change proneness score is calculated for the one or more components pertaining to the one or more applications as a ratio of the one or more changes corresponding to the one or more components and the total number of the one or more changes pertaining to the one or more applications; (c) the changeability score is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications; (d) the impact score is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction, wherein the expected transaction is measured from historical transaction data derived from application performance data using a time series model; and (e) the likelihood score is calculated from the historical data specific to the one or more changes pertaining to the one or more components of the one or more applications as an empirical probability; and (ii) performing the change record gap analysis on the one or more change requests by evaluating a set of rules against the extracted plurality of parameters, wherein the set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters; reviewing each of the one or more change requests automatically based on the calculated change risk score and the change record gap analysis; determining an overall review outcome for each of the one or more change requests based on the automatic review; generating a feedback and one or more action items based on the overall review outcome; and refining the set of rules continuously based on the feedback.
[009]
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[010]
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[011]
FIG. 1 illustrates an exemplary system for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review, according to some embodiments of the present disclosure.
[012]
FIG. 2 is a functional block diagram of the system for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review, according to some embodiments of the present disclosure.
[013]
FIGS. 3A and 3B are flow diagrams illustrating the steps involved in the method for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[014]
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[015]
Change Requests (CRs) related to any IT (Information Technology) solution are raised in an ITSM (IT Service Management) tool, in one example embodiment, by a requestor team to deploy the latest functionalities. The custodian of the production environment (Production Management Team) needs to validate the change request from multiple perspectives to ensure the changes do not cause outages in the critical live environment, which would impact essential services for end customers. The production management team extracts the list of Change Requests (CRs) scheduled for a specific time frame from the ITSM tool. The production management team tries to manually understand what change is planned, categorizes it based on domain and priority, manually tries to foresee the impact if the change is deployed, and verifies the change record details as per a predefined checklist manually. They follow up with the requestor or other concerned teams/groups/people if more details are required and wait for their feedback. The manual decision-making process before CR approval/rejection takes place without any systematic impact/risk analysis. All the CR (Change Request) approval statuses are consolidated into a spreadsheet and sent through email to the implementation team. Manual approval is done in the ITSM tool. Critical decision-making points in the change life cycle are completely manual and highly dependent on human intervention, which is prone to errors. Lack of systematic intelligent change validation results in major production outages. There is a need for a tool which provides a systematic change record review based on multiple decisioning factors and historical data to avoid impact in the production environment.
[016]
To overcome the challenges of the conventional approaches, embodiments herein provide a method and system for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review. The present disclosure provides a method which extracts a plurality of parameters from one or more change requests received from one or more sources using one or more natural language processing (NLP) techniques. Further, a change risk score and a change record gap analysis are obtained by comparing the extracted plurality of parameters with the plurality of historical parameters. The change risk score of the one or more applications is calculated using a linear regression model by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score. The one or more change requests are reviewed automatically based on the calculated change risk score and the change record gap analysis. Further, an overall review outcome for each of the one or more change requests is determined based on the automatic review, and a feedback and one or more action items are generated.
[017]
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[018]
FIG. 1 illustrates an exemplary system for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review, according to some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with hardware processors 102, at least one memory such as a memory 104, and an I/O interface 112. The hardware processors 102, memory 104, and the Input/Output (I/O) interface 112 may be coupled by a system bus such as a system bus 108 or a similar mechanism. In an embodiment, the hardware processors 102 can be one or more hardware processors.
[019]
The I/O interface 112 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, as well as interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a printer, and the like. Further, the I/O interface 112 may enable the system 100 to communicate with other devices, such as web servers, and external databases.
[020]
The I/O interface 112 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface 112 may include one or more ports for connecting several computing systems with one another or to another server computer. The I/O interface 112 may include one or more ports for connecting several devices to one another or to another server.
[021]
The one or more hardware processors 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, node machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 102 are configured to fetch and execute computer-readable instructions stored in the memory 104.
[022]
The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 104 includes a plurality of modules 106. The memory 104 also includes a data repository (or repository) 110 for storing data processed, received, and generated by the plurality of modules 106.
[023]
The plurality of modules 106 include programs or coded instructions that supplement applications or functions performed by the system 100 for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review. The plurality of modules 106, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 106 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 106 can be implemented in hardware, by computer-readable instructions executed by the one or more hardware processors 102, or by a combination thereof. The plurality of modules 106 can include various sub-modules (not shown). The plurality of modules 106 may include computer-readable instructions that supplement applications or functions performed by the system 100 for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review. In an embodiment, the modules 106 include a change requests module 202, a historical data module 204, a parameters extraction module 206, a change risk score calculation module 208, a change record gap analysis module 220, an automatic review module 222, a feedback module 224, and an action items module 226. The change risk score calculation module 208 further comprises an application instability score module 210, a change proneness score module 212, a changeability score module, an impact score module, and a likelihood score module 218. These modules are depicted in FIG. 2.
[024]
The data repository (or repository) 110 may include a plurality of abstracted pieces of code for refinement and data that is processed, received, or generated as a result of the execution of the module(s) 106.
[025]
Although the data repository 110 is shown internal to the system 100, it will be noted that, in alternate embodiments, the data repository 110 can also be implemented external to the system 100, where the data repository 110 may be stored within a database (repository 110) communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS).
[026]
FIGS. 3A and 3B are flow diagrams illustrating a method for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review using the system 100 of FIGS. 1-2, according to some embodiments of the present disclosure. Steps of the method of FIG. 3 shall be described in conjunction with the components of FIG. 2. At step 302 of the method 300, the change requests module 202 executed via the one or more hardware processors 102 receives a continuous feed of one or more change requests from one or more sources. The one or more change requests are associated with an organization. The one or more sources can include an ITSM (IT Service Management) tool, in one example embodiment. The organization can be any financial organization, retail organization, manufacturing organization, healthcare organization, and the like. The one or more change requests include changes related to one or more IT applications of the organization or infra (networks, hardware) changes for which a solution needs to be provided.
[027]
At step 304 of the method 300, the historical data module 204 executed via the one or more hardware processors 102 obtains a plurality of historical parameters from one or more applications. The plurality of historical parameters comprises historical data specific to (i) one or more changes pertaining to one or more components of the one or more applications and (ii) one or more impacts pertaining to the one or more components of the one or more applications. The one or more components of the one or more applications comprise at least one of one or more web services, one or more middleware components, an application component, a database component, a mainframe component, a Tandem component, and a Unisys component. The one or more impacts pertaining to the one or more components of the one or more applications comprise at least one of an impact time, an impacted application, an impacted component, an impact duration, an impact severity, an impact RCA (Root Cause Analysis), an impact root cause category, and impact review details. It is to be understood by a person having ordinary skill in the art that such examples of the above-mentioned components and impacts shall not be construed as limiting the scope of the present disclosure.
[028]
At step 306 of the method 300, the parameters extraction module 206 executed via the one or more hardware processors 102 extracts a plurality of parameters from the received one or more change requests using one or more natural language processing (NLP) techniques (e.g., NLP techniques known in the art). The plurality of parameters extracted from the received one or more change requests comprises at least one of an application identifier (ID), an application criticality, a number of major incidents, a change component, a change type, a change task group, a change duration multi days, a change time and duration, a change description, a change backout plan, a change sanity plan, and one or more change tasks.
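A minimal sketch of such parameter extraction is given below, using simple regular expressions as a stand-in for the NLP techniques; the field names, patterns, and sample change-request text are assumptions for illustration only, not the disclosed schema.

```python
import re

# Illustrative sketch only: regular expressions stand in for the NLP
# techniques of the disclosure. Field names and patterns are assumed.

def extract_parameters(cr_text: str) -> dict:
    params = {}
    m = re.search(r"Application[- ]?ID:\s*([A-Za-z0-9_-]+)", cr_text, re.I)
    if m:
        params["application_id"] = m.group(1)
    # Incident/Problem ticket references mentioned in the description
    tickets = re.findall(r"\b(?:INC|PRB)\d{6,}\b", cr_text)
    if tickets:
        params["ticket_numbers"] = tickets
    # crude check for a backout plan being described
    if re.search(r"\bbackout\b|\brollback\b", cr_text, re.I):
        params["has_backout_plan"] = True
    return params

cr_text = ("Application-ID: APP123. Fix for INC1234567; "
           "rollback plan: redeploy previous build.")
extracted = extract_parameters(cr_text)
```

In practice the extracted dictionary would carry the full parameter list enumerated above (change type, duration, sanity plan, and so on).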
[029]
At step 308 of the method 300, the change risk score calculation module 208 executed via the one or more hardware processors 102 compares the extracted plurality of parameters and the plurality of historical parameters to obtain at least one of a change risk score and a change record gap analysis. The change risk score of the one or more applications is calculated using a linear regression model by dynamically calculating one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests. The change record gap analysis is performed on the one or more change requests by evaluating a set of rules against the extracted plurality of parameters. The set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters. The one or more criteria comprise at least one of a change description, a change impact, a change backout plan, a change timeline, and a change sanity plan.
The set of rules can include, for example:
Is the change related to some application fix?
    Yes: Does the change description contain any Incident or Problem ticket number?
        Yes: Is the Incident/Problem ticket number still in a non-closed state?
        No: Mark it as Failed, as the change description does not mention the right ticket.
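The example rule above can be sketched as a small decision function; the change-record fields (`is_application_fix`, `ticket_numbers`) and the open-ticket lookup are hypothetical names introduced for illustration, not the disclosed schema.

```python
# Sketch of the example rule above. The change-record fields and the
# open-ticket lookup are hypothetical names, not the disclosed schema.

def review_application_fix(cr: dict, open_tickets: set) -> str:
    """Apply the application-fix rule; return 'Passed' or 'Failed'."""
    if not cr.get("is_application_fix"):
        return "Passed"  # rule does not apply to this change
    tickets = cr.get("ticket_numbers", [])
    if not tickets:
        # description mentions no Incident/Problem ticket number
        return "Failed"
    # the referenced ticket should still be in a non-closed state
    return "Passed" if any(t in open_tickets for t in tickets) else "Failed"

cr = {"is_application_fix": True, "ticket_numbers": ["INC1234567"]}
outcome = review_application_fix(cr, open_tickets={"INC1234567"})
```

A rule engine built this way can be refined continuously as the feedback step updates the rule set.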
[030]
In an embodiment of the present disclosure, the application instability score (represented by the application instability score module 210) is calculated by dividing the total number of the one or more impacts, obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications. For example, consider an application ‘X’ which is dependent on an application ‘Y’, where the application ‘X’ captures the time efforts of an associate in the organization, and the application ‘Y’ is the application for applying leaves. So, the application ‘X’ is dependent on the application ‘Y’. Consider another application ‘Z’ which is independent of the application ‘X’, wherein the application ‘Z’ is related to performance management specific to the associate of the organization, and wherein the application ‘X’ and the application ‘Z’ are deployed on the same application servers. As the application ‘X’ and the application ‘Z’ are both deployed on the same application servers, an associate (employee of the organization) might restart the application ‘X’ instead of the application ‘Z’ as a human error, whereby the application ‘X’ is impacted or restarted along with the application ‘Z’. So, the system 100 reviews both the applications ‘X’ and ‘Z’, as there is a human error probability of impacting the application ‘X’ though it is independent. The application instability score is the probability of a software artifact being impacted due to changes in other artifacts of the system (dependent applications and infra associated with that application). Integrated Change Review Management (ICRM) captures an application-to-application dependency matrix and an infra-to-application dependency matrix.
For an application Ap(k), k ∈ {1..n}, the dependency vector with respect to Ap(l), l ∈ {1..n}, is ApD(k,l) (application dependency of application k on application l), where n is the total number of applications in scope:

ApD(k,l) = 1 if Ap(k) is dependent on Ap(l), and 0 if there is no dependency.

Similarly, for infrastructure InF(p,q), p ∈ {1..m}, where m is the total number of infrastructure types {‘App server’, ‘Web server’, ‘DB server’, ‘Network’, ‘Certificate’, ‘Proxy’, ‘Hardware’, ‘SSO’, ..., etc.} and q ∈ {infra set of type p}:

ApInFD(k,p,q) = 1 if Ap(k) is dependent on InF(p,q), and 0 if there is no dependency.

There exists another factor, human error, which happens when a change executor impacts an application or infrastructure in error which was not supposed to happen. From historical outage data, human error is calculated as:

HEApDInF(k,l,p,q) (human error possibility for application k due to a change in application l or a p-type infrastructure change in infrastructure q) = 1 if an impact happened for Ap(l) or InF(p,q) in the last t time (t = 1 year), and 0 if there was no impact.

From historical change-caused outage data, the following probabilities are calculated:

Pr(Ap(k) | Ap(l)) (probability of application k being impacted by Ap(l)) = (number of outages in application k in time t caused by changes in application l; t = 1 year) / (total number of application changes in application l in time t), for all applications where ApD(k,l) = 1 or HEApDInF(k,l,p,q) = 1.

Pr(Ap(k) | InF(p,q)) (probability of application k being impacted by a p-type infrastructure change of infrastructure q) = (number of outages in application k in time t caused by changes in infrastructure q of type p; t = 1 year) / (total number of infrastructure changes in infrastructure q in time t), for all infrastructure where ApInFD(k,p,q) = 1 or HEApDInF(k,l,p,q) = 1.
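The probabilities above reduce to empirical ratios filtered by the dependency and human-error indicators; a sketch in Python, where the per-source counts and indicator dictionaries are illustrative stand-ins for the ICRM matrices:

```python
def instability_score(outages: int, total_changes: int) -> float:
    """Pr(Ap(k) | source): outages in application k caused by changes in a
    source (application or infra), divided by total changes in that source."""
    return outages / total_changes if total_changes else 0.0

def summed_instability(scores: dict, dependency: dict, human_error: dict) -> float:
    """Sum the scores of sources where ApD = 1 or HEApDInF = 1,
    per the definitions above (a sketch)."""
    return sum(
        score for source, score in scores.items()
        if dependency.get(source, 0) == 1 or human_error.get(source, 0) == 1
    )
```

This mirrors the later use-case example, where per-source instability scores are summed over directly dependent and human-error-prone sources.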
[031]
In an embodiment of the present disclosure, the change proneness score (represented by the change proneness score module 212) is calculated for the one or more components pertaining to the one or more applications, as a ratio of the one or more changes corresponding to the one or more components and the total number of the one or more changes pertaining to the one or more applications. The change proneness score is defined as the probability of a software artifact undergoing a change.
For application Ap(l) with components Cm(i), i = 1..v, where v is the total number of components of application l, the change proneness score is:

Pr(Cm(i) | Ap(l)) = (total number of changes in component Cm(i) of application Ap(l) in time t; t = 1 year) / (total number of changes in application Ap(l) in time t).

Similarly, for infrastructure:

Pr(InF(q) | InF(p)) = (total number of changes in infrastructure q of type p) / (total number of changes in infrastructure of type p).
[032]
In an embodiment of the present disclosure, the changeability score (represented by the changeability score module 214) is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications. Changeability is a measure of the ease of performing changes, and is defined as the probability of success of a change to a particular software artifact.
Pr(Cm(i) | Ap(l)) = (total number of successful changes for component Cm(i) of application Ap(l)) / (total number of changes of component Cm(i) of application Ap(l) in a given time t; t = 1 year).
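The two ratios above (change proneness and changeability) can be sketched directly as empirical fractions; the counts used below are illustrative:

```python
def change_proneness(component_changes: int, application_changes: int) -> float:
    """Pr(Cm(i) | Ap(l)): changes in a component over total changes in its
    application, within a window t (e.g., one year)."""
    return component_changes / application_changes if application_changes else 0.0

def changeability(successful_changes: int, total_changes: int) -> float:
    """Probability that a change to the component succeeds."""
    return successful_changes / total_changes if total_changes else 0.0
```

These match the later worked examples (e.g., Tibco: 12/100 = 0.12 proneness, 10/12 ≈ 0.83 changeability).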
[033]
In an embodiment of the present disclosure, the impact score (represented by the impact score module 216) is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction. The one or more functions comprise at least one of a number of customer calls in a call center function, a number of financial transactions in a payment function, and a number of credit card applications received in a card function. An expected transaction is measured from historical transaction data derived from application performance data using a time series model. The potential impact of a change-caused outage includes customer dissatisfaction, financial loss, and reputation loss. The calculation of the measure of impact varies by application function. Based on application performance monitoring data, impact is measured by:
1. Deviation from the expected transaction volume during the impact time. Transaction deviation percentage, TD (Transaction Deviation) = (Σ over t = 1 to n of (Y(t, estimated) − Y(t))) / (Σ over t = 1 to n of Y(t, estimated)) ∗ 100
2. Deviation from the average number of logged-in users during the impact time. User deviation percentage, UD (User Deviation) = (Σ over t = 1 to n of (U(t, estimated) − U(t))) / (Σ over t = 1 to n of U(t, estimated)) ∗ 100
The estimated transactions and users (of the applications) are calculated using a SARIMA (Seasonal Autoregressive Integrated Moving Average) model for the time of failure, with time considered in 1-hour intervals. The average number of logged-in customers is extracted from application logs.
The impact score is calculated as the average (TD + UD)/2, which provides a score out of 100. A weekly batch calculates the impact score value. In an embodiment of the present disclosure, a 1-year average is considered as the impact score for each application.
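A sketch of the TD/UD computation; in the described system the estimated series would come from the SARIMA forecast hour by hour, whereas plain lists of illustrative values are used here:

```python
def deviation_pct(estimated, actual) -> float:
    """Percentage shortfall of actual vs. estimated values summed over the
    impact window (the TD and UD formulas above)."""
    total_estimated = sum(estimated)
    if total_estimated == 0:
        return 0.0
    return (total_estimated - sum(actual)) / total_estimated * 100

def impact_score(est_txn, act_txn, est_users, act_users) -> float:
    """Average of transaction deviation (TD) and user deviation (UD),
    giving a score out of 100."""
    return (deviation_pct(est_txn, act_txn) + deviation_pct(est_users, act_users)) / 2
```

With 10,000 estimated and 1,000 elapsed transactions, `deviation_pct` reproduces the 90% figure in the later use-case example.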
[034]
In an embodiment of the present disclosure, the likelihood score (represented by the likelihood score module 218) is calculated from the historical data specific to one or more changes pertaining to one or more components of the one or more applications as an empirical probability. Likelihood is the probability of a change failing. The empirical probability of change failure is calculated based on attributes/features which include at least one of an application identifier (ID), an application criticality, a number of major incidents, a number of change-related major incidents, a change result, a change component, a change type (the change type can include normal or expedite), a change task group, isMultidayChange (change duration spanning multiple days), an impacted component number, an impacted application number, causedMIM, a change day of the week, and one or more patching schedules on the same application. MIM in causedMIM refers to a major incident: if a change caused a major incident, it is marked as causedMIM = true. The causedMIM is a historical value which signifies how many changes (and what type of changes) of the one or more applications caused major incidents.
[035]
In an embodiment of the present disclosure, based on the application instability score, the change proneness score, the changeability score, the impact score, and the likelihood score, and considering four values of change risk observed after the change as the actual outcome, the actual change risk score R is:
0.0 if the change executed successfully without any issue,
0.5 if the change was assisted during execution,
0.7 if the change was reverted, and
1.0 if the change caused impact.
Using a linear regression model, the change risk score R is calculated as:
R = α + β1 ∗ (application instability score (X1)) + β2 ∗ (change proneness score (X2)) + β3 ∗ (changeability score (X3)) + β4 ∗ (impact score (X4)) + β5 ∗ (likelihood score (X5)) + ε
Here, R refers to the dependent variable and (X1, X2, ..., X5) refer to the independent variables which are assumed to influence the dependent variable. X1 refers to the application instability score, X2 refers to the change proneness score, X3 refers to the changeability score, X4 refers to the impact score, and X5 refers to the likelihood score.
To find a linear equation that best fits the relationship between R and the X's, the errors between predicted and actual R values are minimized. Further, α refers to an intercept (the R value when all X's are 0), β1, β2, ..., β5 refer to regression coefficients (slopes indicating the effect of each X on R), and ε refers to an error term.
The linear regression model dynamically calculates one or more weightages of at least one of the application instability score, the change proneness score, the changeability score, the impact score, and the likelihood score based on a change result of the one or more change requests, and generates the change risk score.
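A self-contained sketch of fitting the regression above by ordinary least squares (normal equations solved by Gaussian elimination); in practice any standard regression library would serve, and the training rows here are illustrative, pairing score vectors with the actual change risk R values (0.0/0.5/0.7/1.0):

```python
def fit_change_risk_model(X, y):
    """Ordinary least squares for R = alpha + sum(beta_i * X_i) + error.

    X: list of feature tuples (e.g., X1..X5), y: list of actual R values.
    Returns [alpha, beta1, ..., betan].
    """
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    n = len(rows[0])
    # Normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * yv for r, yv in zip(rows, y)) for i in range(n)]
    aug = [row[:] + [b] for row, b in zip(xtx, xty)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    # Back substitution
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (aug[r][n] - sum(aug[r][c] * beta[c] for c in range(r + 1, n))) / aug[r][r]
    return beta

def predict_risk(beta, x):
    """Change risk score R for a new change's score vector x."""
    return beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
```

Refitting on fresh change results is what lets the weightages adapt dynamically as described above.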
[036]
At step 310 of the method 300, the automatic review module 222 executed via the one or more hardware processors 102 reviews each of the one or more change requests automatically based on the calculated change risk score and the change record gap analysis. In an embodiment of the present disclosure, a rule-based decision tree (known in the art) is used to check whether the change records comprised in the one or more change requests are properly documented. The rule-based decision tree refers to an approach for building a decision tree using techniques such as a data-driven approach, wherein knowledge is stored in the form of a set of rules, which are converted into a decision tree when a decision-making process is required. From contextual knowledge, a review rules repository has been built for different kinds of changes. Application-specific rules have been extended from the common rules by SMEs (Subject Matter Experts). A Python-based program gets the change record attributes and executes the rule-based decision tree recommendation for the change. The system 100 validates all the rules and sends the response as “FAIL” with the set of failed rules, or “PASS”. PASS decisions are reviewed using a stratified sampling methodology, where each stratum is a different category or subgroup of changes (application change, database (DB) change, certificate change, and the like); stratified sampling ensures that every subgroup is represented. “FAIL” changes are reviewed by stakeholders. The different subgroups of changes are verified by a subject matter expert (SME) to confirm whether the decision made by the system 100 is correct or some improvement is required. This helps the system 100 to continuously improve the proposed model.
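A sketch of the stratified sampling step applied to PASS decisions; the subgroup field name and the sampling fraction are assumptions for illustration:

```python
import random

def stratified_sample(pass_records, subgroup_key, fraction, seed=42):
    """Draw a fixed fraction from every change subgroup (application change,
    DB change, certificate change, ...) so each stratum is represented."""
    rng = random.Random(seed)
    strata = {}
    for record in pass_records:
        strata.setdefault(record[subgroup_key], []).append(record)
    sample = []
    for group in strata.values():
        size = max(1, round(len(group) * fraction))  # at least one per stratum
        sample.extend(rng.sample(group, size))
    return sample
```

Sampling per stratum, rather than over the whole PASS pool, is what guarantees that rarer subgroups such as certificate changes still reach SME verification.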
[037]
At step 312 of the method 300, the feedback module 224 executed via the one or more hardware processors 102 determines an overall review outcome for each of the one or more change requests based on the automatic review.
[038]
At step 314 of the method 300, the feedback module 224 and the action items module 226 executed via the one or more hardware processors 102 generate a feedback and one or more action items based on the overall review outcome.
[039]
At step 316 of the method 300 executed via the one or more hardware processors 102, the system 100 refines the set of rules continuously based on the feedback.
[040]
In an embodiment of the present disclosure, the system 100 decides whether the one or more change requests will be automatically approved or require human intervention (non-automatic). A classification model has been built based on a plurality of attributes using a voting classifier, which enables the system 100 to decide whether the one or more change requests will be automatically approved or require human intervention (non-automatic). The plurality of attributes includes, but is not limited to: a) application criticality, b) number of major incidents, c) ChangeResult, d) ChangeComponent, e) ChangeType (Normal/Expedite), f) ChangeTaskGroup, g) isMultidayChange, h) ImpactedComponentNumber, i) ImpactedApplicationNumber, j) CausedMIM, k) MIMSeverity, l) ChangeRevert, m) changeRiskScore, and n) patching scheduled. The idea behind the voting classifier is to combine conceptually different machine learning classifiers and use a majority vote or the average of predicted probabilities (soft vote) to predict the class labels, eliminating the weaknesses of individual models.
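A minimal hard-voting sketch of the idea; a real deployment would use a library ensemble (for example, scikit-learn's VotingClassifier) over trained models, whereas the "models" here are stand-in callables over a hypothetical feature dictionary:

```python
class HardVotingClassifier:
    """Combine conceptually different classifiers by majority vote."""

    def __init__(self, models):
        self.models = models  # each model: feature dict -> class label

    def predict(self, features):
        votes = [model(features) for model in self.models]
        # Majority vote; ties break toward the earliest-cast vote.
        return max(votes, key=votes.count)
```

Soft voting would instead average each model's predicted class probabilities before taking the argmax.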
[041]
In an embodiment of the present disclosure, change review reliability is the probability that a software change review identifies all the potential defects in the one or more change requests. The change review reliability is measured by the number of false negatives in the change review results. The change review reliability indicates a) how efficiently the system is reviewing the change records, b) how efficiently humans are doing the review, and c) how efficiently tools and techniques are performing. Based on this reliability score, SMEs (Subject Matter Experts) work to improve the ML (Machine Learning) models and process. Change review reliability targets reducing Type 2 error, wherein a Type 2 error is the mistake of concluding that there is no significant effect or relationship (concluding there is no issue in the change record) when in reality there is a significant effect or relationship (the change causes impact).
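The metric described above can be sketched as one minus the Type 2 error rate over reviewed changes; the pairing of review verdict and ground truth used here is an illustrative representation:

```python
def change_review_reliability(review_results):
    """Share of truly defective change records the review actually caught.

    `review_results` is an iterable of (flagged_by_review, truly_defective)
    booleans; a false negative is a defective change the review passed.
    """
    defective = [flagged for flagged, truly in review_results if truly]
    if not defective:
        return 1.0
    false_negatives = sum(1 for flagged in defective if not flagged)
    return 1.0 - false_negatives / len(defective)
```

For example, if the review catches 8 of 10 truly defective records, the reliability is 0.8 regardless of how many clean records were processed.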
[042]
Use Case example:
Let’s consider a digital application called “ABC”. Architecturally, the digital application “ABC” is composed of a web server, an application server, Tibco as middleware, one or more microservices on different cloud servers, a mainframe component, a Unisys component, and an Oracle® database. Let’s consider another application “LMN”, which is basically a Unisys application, and a card system which is a combination of a web-based application, a Tandem system, and microservices. The digital application “ABC” is dependent on the application “LMN”.
Components and Dependencies:
• The digital application “ABC”: Web server, application server, Tibco, microservices, mainframe, Unisys, Oracle database
• Dependencies:
o Applications: LMN (Unisys), Card System (web & Tandem)
o Infrastructure: Web server, Tibco certificates, database server
1. Instability Score
An application instability score is a metric used to quantify the likelihood of an application experiencing an outage or malfunction due to a change. The application instability score is calculated based on the following key factors:
• Historical change impact: This refers to past instances where a change in the application or its dependencies caused an outage or issue affecting the digital application “ABC”. This data is used to calculate the instability score for individual applications and infrastructure components.
• Dependency vector: This identifies which applications and infrastructure elements the digital application “ABC” relies on. A value of 1 in the vector indicates a direct dependency, while 0 indicates no direct dependency.
• Human error: It has been observed historically that the web server is shared by two more applications along with the digital application “ABC”, which are not dependent on the digital application “ABC”. Due to a change for those applications, the web server of the digital application “ABC” was mistakenly stopped, which caused an outage. So, dependency due to human error possibility is considered for that web server infrastructure change.
Example Calculation:
• The application “LMN”: Out of 100 changes, 7 impacted the digital application “ABC” (instability score = 7/100 = 0.07).
• Card System: Out of 100 changes, 5 impacted the digital application “ABC” (instability score = 5/100 = 0.05).
• Web Server: Out of 100 changes, 1 impacted the digital application “ABC” (instability score = 1/100 = 0.01).
• The digital application “ABC” itself: Out of 100 changes, 4 impacted the digital application “ABC” (instability score = 4/100 = 0.04).
Application during Change Record Review:
• While reviewing change records, the system considers the instability score for the digital application “ABC” itself (0.04).
• If other changes involve directly dependent applications like the application “LMN” or the Card System (where the dependency vector = 1), the system adds their respective instability scores (e.g., 0.07 for the application “LMN”) to provide a summed instability score for that week.
• This summed score indicates the cumulative risk of the digital application “ABC” facing an outage due to the planned changes in that week.
2. Change Proneness Score:
• Tibco:
Number of changes: 12
Total number of changes: 100
Change proneness score: 12/100 = 0.12
• Mainframe:
Number of changes: 20
Total number of changes: 100
Change proneness score: 20/100 = 0.2
3. Changeability Score:
• Tibco:
Number of successful changes: 10
Total number of changes: 12
Changeability score: 10/12 = 0.83
• Mainframe:
Number of successful changes: 13
Total number of changes: 20
Changeability score: 13/20 = 0.65
4. Impact Score Calculation:
• Outage 1:
Estimated functional transactions: 10,000 (using SARIMA)
Elapsed functional transactions: 1,000
Impact: (10,000 − 1,000)/10,000 = 0.9 (90% impact)
• Calculate impact similarly for the 4 stages based on elapsed and estimated transactions.
5. Likelihood Score Calculation:
• Middleware multiday change:
Number of multiday changes causing failures: 16
Total number of multiday changes: 20
Likelihood: 16/20 = 0.8 (80%)
• Middleware non-multiday change:
Number of non-multiday changes causing failures: 4
Total number of non-multiday changes: 30
Likelihood: 4/30 = 0.13 (13%)
• Calculate likelihood for other types of changes based on historical failure data.
[043]
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[044]
Systematic intelligent change validation in existing works is completely manual and highly dependent on human intervention, which is prone to errors. The embodiments thus provide a system and method for enhancing information technology (IT) change process efficiency with machine learning powered automated change record review. The present disclosure enables automatic review of the one or more change requests based on the change risk score and the change record gap analysis. Further, an overall review outcome for each of the one or more change requests is determined based on the automatic review, and a feedback and one or more action items are generated.
[045]
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[046]
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[047]
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[048]
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[049]
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method (300), comprising:
receiving (302), via one or more hardware processors, a continuous feed of
one or more change requests from one or more sources, wherein the one or more
change requests are associated with an organization;
obtaining (304), via the one or more hardware processors, a plurality of
historical parameters from one or more applications, wherein the plurality of
historical parameters comprises of a historical data specific to (i) one or more
changes pertaining to one or more components of the one or more applications and
(ii) one or more impacts pertaining to the one or more components of the one or
more applications;
extracting (306), via the one or more hardware processors, a plurality of
parameters from the received one or more change requests using one or more
natural language processing (NLP) techniques;
comparing (308), via the one or more hardware processors, the extracted
plurality of parameters and the plurality of historical parameters to obtain at least
one of a change risk score and a change record gap analysis by –
(i) calculating the change risk score of the one or more
applications using a linear regression model by calculating dynamically one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests, and wherein -
(a) the application instability score is calculated by dividing the total number of the one or more impacts obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications;
(b) the change proneness score is calculated for the one or more components pertaining to the one or more applications, as a ratio of the one or more changes corresponding to the one or more
components and the total number of the one or more changes pertaining to the one or more applications;
(c) the changeability score is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications;
(d) the impact score is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction, wherein an expected transaction is measured from a historical transaction data derived from an application performance data using a time series model; and
(e) the likelihood score is calculated from the historical data specific to the one or more changes pertaining to the one or more components of the one or more applications as an empirical probability;
(ii) performing the change record gap analysis on the one or
more change requests by evaluating a set of rules against the extracted plurality of parameters, wherein the set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters; reviewing (310), via the one or more hardware processors, each of the one
or more change requests automatically based on the calculated change risk score
and the change record gap analysis;
determining (312), via the one or more hardware processors, an overall
review outcome for each of the one or more change requests based on the automatic
review;
generating (314), via the one or more hardware processors, a feedback and
one or more action items based on the overall review outcome; and
refining (316), via the one or more hardware processors, the set of rules
continuously based on the feedback.
2. The processor implemented method as claimed in claim 1, wherein the plurality of parameters from the received one or more change requests comprises an application identifier (ID), an application criticality, a number of major incidents, a change component, a change type, a change task group, a change duration multi days, a change time and duration, a change description, a change backout plan, a change sanity plan and one or more change tasks.
3. The processor implemented method as claimed in claim 1, wherein the one or more components of the one or more applications comprises at least one of one or more web services, one or more middleware components, an application component, a database component, a mainframe component, a Tandem component, and a Unisys component.
4. The processor implemented method as claimed in claim 1, wherein the one or more impacts pertaining to the one or more components of the one or more applications comprises at least one of an impact time, an impacted application, an impacted component, an impact duration, an impact severity, an impact RCA (Root Cause Analysis), an impact root cause category, and an impact review details.
5. The processor implemented method as claimed in claim 1, wherein the one or more criteria comprises at least one of a change description, a change impact, a change backout plan, a change timeline, and a change sanity plan.
6. A system (100), comprising:
a memory (104) storing instructions;
one or more communication interfaces (112); and
one or more hardware processors (102) coupled to the memory (104) via the one or more communication interfaces (112), wherein the one or more hardware processors (102) are configured by the instructions to:
receive a continuous feed of one or more change requests from one or more sources, wherein the one or more change requests are associated with an organization;
obtain a plurality of historical parameters from one or more applications, wherein the plurality of historical parameters comprises of a historical data specific to (i) one or more changes pertaining to one or more components of the one or more applications and (ii) one or more impacts pertaining to the one or more components of the one or more applications;
extract a plurality of parameters from the received one or more change requests using one or more natural language processing (NLP) techniques;
compare the extracted plurality of parameters and the plurality of historical parameters to obtain at least one of a change risk score and a change record gap analysis by –
(i) calculating the change risk score of the one or more applications
using a linear regression model by calculating dynamically one or more weightages of at least one of an application instability score, a change proneness score, a changeability score, an impact score, and a likelihood score based on a change result of the one or more change requests, and wherein -
(a) the application instability score is calculated by dividing the total number of the one or more impacts obtained from at least one of one or more dependent applications and one or more non-dependent applications, by the total number of the one or more changes executed on the one or more applications;
(b) the change proneness score is calculated for the one or more components pertaining to the one or more applications, as a ratio of the one or more changes corresponding to the one or more
components and the total number of the one or more changes pertaining to the one or more applications;
(c) the changeability score is calculated as a ratio of the total number of successful changes for the one or more components pertaining to the one or more applications and the total number of the one or more changes for the one or more components pertaining to the one or more applications;
(d) the impact score is measured as a deviation of an original transaction specific to one or more functions associated with the organization from an expected transaction, wherein an expected transaction is measured from a historical transaction data derived from an application performance data using a time series model; and
(e) the likelihood score is calculated from the historical data specific to the one or more changes pertaining to the one or more components of the one or more applications as an empirical probability;
(ii) performing the change record gap analysis on the one or more change requests by evaluating a set of rules against the extracted plurality of parameters, wherein the set of rules are defined based on one or more criteria using a rule-based decision tree technique based on the plurality of historical parameters;
review each of the one or more change requests automatically based on the calculated change risk score and the change record gap analysis;
determine an overall review outcome for each of the one or more change requests based on the automatic review;
generate a feedback and one or more action items based on the overall review outcome; and
refine the set of rules continuously based on the feedback.
7. The system as claimed in claim 6, wherein the plurality of parameters from the received one or more change requests comprises an application identifier (Id), an application criticality, a number of major incidents, a change component, a change type, a change task group, a change duration multi days, a change time and duration, a change description, a change backout plan, a change sanity plan, and one or more change tasks.
8. The system as claimed in claim 6, wherein the one or more components of the one or more applications comprise at least one of one or more web services, one or more middleware components, an application component, a database component, a mainframe component, a Tandem component, and a Unisys component.
9. The system as claimed in claim 6, wherein the one or more impacts pertaining to the one or more components of the one or more applications comprise at least one of an impact time, an impacted application, an impacted component, an impact duration, an impact severity, an impact RCA (Root Cause Analysis), an impact root cause category, and impact review details.
10. The system as claimed in claim 6, wherein the one or more criteria comprises at least one of a change description, a change impact, a change backout plan, a change timeline, and a change sanity plan.
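As an illustrative, non-limiting sketch of the computations recited in claim 6: the code below shows how the component scores (a), (b), (c), and (e), the weighted change risk score, and a simple rule-based gap analysis over the criteria of claim 10 could be realized. All function names, field names, and the fixed example weights are hypothetical assumptions; the claim itself derives the weightages dynamically via a linear regression model over past change results.

```python
# Hypothetical sketch of the claim 6 score calculations; names and
# weights are illustrative stand-ins, not the patented implementation.

def application_instability_score(num_impacts: int, num_changes: int) -> float:
    """(a) Impacts from dependent/non-dependent applications divided by
    total changes executed on the application."""
    return num_impacts / num_changes if num_changes else 0.0

def change_proneness_score(component_changes: int, app_changes: int) -> float:
    """(b) Changes on a component divided by total changes on the application."""
    return component_changes / app_changes if app_changes else 0.0

def changeability_score(successful_changes: int, total_changes: int) -> float:
    """(c) Successful component changes divided by total component changes."""
    return successful_changes / total_changes if total_changes else 0.0

def likelihood_score(occurrences: int, trials: int) -> float:
    """(e) Empirical probability from historical change data."""
    return occurrences / trials if trials else 0.0

def change_risk_score(scores: dict, weights: dict) -> float:
    """Weighted combination of the component scores. In the claim, the
    weights are learned dynamically via linear regression on change
    results; here they are fixed placeholders."""
    return sum(weights[k] * scores[k] for k in scores)

def change_record_gap_analysis(record: dict, required_fields=(
        "change_description", "change_backout_plan", "change_sanity_plan")) -> list:
    """(ii) One simple rule per criterion (claim 10): flag missing/empty fields."""
    return [f for f in required_fields if not record.get(f)]

scores = {
    "instability": application_instability_score(3, 20),   # 3 impacts / 20 changes
    "proneness": change_proneness_score(8, 40),            # 8 of 40 changes hit component
    "changeability": changeability_score(36, 40),          # 36 of 40 succeeded
    "likelihood": likelihood_score(2, 40),                 # 2 failures in 40 changes
}
weights = {"instability": 0.3, "proneness": 0.25, "changeability": 0.25, "likelihood": 0.2}
risk = change_risk_score(scores, weights)

gaps = change_record_gap_analysis({"change_description": "patch web tier",
                                   "change_backout_plan": ""})
# gaps lists the criteria the change record fails, e.g. a missing backout plan
```

In practice the regression could assign a negative weight to the changeability score, since a higher success ratio should lower risk; the fixed positive weights above are purely for illustration.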
| # | Name | Date |
|---|---|---|
| 1 | 202421008415-STATEMENT OF UNDERTAKING (FORM 3) [07-02-2024(online)].pdf | 2024-02-07 |
| 2 | 202421008415-REQUEST FOR EXAMINATION (FORM-18) [07-02-2024(online)].pdf | 2024-02-07 |
| 3 | 202421008415-FORM 18 [07-02-2024(online)].pdf | 2024-02-07 |
| 4 | 202421008415-FORM 1 [07-02-2024(online)].pdf | 2024-02-07 |
| 5 | 202421008415-FIGURE OF ABSTRACT [07-02-2024(online)].pdf | 2024-02-07 |
| 6 | 202421008415-DRAWINGS [07-02-2024(online)].pdf | 2024-02-07 |
| 7 | 202421008415-DECLARATION OF INVENTORSHIP (FORM 5) [07-02-2024(online)].pdf | 2024-02-07 |
| 8 | 202421008415-COMPLETE SPECIFICATION [07-02-2024(online)].pdf | 2024-02-07 |
| 9 | 202421008415-FORM-26 [16-03-2024(online)].pdf | 2024-03-16 |
| 10 | Abstract1.jpg | 2024-04-18 |
| 11 | 202421008415-FORM-26 [22-05-2025(online)].pdf | 2025-05-22 |