
Method and System for Performance Evaluation Using an AI Model

Abstract: Provided is a method and system for performance evaluation using an AI model, comprising automatic extraction of accurate call disposition details from interactions in any domain, so that interactions can be tagged consistently and accurately and actionable insights can be derived, by training AI models. It provides a method and system that can be used in real time for automating call disposition detailing for any conversation over a call. In an embodiment the call conversation may be a customer care call, or any other work-related call. It may allow users to extract issues, causes, and resolutions from a call transcript with high accuracy, and to train it further on client-specific data. The extracted text is then used for performance evaluation using multiple KPIs and parameters.


Patent Information

Application #
Filing Date
28 February 2022
Publication Number
35/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

INFOSYS LIMITED
44, Infosys Avenue, Electronics City, Hosur Road, Bangalore, 560100, Karnataka.

Inventors

1. AMIT KUMAR
Block 16, Flat no 703, Malaysian Township, Kukatpally Hyderabad, Telangana 500072
2. VENUGOPAL SUBARAO
Flat #404, Shriram Shankari Apartments, Ittamadu Road, Anjaneya Nagar, BSK 3rd , Bengaluru, Karnataka-560085
3. KOLICHINA RAVI KIRAN
Flat 303, Rama Residency, Ramalayam Road, Vivekananda Nagar, Kukatpally, Hyderabad, Telangana – 500072
4. VISHAL MANCHANDA
#5A, Bollywood Heights -2 , Adjoining Sec-20 PKL. Zirakpur Punjab -140603
5. ANAND SANTHANAM
28 Dowells Street, London, UK SE109FS

Specification

Claims: We Claim
1. A method for performance evaluation using a language model comprising,
training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters;
performing unsupervised training of one or more language models for a predecided domain, using the generated training data;
extracting one or more details from a transcript text of a call participant using the trained language model; and
evaluating the performance of the participant using the extracted details and the set of predetermined parameters.

2. The method as claimed in claim 1, wherein the data generation AI model is trained using sample training data generated by a subject matter expert.

3. The method as claimed in claim 1, wherein the generated training data is labelled with one or more predefined labels.

4. The method as claimed in claim 3, wherein training of the language model comprises,
parsing the language model; and
configuring the language model using the labelled training data.

5. The method as claimed in claim 4, wherein generating training data for each of one or more requirements comprises,
removing personal information and normalizing the training data;
labelling the normalized training data;
tokenizing the labelled data; and
finetuning the language model using the tokenized data.

6. A system for performance evaluation using a language model comprising,
a training engine for,
training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters;
performing unsupervised training of one or more language models for a predecided domain, using the generated training data; and

an inference engine for,
extracting one or more details from a transcript text of a call participant using the trained language model; and
evaluating the performance of the participant using the extracted details and the set of predetermined parameters.

7. The system as claimed in claim 6, wherein the data generation AI model is trained using sample training data generated by a subject matter expert.

8. The system as claimed in claim 6, wherein the generated training data is labelled with one or more predefined labels.

9. The system as claimed in claim 8, wherein training of the language model comprises,
parsing the language model; and
configuring the language model using the labelled training data.

10. The system as claimed in claim 9, wherein the training engine is configured for generating training data for each of one or more requirements and comprises,
a data collection and creation component for
removing personal information and normalizing the training data;
labelling the normalized training data;
tokenizing the labelled data; and
finetuning the language model using the tokenized data.

11. A computer program product comprising a computer-readable storage medium having computer-executable instructions stored thereupon, which, when executed by a processor, cause the processor to perform a method for performance evaluation using a language model comprising,

training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters;
performing unsupervised training of one or more language models for a predecided domain, using the generated training data;
extracting one or more details from a transcript text of a call participant using the trained language model; and
evaluating the performance of the participant using the extracted details and the set of predetermined parameters.

12. The computer program product as claimed in claim 11, wherein the data generation AI model is trained using sample training data generated by a subject matter expert.

13. The computer program product as claimed in claim 11, wherein the generated training data is labelled with one or more predefined labels.

14. The computer program product as claimed in claim 13, wherein training of the language model comprises,
parsing the language model; and
configuring the language model using the labelled training data.

15. The computer program product as claimed in claim 13, wherein generating training data for each of one or more requirements comprises,
removing personal information and normalizing the training data;
labelling the normalized training data;
tokenizing the labelled data; and
finetuning the language model using the tokenized data.
Description:
FIELD
[0001] The present disclosure relates to an AI-assisted method and system for performance evaluation. More particularly, it relates to text disposition for performance evaluation.

BACKGROUND
[0002] In most domains, the process of extracting disposition details from an interaction is either missing or manual. It is time-consuming and inaccurate due to time constraints, inadequate checks, a lack of automation, etc. There are no trained disposition models available.

SUMMARY
[0003] Provided is a method for performance evaluation using a language model, which comprises training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters. Unsupervised training of one or more language models is performed for a predecided domain, using the generated training data, followed by extracting one or more details from a transcript text of a call participant using the trained language model, and evaluating the performance of the participant using the extracted details and the set of predetermined parameters.

[0004] Provided is a system for performance evaluation using a language model, which comprises a training engine for training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters. Unsupervised training of one or more language models is performed for a predecided domain, using the generated training data, followed by extracting one or more details from a transcript text of a call participant using the trained language model, and evaluating the performance of the participant using the extracted details and the set of predetermined parameters, using an inference engine.

[0005] Provided is a computer program product for performance evaluation using a language model, which comprises training one or more data generation AI models and generating training data for each of one or more requirements, using the trained data generation AI model and a set of predetermined parameters. Unsupervised training of one or more language models is performed for a predecided domain, using the generated training data, followed by extracting one or more details from a transcript text of a call participant using the trained language model, and evaluating the performance of the participant using the extracted details and the set of predetermined parameters.

BRIEF DESCRIPTION OF DRAWINGS
[0006] Fig 1 relates to a general-purpose computing system to implement an embodiment of the process as disclosed;
[0007] Fig 2 relates to a flowchart relating to an embodiment of the process as disclosed; and
[0008] Fig 3 relates to an embodiment of the architecture overview to implement the process as disclosed.

DETAILED DESCRIPTION
[0009] An embodiment of the present disclosure provides automatic extraction of accurate call disposition details from interactions in any domain, so that interactions can be tagged consistently and accurately, and actionable insights can be derived. It provides a method and system that can be used in real time for automating call disposition detailing for any conversation over a call. In an embodiment the call conversation may be a customer care call, or any other work-related call. It may allow users to extract issues, causes, and resolutions from a call transcript with high accuracy, and to train it further on client-specific data. The extracted text is then used for performance evaluation using multiple KPIs and parameters.

[0010] An exemplary environment 10 with a knowledge processing system 12 configured to extract and process information is illustrated in FIG. 1, although this technology can be implemented on other types of devices, such as one of the web server devices 16(1)-16(n), or any other server computing apparatus configured to receive and process hypertext transfer protocol (HTTP) requests, by way of example only. The exemplary environment 10 includes the knowledge processing system 12, client devices 14(1)-14(n), the web server devices 16(1)-16(n), and communication networks 18(1)-18(2), although other numbers and types of systems, devices, and/or elements in other configurations and environments with other communication network topologies can be used. This technology provides several advantages including providing a method, computer readable medium, and apparatus that can provide a knowledge processing system.

[0011] Referring more specifically to FIG. 1, the knowledge processing system 12 may include a central processing unit (CPU) or processor 13, a memory 15, and an interface system 17 which are coupled together by a bus 19 or other link, although other numbers and types of components, parts, devices, systems, and elements in other configurations and locations can be used. The processor 13 in the knowledge processing system 12 executes a program of stored instructions for one or more aspects of the present disclosure as described and illustrated by way of the embodiments herein, although the processor could execute other numbers and types of programmed instructions.

[0012] The memory 15 in the knowledge processing system 12 stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor 13, can be used for the memory 15 in the knowledge processing system 12.

[0013] The interface system 17 in the knowledge processing system 12 is used to operatively couple and communicate between the knowledge processing system 12 and the client devices 14(1)-14(n) and the web server devices 16(1)-16(n) via the communication networks 18(1) and 18(2), although other types and numbers of communication networks with other types and numbers of connections and configurations can be used. By way of example only, the communication networks 18(1) and 18(2) can use TCP/IP over Ethernet and industry-standard protocols, including HTTP, HTTPS, WAP, and SOAP, although other types and numbers of communication networks, such as a direct connection, a local area network, a wide area network, modems and phone lines, e-mail, and wireless and hardwire communication technology, each having their own communications protocols, can be used.

[0014] Each of the client devices 14(1)-14(n) enables a user to request, receive, and interact with web pages from one or more web sites hosted by the web server devices 16(1)-16(n) through the knowledge processing system 12 via one or more communication networks 18(1). Although multiple client devices 14(1)-14(n) are shown, other numbers and types of user computing systems could be used. In one example, the client devices 14(1)-14(n) comprise smart phones, personal digital assistants, computers, or mobile devices with Internet access that permit a website form page or other retrieved web content to be displayed on the client devices 14(1)-14(n).

[0015] Each of the client devices 14(1)-14(n) in this example is a computing device that includes a central processing unit (CPU) or processor 20, a memory 22, user input device 24, a display 26, and an interface system 28, which are coupled together by a bus 30 or other link, although one or more of the client devices 14(1)-14(n) can include other numbers and types of components, parts, devices, systems, and elements in other configurations. The processor 20 in each of the client devices 14(1)-14(n) executes a program of stored instructions for one or more aspects of the present invention as described and illustrated herein, although the processor could execute other numbers and types of programmed instructions.

[0016] The memory 22 in each of the client devices 14(1)-14(n) stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to processor 20 can be used for the memory 22 in each of the client devices 14(1)-14(n).

[0017] The user input device 24 in each of the client devices 14(1)-14(n) is used to input selections, such as requests for a particular website form page or to enter data in fields of a form page, although the user input device could be used to input other types of data and interact with other elements. The user input device can include keypads, touch screens, and/or vocal input processing systems, although other types and numbers of user input devices can be used.

[0018] The display 26 in each of the client devices 14(1)-14(n) is used to show data and information to the user, such as website or application page by way of example only. The display in each of the client devices 14(1)-14(n) can be a mobile phone screen display, although other types and numbers of displays could be used depending on the particular type of client device 14(1)-14(n).

[0019] The interface system 28 in each of the client devices 14(1)-14(n) is used to operatively couple and communicate between the client devices 14(1)-14(n), the knowledge processing system 12, and the web server devices 16(1)-16(n) over the communication networks 18(1) and 18(2), although other types and numbers of communication networks with other types and numbers of connections and configurations can be used.

[0020] The web server devices 16(1)-16(n) provide web content such as one or more pages from one or more web sites for use by one or more of the client devices 14(1)-14(n) via the knowledge processing system 12, although the web server devices 16(1)-16(n) can provide other numbers and types of applications and/or content and can provide other numbers and types of functions. Although the web server devices 16(1)-16(n) are shown for ease of illustration and discussion, other numbers and types of web server systems and devices can be used.

[0021] Each of the web server devices 16(1)-16(n) includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although each of the web server devices 16(1)-16(n) could have other numbers and types of components, parts, devices, systems, and elements in other configurations and locations. The processor in each of the web server devices 16(1)-16(n) executes a program of stored instructions for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, although the processor could execute other numbers and types of programmed instructions.

[0022] The memory in each of the web server devices 16(1)-16(n) stores these programmed instructions for one or more aspects of the present invention as described and illustrated by way of the embodiments described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, can be used for the memory in each of the web server devices 16(1)-16(n).

[0023] The interface system in each of the web server devices 16(1)-16(n) is used to operatively couple and communicate between the web server devices 16(1)-16(n), the knowledge processing system 12, and the client devices 14(1)-14(n) via the communication networks 18(1) and 18(2), although other types and numbers of communication networks with other types and numbers of connections and configurations can be used.

[0024] Although embodiments of the knowledge processing system 12, the client devices 14(1)-14(n), and the web server devices 16(1)-16(n), are described and illustrated herein, each of the client devices 14(1)-14(n), the knowledge processing system 12, and the web server devices 16(1)-16(n), can be implemented on any suitable computer system or computing device. It is to be understood that the devices and systems of the embodiments described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the embodiments are possible, as will be appreciated by those skilled in the relevant art(s).

[0025] Furthermore, each of the systems of the embodiments may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, and micro-controllers, programmed according to the teachings of the embodiments, as described and illustrated herein, and as will be appreciated by those of ordinary skill in the art.

[0026] In addition, two or more computing systems or devices can be substituted for any one of the systems in any of the embodiments. Accordingly, principles and advantages of distributed processing, such as redundancy and replication, can also be implemented, as desired, to increase the robustness and performance of the devices and systems of the embodiments. The embodiments may also be implemented on computer systems that extend across any suitable network using any suitable interface mechanisms and communications technologies, including by way of example only telecommunications in any suitable form (e.g., voice and modem), wireless communications media, wireless communications networks, cellular communications networks, 3G communications networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.

[0027] The embodiments may also be embodied as a non-transitory computer readable medium having instructions stored thereon for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, as described herein, which when executed by a processor, cause the processor to carry out the steps necessary to implement the methods of the embodiments, as described and illustrated herein.

[0028] An embodiment of the process to implement the present disclosure will now be explained using Fig 2. In an embodiment, the present process may be implemented for specific domains. It discloses providing training data. The training data may be a dataset from the required domain. In one embodiment, initially an SME or any other person can create and provide training data for the required domain (201). This training data can be processed as appropriate before being finalized (202). In one embodiment it can be reviewed continuously until it meets pre-decided requirements (203). In one embodiment, data templates can be used along with subject matter expert inputs for creating the dataset.

[0029] In another embodiment, a domain data creation model may alternatively be used for generating the training data. The domain data creation model may be an ML-based model. The trained ML domain data generation model may then be used to generate more realistic domain-specific data. In one embodiment, a grammar correction AI model may be used for better-quality domain data generation. In another embodiment, informal or formal conversations, paraphrasing, active-to-passive conversion, and other domain data augmentation techniques may be used for data creation. Accordingly, the model (206) may be trained for generating the required data (205).

[0030] In an embodiment, a data generation AI model for a specific required domain can be used. For instance, if a user in the telecom industry uses the present technology, a telecom data generation model can be trained appropriately and then tuned for the relevant requirements.

[0031] The generated training data may be annotated with one or more predefined labels (204). Data labelling may focus on creating good data rather than big or huge data. In an embodiment this includes label consistency: whenever there is disagreement, a pipeline may ensure analysis of the definition of input x and output y and its coverage. In an embodiment, the labelling may at the end go through multiple processing steps including voting, reviewing, and building consensus. In an embodiment label consistency may be maintained; there may be a pipeline (208) which ensures that the definition of input x and output y and its coverage is used.
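The voting-and-consensus labelling step described above could be sketched as follows. This is a minimal Python illustration; the label names and the simple majority rule are assumptions, not taken from the disclosure:

```python
from collections import Counter

def consensus_label(annotations):
    """Majority-vote consensus over labels from multiple reviewers.

    Returns the winning label and a flag indicating whether the
    reviewers were unanimous; disagreements can be routed back
    through the review pipeline.
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label, votes == len(annotations)

# Two reviewers agree, one disagrees: the majority label wins,
# but the disagreement flag marks it for further review.
label, unanimous = consensus_label(["billing_issue", "billing_issue", "network_issue"])
# label == "billing_issue", unanimous == False
```

In practice the non-unanimous cases would feed the review-and-consensus pipeline (208) rather than being accepted outright.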

[0032] In an embodiment, only a predecided portion of the training data may be labelled. The user may decide the portion of the training data to be labelled based on the requirements, the environment, or any other related parameter.

[0033] In an embodiment, once the training data is generated (207) using the trained data generation AI model, a language model is trained using the training data (209). In an embodiment, unsupervised training may be used for the language models. Unsupervised learning may provide the advantage of a better understanding of the domain literature. Appropriate raw data from data sources may be used for the unsupervised learning. In an embodiment supervised learning may be done for the language models using the labelled data.

[0034] In an embodiment, for unsupervised learning the language model may be parsed. This may enable appropriate training of the language model using raw domain data.

[0035] In an embodiment, further training of the language model may be needed for another, or second-level, domain training as appropriate.

[0036] Any existing language model may be used and trained further for the present requirements. In an example, the BERT language model may be used. In an embodiment, the following process may be used for training the language model.

[0037] Any personal information may be removed and the text may be normalized. For each transcript, CLS and SEP tokens may be added. Ex: "[CLS] " + transcript + "[SEP]".

[0038] The text may be made case sensitive and tokenized. Tokens may be randomly replaced with the MASK token. The model is loaded and finetuned. Alternatively, a language model is created by initializing the weights with pretrained model weights.
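The [CLS]/[SEP] wrapping and random MASK replacement described in paragraphs [0037]-[0038] could be sketched as below. Whitespace splitting stands in for a real subword tokenizer, and the 15% masking probability is an assumption (a conventional default for masked language modelling, not stated in the disclosure):

```python
import random

CLS, SEP, MASK = "[CLS]", "[SEP]", "[MASK]"

def prepare_mlm_example(transcript, mask_prob=0.15, seed=0):
    """Wrap a transcript with [CLS]/[SEP] and randomly replace tokens
    with [MASK], in the style of masked language modelling."""
    rng = random.Random(seed)
    tokens = transcript.split()        # placeholder for a subword tokenizer
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)        # the model must predict this token
            labels.append(tok)         # original token is the training target
        else:
            masked.append(tok)
            labels.append(None)        # position not scored in the loss
    return [CLS] + masked + [SEP], [None] + labels + [None]
```

Each returned pair is one training example: the masked token sequence and the per-position prediction targets.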

[0039] Loss may be calculated using various loss functions. Using the loss, gradients may be calculated with respect to all trainable parameters.

[0040] Using the calculated gradients, the weights may be updated using an optimizer. The updated weights and the trained graph may be saved for later use. In one embodiment the above process enables unsupervised training of language models.
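The gradient-based weight update in paragraphs [0039]-[0040] amounts to an optimizer step. A plain stochastic-gradient-descent update is sketched below as a stand-in for whatever optimizer an implementation actually uses:

```python
def sgd_step(weights, gradients, lr=0.01):
    """One optimizer step: move each weight against its gradient."""
    return [w - lr * g for w, g in zip(weights, gradients)]

# Toy example: minimizing (w - 1)^2 starting from w = 2.0.
w = [2.0]
grad = [2 * (w[0] - 1.0)]      # derivative of (w - 1)^2
w = sgd_step(w, grad, lr=0.1)  # w moves one step toward 1.0
```

Real training would use an adaptive optimizer and save the updated weights after each epoch, as the paragraph notes.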

[0041] In an embodiment the trained data generation AI model is used to generate training data for the context needed by a user. In one embodiment the generated training data can relate to issue-cause-resolution. Alternatively, the training data can be a specific conversation generation. The training data may also be a task-specific conversation (210, 211). Accordingly, a user can train the data generation AI model as per the context or the requirements. The requirements for data generation are not limited and can vary as per the requirements of the user. The language model may then be trained and tuned according to the training data generated by the data generation AI model.

[0042] The requirements-based data generation can be done using tokenizing, a forward pass, loss calculation, a backward pass, and other processes common to AI models. The weights can be adjusted as per the requirements. For instance, for domain-specific training of the AI model, the weights can be initialized with pretrained domain-specific model weights. The labelling and tokenizing of the data may also be configured as per the requirements.

[0043] In an embodiment, once the data generation AI model and language model have been appropriately processed, they can be used for further requirements. In one embodiment, the trained and tuned language models may be used for evaluation of a caller or an agent participating in a call. Call disposition data may be used after a call to evaluate a participant of the call (210/211).

[0044] For instance, in one embodiment, customer care call agents can be evaluated using call disposition data for a call they may have with a user. Similarly, any call log, other log data, or text can be evaluated using the trained language model.

[0045] The AI models as trained above may be tested and evaluated on a test dataset (212). The dataset generated above may be used for learning and testing as appropriate. In an embodiment 80% of the data may be used for learning and 20% of the data may be used for testing. The models can then be deployed for live implementation (213).
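The 80/20 learning/testing split mentioned above could be sketched as a shuffled slice; the fixed seed is an assumption added only for reproducibility:

```python
import random

def train_test_split(dataset, train_fraction=0.8, seed=42):
    """Shuffle the generated dataset and split it into learning
    and testing portions (80/20 in the described embodiment)."""
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)                       # avoid ordering bias
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

train, test = train_test_split(range(100))
# len(train) == 80, len(test) == 20
```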

[0046] For the purpose of explanation, this document considers the evaluation of a customer service agent using the language model. In an embodiment, a user may decide some parameters for the evaluation. In an embodiment, the user may use the parameters below. These parameters can be modified as per the requirements and domain, along with other factors. This may not be an exhaustive list, and a user may add or remove parameters as appropriate, following the process explained in the present disclosure.

[0047] In an embodiment, a participant in a call may be evaluated for professional behavior. Professional behavior may include evaluation based on politeness, professionalism, and rapport through personalization and empathy. The AI models discussed above may be appropriately trained for this evaluation.

[0048] In an embodiment, a participant in a call may be evaluated for discovery. It may include the capability to understand the issues faced by a caller. It may include parameters such as listening; repeating the discussed issues; using system insights; asking open-ended questions; and probing and closed questions. The AI models discussed above may be appropriately trained for this evaluation.

[0049] In an embodiment, a participant in a call may be evaluated for recommendation. It may include recommending a solution reasonably aligned with the customer's needs based on discovery, as discussed in the previous paragraph. The AI models discussed above may be appropriately trained for this evaluation.

[0050] In an embodiment, a participant in a call may be evaluated for overcoming objections. It may include a brand value statement; a comparison statement; a clarification statement; and a reassurance statement. The AI models discussed above may be appropriately trained for this evaluation.

[0051] In an embodiment, a participant in a call may be evaluated for closure. It may include recapping the solution, asking if there are additional issues, and thanking the caller.

[0052] In an embodiment, the call participant is ranked or evaluated on the above parameters using the AI model. The AI model may be trained to intelligently learn and evaluate the agent.

[0053] In one embodiment, the data generation AI model can be configured to generate test data for each of the listed parameters. The language model can be trained on each of the generated datasets, which can be used over the call log, the call disposition, or in any other evaluation requirement.

[0054] For each of these parameters, the call log may be analyzed and a score may be generated for the customer agent.
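A simple aggregation of the per-parameter scores from paragraphs [0047]-[0054] could look like the following. The parameter names, the 0-100 scale, and the equal default weighting are illustrative assumptions, not taken from the disclosure:

```python
def evaluate_agent(parameter_scores, weights=None):
    """Combine per-parameter scores into a single agent score
    as a weighted average; equal weights by default."""
    if weights is None:
        weights = {p: 1.0 for p in parameter_scores}
    total = sum(weights[p] for p in parameter_scores)
    return sum(s * weights[p] for p, s in parameter_scores.items()) / total

score = evaluate_agent({
    "professional_behavior": 90,
    "discovery": 80,
    "recommendation": 70,
    "overcoming_objections": 85,
    "closure": 95,
})
# score == 84.0, the equally weighted mean of the five parameters
```

A user could pass a custom `weights` mapping to emphasize, say, discovery over closure, matching the configurable parameters described above.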

[0055] An embodiment of the computing architecture to implement the present process will now be explained using Fig 3. In an embodiment a natural language input (301) may be provided. In an example, this may be a file from a call recording. This input may be provided to a user interface (302) which may be configured to handle and process natural language input in audio, video, or textual format. It may be a multi-modal or micro frontend which adjusts itself based on the screen size and dimensions of the device. It is an independent module and can run on web, mobile, etc.

[0056] In one embodiment, a Smart Assistant Configurator (303) may be provided for an administrator to configure various consumer-agent interactions and to take actions and derive insights based on certain inputs. It may help the administrator filter agent performance based on various parameters.

[0057] In an embodiment, the architecture may include a training engine (304). In an embodiment, the training engine may include various components configured and integrated through various appropriate means to implement the process as explained earlier.

[0058] In an embodiment, the training engine may include a data collection and creation component (304.1). This may include the data generation AI model, which may be trained as explained earlier. This component may also receive input from the user interface (302) to generate datasets for training. It may be configured to get initial input training data. The initial data may be provided by an SME or by any other system. The data collection and creation component may include a data creation AI model that can be trained using the initial data. The AI model can then be tuned to generate the required data. The required data may be specific to a domain, a user, a task, or any other way the user wants.

[0059] In an embodiment a data visualization and validation component (304.2) may annotate, label, and tune the data. Data quality check guidelines may be based on one or more of: open-source license and copyright checks (for instance FOSS checks, internal tools); analysis of high-predictability data signals; data coverage across different contexts and diversity; data consistency and its types; data outlier detection; and other such appropriate parameters as per user requirements.
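A few of the listed quality checks can be sketched in code. This is a minimal illustration under stated assumptions: the required fields, the length-based outlier threshold, and the record shape are all hypothetical, not part of the disclosed system.

```python
# Minimal sketch of validation checks from (304.2): consistency
# (required fields present), outlier detection (length-based), and
# label coverage. Field names and thresholds are illustrative.

def validate_records(records, required=("text", "label"), max_len=500):
    """Return (issues, coverage): per-record problems and labels seen."""
    issues = []
    for i, r in enumerate(records):
        if any(k not in r for k in required):
            issues.append((i, "missing_field"))
        elif len(r["text"]) > max_len:
            issues.append((i, "length_outlier"))
    coverage = {r["label"] for r in records if "label" in r}
    return issues, coverage

issues, coverage = validate_records([
    {"text": "short call", "label": "billing"},
    {"text": "x" * 600, "label": "network"},
    {"text": "no label here"},
])
```

The coverage set gives a quick view of how many distinct contexts the generated data spans, supporting the diversity check mentioned above.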

[0060] This data may then be transferred to a Domain Model component (304.4), which may be used for training the language models for a particular domain. This component may help train a language model, using the generated and tuned data, specific to the requirement of the user, finally yielding a trained language model.

[0061] In another embodiment, the computing architecture may further include an inference engine (305). The inference engine may be connected to the trained AI models through a database (306) or any appropriate file system (307). The inference engine may receive inputs from the user interface (302). It may use the AI models provided by the training engine, which may be stored in the database or accessed directly through the file system. The inference engine may then apply the data and the models for agent performance scoring using an appropriately configured component (305.1). This component may be configured to define the scores that can be provided to a candidate, according to the user-desired parameters that are to be used for scoring the candidate. The inference engine may also use the call disposition component (305.2) independently or for assisting in the scoring. The agent scoring component may score agents, or any participant of a call, based on different segments and their elements. The call disposition component may identify the issue, cause, and resolution. In an embodiment, based on training, the AI model may identify the call disposition (issue, cause, resolution) and participant scoring parameters; once these elements are identified for each interaction, they may be placed under pre-defined segments, and a weighted average may be calculated accordingly based on element frequency.
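The frequency-weighted segment scoring described above can be sketched as follows. The segment names, the element-to-segment mapping, and the example frequencies are illustrative assumptions; only the idea of a frequency-weighted average over pre-defined segments comes from the text.

```python
# Sketch of the scoring in (305.1)/(305.2): each identified element is
# placed under a pre-defined segment, and a weighted average per segment
# is computed using element frequency as the weight. Mapping is illustrative.

SEGMENT_OF = {
    "greeting": "opening",
    "issue": "diagnosis",
    "cause": "diagnosis",
    "resolution": "closure",
}

def weighted_segment_scores(element_scores, element_freq):
    """Frequency-weighted average score per segment."""
    totals, weights = {}, {}
    for elem, score in element_scores.items():
        seg = SEGMENT_OF[elem]
        f = element_freq.get(elem, 1)
        totals[seg] = totals.get(seg, 0.0) + score * f
        weights[seg] = weights.get(seg, 0) + f
    return {seg: totals[seg] / weights[seg] for seg in totals}

scores = weighted_segment_scores(
    {"issue": 0.9, "cause": 0.5, "resolution": 0.8},
    {"issue": 3, "cause": 1, "resolution": 2},
)
```

Elements that occur more often in an interaction pull their segment's average harder, which is one plausible reading of "weighted average calculated based on element frequency."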

[0062] Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Documents

Application Documents

# Name Date
1 202241010779-STATEMENT OF UNDERTAKING (FORM 3) [28-02-2022(online)].pdf 2022-02-28
2 202241010779-REQUEST FOR EXAMINATION (FORM-18) [28-02-2022(online)].pdf 2022-02-28
3 202241010779-POWER OF AUTHORITY [28-02-2022(online)].pdf 2022-02-28
4 202241010779-FORM 18 [28-02-2022(online)].pdf 2022-02-28
5 202241010779-FORM 1 [28-02-2022(online)].pdf 2022-02-28
6 202241010779-DRAWINGS [28-02-2022(online)].pdf 2022-02-28
7 202241010779-DECLARATION OF INVENTORSHIP (FORM 5) [28-02-2022(online)].pdf 2022-02-28
8 202241010779-COMPLETE SPECIFICATION [28-02-2022(online)].pdf 2022-02-28
9 202241010779-Proof of Right [19-04-2022(online)].pdf 2022-04-19
10 202241010779-FORM 3 [05-08-2022(online)].pdf 2022-08-05
11 202241010779-Power of Attorney [11-05-2023(online)].pdf 2023-05-11
12 202241010779-Form 1 (Submitted on date of filing) [11-05-2023(online)].pdf 2023-05-11
13 202241010779-Covering Letter [11-05-2023(online)].pdf 2023-05-11