Abstract: SYSTEM AND METHOD OF NATURAL LANGUAGE PROCESSING. The present disclosure relates to a system and method for natural language processing. The method comprises receiving, at an input module [102], a user dialog via an input interface of a user device. Next, a tokenisation module [104] identifies one or more tokens in the user dialog. A parts-of-speech module [106] recognises a part of speech for the one or more tokens, and a chunker module [108] maps the one or more tokens to one or more sub-sequences. A rule engine [110] co-references the one or more tokens, identifies them as one of at least one action, at least one target and at least one predicate based on the part of speech and the co-reference, and determines at least one intent. A natural language processor [112] determines a response to the user dialog based on the at least one intent.
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
AND
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD OF NATURAL LANGUAGE
PROCESSING”
We, Reliance Jio Infocomm Limited, an Indian National of, 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present disclosure generally relates to dialog systems and more specifically relates to a method and a system for natural language processing and accordingly providing at least one response to a user dialog.
BACKGROUND
The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
Dialog systems allow a user to communicate (e.g., have a conversation) with an assistant using a combined menu-based and natural-language-based user interface. In a menu-based dialog system, users are restricted to choosing their input to the assistant via a menu-based user interface that may offer only a limited number of query options to choose from, and which may not cover the user's query properly. Further, a menu-based dialog system may utilize a hierarchically organized sequence of menus to guide the user through all of the different actions that the assistant can perform in a step-by-step manner, which may be time-consuming.
On the other hand, in the natural-language-based dialog system, the user is free to dynamically (e.g., on-the-fly) choose between generating their input to the assistant either using a menu-based user interface that is navigated by the user, or using natural language that is either typed or spoken by the user.
Accordingly, when a user is communicating via a natural-language-based dialog system, they may encounter the need to input a large number of characters or speak a large number of words to accomplish the desired action.
Existing solutions provide context-based processing of the user’s query and allow processing user requests, which are not reasonably understandable if taken alone or in isolation, by identifying a speech or environmental context that encompasses the user requests. Another existing method suggests parsing an input stream of words by using word isolation, morphological analysis, dictionary look-up and grammar analysis.
However, none of the existing solutions for a conventional natural language dialog system determines the correct meaning of the words detected by the speech recognition system and segregates the linguistics of simple tasks from complex expressions to identify multiple intents of a user, which may lead to substantial delays, as the user is required to restate the entire sentence or command. The primary cause of such delay is the finite speed of the processing resources as compared with the vast amount of information to be processed by such dialog systems. For example, in many conventional speech recognition programs, the time required to recognize an utterance is extended due to the size of the dictionary file being searched. An additional drawback of conventional speech recognition and natural language processing dialog systems is that they are not interactive, and are thus unable to cope with new situations.
Thus, there is a need in the art for a system and method of natural language processing for efficient identification and segregation of keywords in the natural language processing system, to ensure adequate understanding of the context of a given text along with its appropriate response. Therefore, in view of the above shortcomings in the existing approaches, there is a need in the art for an efficient system and method for natural language processing.
SUMMARY
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In order to overcome at least a few problems associated with the known solutions as provided in the previous section, an object of the present invention is to provide a system and a method for natural language processing. Another object of the present disclosure is to provide a dialog system for enhancing the user experience of the conventional system by catering to complex expressions which are not identified by the voice assistants. Yet another object of the present disclosure is to respond to the complete complex query of a user by breaking it down to smaller simpler queries. Yet another object of the present disclosure is to provide additional functionality of seamless conversation between the user and the dialog system. Yet another object of the present disclosure is to provide a system and method to identify multiple single task/intent per expression via an ATP (Action-Target-Predicate) mechanism. Yet another object of the present disclosure is to provide contextual meaning by generating co-reference in a user dialog.
In order to achieve at least some of the above-mentioned objectives, the present disclosure provides a method and system for natural language processing. A first aspect of the present invention relates to a method for natural language processing. The method comprises receiving, at an input module, a user dialog via an input interface of a user device. Subsequently, a tokenisation module identifies one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. Next, a parts-of-speech (POS) module recognises a part of speech for each of the one or more tokens. Further, a chunker module maps each of the one or more tokens to one or more sub-sequences of the user dialog. Next, a rule engine co-references the one or more tokens in the one or more sub-sequences. Furthermore, the rule engine identifies each of the one or more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference. Subsequently, the rule engine processes the at least one action, the at least one target and the at least one predicate to determine at least one intent. Thereafter, a natural language processor determines a response to the user dialog based on the at least one intent, via an output interface of the user device.
Another aspect of the present disclosure relates to a system for natural language processing. The system comprises an input module, a tokenisation module, a parts-of-speech module, a chunker module, a rule engine and a natural language processor. The input module is configured to receive a user dialog via an input interface of a user device. The tokenisation module is connected to the input module, said tokenisation module configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. The parts-of-speech module is connected to the input module and the tokenisation module, said parts-of-speech module configured to recognise a part of speech for each of the one or more tokens. The chunker module is connected to the input module, the tokenisation module and the parts-of-speech module, said chunker module configured to map the one or more tokens to one or more sub-sequences of the user dialog. The rule engine is connected to the input module, the tokenisation module, the parts-of-speech module and the chunker module, said rule engine configured to co-reference the one or more tokens in the one or more sub-sequences. The rule engine is also configured to identify the one or
more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference. The rule engine is also configured to process the at least one action, the at least one target and the at least one predicate to determine at least one intent. The natural language processor is connected to the input module, the tokenisation module, the parts-of-speech module, the chunker module and the rule engine, said natural language processor configured to determine a response to the user dialog based on the at least one intent. Yet another aspect of the present disclosure relates to a user device. The user device comprises an input module, a tokenisation module, a parts-of-speech module, a chunker module, a rule engine and a natural language processor. The input module is configured to receive a user dialog via an input interface. The tokenisation module is connected to the input module, said tokenisation module configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. The parts-of-speech module is connected to the input module and the tokenisation module, said parts-of-speech module configured to recognise a part of speech for each of the one or more tokens. The chunker module is connected to the input module, the tokenisation module and the parts-of-speech module, said chunker module configured to map the one or more tokens to one or more sub-sequences of the user dialog. The rule engine is connected to the input module, the tokenisation module, the parts-of-speech module and the chunker module, said rule engine configured to co-reference the one or more tokens in the one or more sub-sequences. The rule engine is also configured to identify the one or more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference.
The rule engine is also configured to process the at least one action, the at least one target and the at least one predicate to determine at least one intent. The natural language processor is connected to the input module, the tokenisation
module, the parts-of-speech module, the chunker module and the rule engine, said natural language processor configured to determine a response to the user dialog based on the at least one intent.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the invention. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
Fig.1 illustrates an exemplary block diagram of a system [100] for natural language processing, in accordance with exemplary embodiments of the present disclosure.
Fig. 2 illustrates an exemplary method flow diagram depicting a method [200] for natural language processing, in accordance with exemplary embodiments of the present disclosure.
Fig. 3 illustrates an exemplary signal flow diagram [300] depicting an exemplary implementation of the method for natural language processing, in accordance with exemplary embodiments of the present disclosure.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
As used herein, a “user device” or “user equipment” refers to any electrical, electronic, electromechanical or computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing one or more functions, communicating with other user devices and transmitting data to the other user devices. The user device may have a processor, a display, a memory unit, a battery and an input means such as a hard keypad and/or a soft keypad. The user device may be capable of operating on any radio access technology, including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, etc. The user device may operate at all seven layers of the ISO reference model, and may also work on the application layer along with the network, session and presentation layers, with any additional features of a touch screen, an apps ecosystem, physical and biometric security, etc. For instance, the user devices may include, but are not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a pager, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art.
The user device may also have an interface, which typically includes a display, with or without a keypad, including a set of alpha-numeric (ITU-T type) keys that may be real keys or virtual keys. The input interface also comprises touch/acoustic/video components for touch/sound/video input and output. The output interface may comprise a microphone, a speaker, a camera and, additionally, audio/video I/O ports in an accessories interface, wherein the speaker normally serves to provide acoustic output in the form of human speech, ring signals, music, etc.
As used herein, a “processor” or “module” includes at least one processor, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or module is a hardware processor.
As used herein, a “controller” or “control unit” includes at least one controller, wherein a controller refers to any logic circuitry for processing instructions. A controller may be a general-purpose controller, a special-purpose controller, a conventional controller, a digital signal controller, a plurality of microcontrollers, one or more microcontrollers in association with a DSP core, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The controller may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the controller or control unit is a hardware processor.
The present disclosure provides a system and a method for natural language processing. The present invention aims to respond to a complex query of a user by breaking it down into smaller, simpler queries, and to determine the similarity between textual inputs, providing multi-sentence classification using intent.
As used herein, a “network entity” is an entity that serves a cellular network for providing voice services (calls) and the data services to the user equipment. The network entity may include, but not limited to, a base station controller, a base transceiver station, a cell site, a Node B, an eNodeB, a radio network controller, and any such entity obvious to a person skilled in the art.
As used herein, “natural language processing” generally refers to determining a conceptual “meaning” (e.g., what meaning the speaker intended to convey) of the detected words by analyzing their grammatical relationship and relative context. Natural Language Processing (NLP) involves the lexical, syntactic (grammatical) and semantic domain analysis of the user input, using both statistical observations of the various surface forms and a broader interpretation of the relationships and dependencies among words, phrases and concepts. Dialog systems process the detected words using such a natural language processing system.
As used herein, “assistants” are generally designed to convincingly simulate how a human would interact and behave as a conversational/chat partner. A general goal of an assistant is to provide value and ease of use to users by trying to understand what they want and then providing them with the information they need or performing the action(s) they are requesting. Beyond this general goal, some sophisticated assistants also attempt to pass the conventional Turing Test and thus make each user that is communicating with the chatbot think that they are talking to another person rather than interacting with a computer program. As used herein, “language granularity” refers to the fact that a limited assortment of words can only be arranged in so many ways, which severely limits the range and detail of things that it is possible to say. These points of possible meaning in a vast sea of things that cannot be said are different in each human language, and are radically different in computers, making correct translation impossible, except in some rare instances. This phenomenon severely limits the prospects for human-level machine understanding.
The term “dialog system” refers to one or more of the following: chat information system, spoken dialogue system, conversational agent, chatter robot, chatterbot, chatbot, chat agent, digital personal assistant, automated online assistant, and so forth.
Referring to Fig. 1, there is illustrated a general architecture of the system [100] for natural language processing in which the present invention is implemented, in accordance with exemplary embodiments of the present disclosure. The system [100] comprises an input module [102], a tokenisation module [104], a parts-of-speech module [106], a chunker module [108], a rule engine [110], a natural language processor [112], a preprocessing module [114], an output module [116] and a memory unit [118]. The present invention encompasses that all the components of the system [100] for natural language processing are connected
to each other, and work in conjunction to achieve the objectives of the present invention.
The input module [102] of the system [100] is configured to receive a user dialog via an input interface of a user device. The input module [102] may be previously connected to the user device; accordingly, when the input interface of the user device receives an input from a user, say a user dialog, the input interface of the user device transmits the user dialog to the input module [102]. For instance, when the user provides, or speaks, the phrase “What is the time now?” to the user device, the same is received at the input interface of the user device, which then transmits it to the input module [102]. The present invention encompasses that the user dialog is a natural language speech input.
In another instance, the present invention also encompasses that a user may provide a first input to invoke the system [100] for natural language processing. For instance, the user may provide a speech input for invoking the system [100], like the user may speak “Good Morning” into the user device in order to invoke the system [100] for natural language processing.
The input module [102] then transmits the received user dialog to the tokenisation module [104]. As used herein, “tokenisation” refers to the process of dividing the input into tokens, i.e., into groups of words. The tokenisation also takes into consideration any punctuations, abbreviations, dates, times, numbers, typographical errors, etc., while breaking the input into smaller groups of words.
Accordingly, the tokenisation module [104] is configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. Thus, the tokenisation module [104] divides the user dialog, say a long sequence of words, into tokens by identifying the white spaces occurring in the sequence of the sentence. The white spaces may include, but are not limited to, pauses, punctuations, abbreviations, dates, times, numbers and typographical errors. For instance, if a user speaks “What is the weather like in Ayodhya?”, takes a pause and continues to speak “How about Delhi?”, the tokenisation module [104] may divide the user dialog into two groups, one being “What is the weather like in Ayodhya?”, and the other being “How about Delhi?”.
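By way of a non-limiting illustration, the pause-based splitting of a user dialog into token groups may be sketched in Python as follows; the splitting rule shown (sentence-ending punctuation standing in for spoken pauses) is an assumption for illustration only, not the claimed implementation:

```python
import re

def tokenise(dialog: str) -> list:
    # Split the user dialog into token groups at sentence-ending
    # punctuation, which here stands in for pauses in spoken input.
    groups = re.split(r'(?<=[.?!])\s+', dialog.strip())
    return [g for g in groups if g]

print(tokenise("What is the weather like in Ayodhya? How about Delhi?"))
# → ['What is the weather like in Ayodhya?', 'How about Delhi?']
```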
The tokenisation module [104] then transmits the identified one or more tokens for the user dialog to the parts-of-speech module [106]. According to grammar, there are eight parts of speech, comprising a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction and an interjection. The parts-of-speech module [106] is thus configured to recognise a part of speech for each of the one or more tokens. The parts-of-speech module [106] identifies that each of the one or more tokens is one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection, or a combination thereof.
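A toy lexicon-based tagger illustrates the recognition of the eight parts of speech; the word list below is a hypothetical miniature lexicon for illustration, not the module's actual vocabulary:

```python
# Hypothetical miniature lexicon covering the eight parts of speech.
POS_LEXICON = {
    "alarm": "noun", "me": "pronoun", "set": "verb", "happy": "adjective",
    "quickly": "adverb", "for": "preposition", "and": "conjunction",
    "wow": "interjection",
}

def tag(tokens):
    # Look each token up in the lexicon; unknown words stay unresolved.
    return [(t, POS_LEXICON.get(t.lower(), "unknown")) for t in tokens]

print(tag(["Set", "alarm", "for", "me"]))
# → [('Set', 'verb'), ('alarm', 'noun'), ('for', 'preposition'), ('me', 'pronoun')]
```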
Simultaneously, the tokenisation module [104] also transmits the identified one or more tokens for the user dialog to the chunker module [108]. The chunker module [108] is configured to map the one or more tokens to one or more sub-sequences of the user dialog. A sub-sequence of a user dialog may be defined as a beginning phase, an intermediate phase or an end phase of the user dialog. Identifying a sub-sequence, i.e., the placement, of the one or more tokens in the user dialog also aids the natural language processor in identifying an importance factor of the one or more tokens and in forming a context of the user dialog. The present invention further encompasses that the chunker module [108] maps the one or more tokens to one or more sub-sequences of the user dialog based on the semantics of the user dialog.
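The mapping of token groups to the beginning, intermediate and end phases may be sketched as follows; in this simplified illustration, position within the dialog is the only cue used (the claimed chunker also considers semantics):

```python
def chunk(token_groups):
    # Assign each token group a sub-sequence phase of the user dialog
    # based purely on its position: beginning, intermediate, or end.
    n = len(token_groups)
    phases = []
    for i, group in enumerate(token_groups):
        if i == 0:
            phase = "beginning"
        elif i == n - 1:
            phase = "end"
        else:
            phase = "intermediate"
        phases.append((group, phase))
    return phases

print(chunk(["It's Sita's birthday tomorrow", "add a reminder", "send SMS to Ram"]))
```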
The parts-of-speech module [106] and the chunker module [108] then transmit their output to the rule engine [110]. Accordingly, the parts-of-speech module [106] transmits the recognised part of speech for each of the one or more tokens, to the rule engine [110]. The chunker module [108] transmits the information regarding the one or more sub-sequences of the user dialog for each of the one or more tokens to the rule engine [110].
The rule engine [110] is configured to co-reference the one or more tokens in the one or more sub-sequences. By co-referencing the one or more tokens, the rule engine [110] identifies the dependency of each of the one or more tokens on the other tokens. The co-referencing of the one or more tokens in the one or more sub-sequences is based on the one or more tokens recognised as one of a pronoun and a noun. In an instance of the present invention, the rule engine may check whether the one or more tokens refer to I, me, he, she, herself, you, it, that, they, each, few, many, who, whoever, whose, someone, everybody, etc., in which case the one or more tokens are a pronoun. For example, if the user speaks “Please set up an alarm for Amar to wake up at 6.00 am and also put Potatoes on his reminders list”, the rule engine determines that the one or more tokens of the user dialog refer to a proper noun “Amar” and a pronoun “his”, and the rule engine [110] then replaces “his” with “Amar”.
In another example, if a user speaks “It’s Sita’s birthday tomorrow add a reminder to get six chocolates and a cake for her at noon and send SMS to Ram to get the balloons”, the rule engine identifies the pronouns and the nouns in the user dialog; here the pronouns are “her” and “I”, and the proper nouns are “Sita” and “Ram”. The rule engine then identifies the tokens related to the pronouns and proper nouns. The rule engine [110] relates “her” and “Sita” with the tokens “birthday tomorrow”. The rule engine [110] relates “I” with “reminder to get six chocolates and a cake for her at noon”. The rule engine [110] relates “Ram” with “to get the balloons”.
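A deliberately simple resolution heuristic, replacing each pronoun with the most recent preceding capitalised proper noun, illustrates the “his” to “Amar” substitution described above; this is a sketch under strong simplifying assumptions, not the rule engine's actual logic:

```python
PRONOUNS = {"he", "she", "his", "her", "him", "it", "they", "them"}

def coreference(tokens):
    # Replace each pronoun with the most recently seen proper noun,
    # using capitalisation as a crude proper-noun test.
    resolved, last_proper = [], None
    for tok in tokens:
        if tok.lower() in PRONOUNS and last_proper is not None:
            resolved.append(last_proper)
        else:
            if tok[:1].isupper():
                last_proper = tok
            resolved.append(tok)
    return resolved

print(coreference(["set", "alarm", "for", "Amar", "and", "add", "to", "his", "list"]))
# → ['set', 'alarm', 'for', 'Amar', 'and', 'add', 'to', 'Amar', 'list']
```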
The rule engine [110] is configured to identify each of the one or more tokens of the user dialog as at least one action, at least one target and at least one predicate. The rule engine [110] identifies at least one action, at least one target and at least one predicate for each of the one or more tokens of the user dialog based on the part of speech for each of the one or more tokens received from the parts-of-speech module [106] and the co-reference. The rule engine [110] uses the inputs from the parts-of-speech module [106] and the co-reference to identify the relevance of each of the one or more tokens in the sequence of words of the user dialog.
The present invention further encompasses that each of the one or more tokens is identified as one of at least one action based on a comparison with a database comprising one or more predefined actions. The present invention encompasses that the rule engine [110] may use linguistic rules well known in the Grammar of a language to achieve the objectives of the present invention.
In another instance, the present invention encompasses that each of the one or more tokens is identified, by the rule engine [110], as one of at least one action based on a comparison with a database comprising one or more predefined actions. For instance, each of the one or more tokens may be compared with an Action-Target-Predicate (ATP) hash set for extracting action items. For example, the database may contain customised verbs like set, recharge, message, buy, tell, etc.
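The hash-set comparison may be sketched as follows; the verbs listed mirror the customised verbs named above, while the set itself and the function name are hypothetical:

```python
# Hypothetical ATP action set containing customised verbs.
PREDEFINED_ACTIONS = {"set", "recharge", "message", "buy", "tell",
                      "remind", "add", "send"}

def extract_actions(tokens):
    # An O(1) membership test against the hash set flags action tokens.
    return [t for t in tokens if t.lower() in PREDEFINED_ACTIONS]

print(extract_actions(["Please", "set", "an", "alarm", "and", "send", "an", "SMS"]))
# → ['set', 'send']
```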
While identifying the at least one action, the at least one target and the at least one predicate in the user dialog, the rule engine [110] takes into consideration punctuations, conjunctions, any special characters, sentence enders, etc. For instance, if the rule engine [110] identifies conjunctions, any special characters, sentence enders (like then, also, etc.), etc., being present in the user dialog, it terminates the process of identifying the at least one action, the at least one target and the at least one predicate in the user dialog; else, it continues to identify the at least one action, the at least one target and the at least one predicate in the user dialog.
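Terminating one ATP identification pass at a conjunction or sentence ender effectively partitions the user dialog into clauses; a simplified sketch follows, in which the boundary-word list is an assumption for illustration:

```python
import re

# Illustrative boundary words: conjunctions and sentence enders.
BOUNDARIES = r'\b(?:and|then|also)\b'

def split_clauses(dialog: str):
    # End the current ATP identification at each boundary word and
    # continue with a fresh clause.
    return [c.strip() for c in re.split(BOUNDARIES, dialog) if c.strip()]

print(split_clauses("add a reminder to get a cake and send SMS to Ram"))
# → ['add a reminder to get a cake', 'send SMS to Ram']
```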
The present invention further encompasses that, after the rule engine [110] has identified the at least one action, the rule engine [110] identifies one or more tokens as the at least one predicate present in the user dialog. Accordingly, the rule engine [110] then relies on the co-reference to identify any proper noun or pronoun present in the user dialog, and may recognise said proper noun and pronoun as the at least one target. Subsequently, the rule engine [110] may map the at least one predicate and the at least one action to the proper noun or the pronoun. For instance, if a user speaks “Hello remind me to get a cake for Sita, it’s her birthday”, the rule engine identifies that the at least one target is “me”, being the user himself, that “Sita” is a proper noun, that “remind” and “get” are the actions, and that “a cake for Sita’s birthday” is the predicate.
The rule engine [110] is also configured to process the at least one action, the at least one target and the at least one predicate to determine at least one intent. The present invention further encompasses that the rule engine [110] may determine multiple intents with respect to the user query. The “at least one intent”, as used herein, may refer to an intention or purpose of the user that may be derived/identified from the at least one action, the at least one target and the at least one predicate.
For example, if a user speaks “Remind me to get a cake for Sita, it’s her birthday”, the rule engine identifies that the at least one target is “me”, being the user himself, that “Sita” is a proper noun, that “remind” and “get” are the actions, and that “a cake for Sita’s birthday” is the predicate. The rule engine [110] determines that the intent is to remind the user to get a cake. The rule engine [110] then transmits the at least one intent, along with the at least one action, the at least one target and the at least one predicate, to the natural language processor [112].
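The combination of the ATP triple into a single intent may be sketched as a simple record; the schema shown is an assumption for illustration only:

```python
def determine_intent(action, target, predicate):
    # Fold the action-target-predicate triple into one intent record
    # that downstream processing can act upon.
    return {
        "intent": "{}:{}".format(action, predicate),
        "action": action,
        "target": target,
        "predicate": predicate,
    }

intent = determine_intent("remind", "me", "get a cake for Sita's birthday")
print(intent["intent"])
# → remind:get a cake for Sita's birthday
```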
The natural language processor [112] is, thus, configured to determine a response to the user dialog based on the at least one intent. The natural language processor [112] may also use the at least one action, the at least one target and the at least one predicate along with the at least one intent to determine a response to the user dialog. Thus, the natural language processor [112] aims to provide a response based on the intent of the user, which response can be understood by the user.
The present invention further encompasses that the system [100] for natural language processing further comprises an output module [116]. Accordingly, the natural language processor [112] transmits the response determined by it to the output module [116]. The output module [116] is configured to provide the response to the user dialog via an output interface of the user device. The output module [116] may also be configured to process the response into a format suitable for the output interface of the user device.
For instance, in operation, if a user speaks “It’s Sita’s birthday tomorrow add a reminder to get six chocolates and a cake for her at noon and send SMS to Ram to get the balloons”, the input is received at the input module [102]. The tokenisation module [104] divides the long sequence of words into tokens, like small groups of words. The parts-of-speech module [106] identifies each of the tokens as one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection and a combination thereof. “Sita” and “Ram” are identified as proper nouns, “reminder” and “SMS” are identified as verbs, etc.
The chunker module [108] maps each of the one or more tokens to one or more sub-sequences of the user dialog, to identify the dependence of the one or more tokens on each other. For example, it identifies that the reminder is for Sita’s birthday. The rule engine [110] co-references the tokens and then identifies the at least one action, the at least one target and the at least one predicate, wherein the co-reference is based on the identification of at least one proper noun and at least one pronoun in the user dialog. For example, it identifies that a first action is a reminder and a first predicate is to get chocolates for Sita at noon. It also identifies that a second action is SMS, a second target is Ram, and a second predicate is to get balloons.
Accordingly, the rule engine [110] identifies the multiple intents of the user, namely, to set a reminder for himself to get a cake for Sita’s birthday and to SMS Ram to get balloons. The rule engine [110] transmits the multiple intents to the natural language processor [112], which then provides a response to the user dialog; for example, the natural language processor determines “SMS will be sent to Ram” and “Reminder to get cake set”.
In another instance, the system [100] of the present invention encompasses a preprocessing module [114]. The input module [102] may then transmit the received user dialog to the preprocessing module [114]. The preprocessing module [114] may process the user dialog and convert the received user dialog into a semantic expression.
The present invention further encompasses that the system [100] comprises a memory unit [118]. As used herein, “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory unit (“ROM”), random access memory unit (“RAM”), magnetic disk storage media, optical storage media, flash memory unit devices or other types of machine-accessible storage media.
The memory unit [118] may be configured to store the user dialog received at the input module [102]. The memory unit [118] may also be configured to store the one or more tokens identified by the tokenisation module [104]. The memory unit [118] may also be configured to store the part of speech for each of the one or more tokens recognised by the parts-of-speech module [106]. The memory unit [118] may also be configured to store a mapping of each of the one or more tokens to one or more sub-sequences of the user dialog conducted by the chunker module [108].
The memory unit [118] may also be configured to store the at least one action, the at least one target and the at least one predicate identified by the rule engine [110]. The memory unit [118] may also be configured to store the at least one intent determined by the rule engine [110]. The memory unit [118] may also be configured to store the at least one response determined by the natural language processor [112]. The memory unit [118] may also be configured to maintain a database comprising one or more predefined actions.
The present invention also encompasses that in another instance the system [100] may reside inside the user device. Accordingly, the user device may then comprise of an input module [102], a tokenisation module [104], a parts-of-speech module [106], a chunker module [108], a rule engine [110], a natural
language processor [112], a preprocessing module [114], an output module [116], a memory unit [118], an input interface and an output interface.
Fig. 2 illustrates an exemplary method flow diagram depicting a method [200] for natural language processing, in accordance with exemplary embodiments of the present disclosure.
The method [200] begins at step [202]. At step [204], the method begins with receiving a user dialog via an input interface of a user device at an input module [102]. The input module [102] may be previously connected to the user device, and accordingly, when the input interface of the user device receives an input from a user, say a user dialog, the input interface of the user device transmits the user dialog to the input module [102]. For instance, the user provides, or speaks, the phrase “What is the time now?” in the user device, the same is received at the input interface of the user device, which then transmits the same to the input module [102].
In another instance, the method [200] further encompasses that a user may provide a first input to invoke the system [100] for natural language processing. For instance, the user may provide a speech input for invoking the system [100], like the user may speak “Good Morning” into the user device in order to invoke the system [100] for natural language processing.
The method [200] further encompasses that the input module [102] transmits the received user dialog to the tokenisation model [104]. Next, at step [206], the tokenisation module [104] identifies one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. In operation, the tokenisation module [104] divides the user dialog, say a long sequence of words, into tokens by identifying the white space occurring in the sequence of the
sentence. The whitespaces may include, but are not limited to, pauses, punctuations, abbreviations, dates, times, numbers and typographical errors. For instance, if a user speaks “What is the weather like in Ayodhya?”, takes a pause and continues to speak “How about Delhi?”, the tokenisation module [104] may divide the user dialog into two groups, one being “What is the weather like in Ayodhya?”, and the other being “How about Delhi?”.
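By way of a non-limiting illustration only, the division of a user dialog into token groups may be sketched as follows; it is assumed, for illustration, that pauses are transcribed as sentence-ending punctuation, and the function name tokenise is hypothetical:

```python
import re

def tokenise(dialog: str):
    """Split a user dialog into token groups at sentence boundaries;
    pauses are assumed to be transcribed as '?', '.' or '!'."""
    groups = re.split(r"(?<=[?.!])\s+", dialog.strip())
    return [g for g in groups if g]

# "What is the weather like in Ayodhya? How about Delhi?" is divided
# into two groups, one per question.
```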
The method [200] further encompasses that the tokenisation module [104] then transmits the identified one or more tokens for the user dialog to the parts-of-speech module [106]. Next, at step [208], the parts-of-speech module [106] recognises a part of speech for each of the one or more tokens. The parts-of-speech module [106] identifies each of the one or more tokens as one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection, or a combination thereof.
Simultaneously, the tokenisation module [104] also transmits the identified one or more tokens for the user dialog to the chunker module [108]. So, at step [210], the chunker module [108] maps the one or more tokens to one or more sub-sequences of the user dialog. The sub-sequence of a user dialog may be defined as a beginning phase, an intermediate phase or an end phase of the user dialog. Identifying a sub-sequence, i.e., the placing of the one or more tokens in the user dialog, also aids the natural language processor to identify an importance factor of the one or more tokens and to form a context of the user dialog. The method [200] further encompasses that the chunker module [108] maps the one or more tokens to one or more sub-sequences of the user dialog based on the semantics of the user dialog.
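By way of a non-limiting illustration only, the mapping of tokens to a beginning, intermediate or end phase may be sketched as follows; the positional rule used here is an illustrative stand-in for the semantics-based mapping described above, and the name chunk_phases is hypothetical:

```python
def chunk_phases(tokens):
    """Map each token to a beginning, intermediate or end phase of the
    user dialog, based on its position in the sequence (an illustrative
    stand-in for a semantics-based mapping)."""
    n = len(tokens)
    phases = {}
    for i, tok in enumerate(tokens):
        if i < n / 3:
            phases[tok] = "beginning"
        elif i < 2 * n / 3:
            phases[tok] = "intermediate"
        else:
            phases[tok] = "end"
    return phases

# A six-token dialog is mapped two tokens per phase.
phases = chunk_phases(["add", "a", "reminder", "for", "Sita's", "birthday"])
```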
The parts-of-speech module [106] and the chunker module [108] then transmit their output to the rule engine [110]. Accordingly, the parts-of-speech module
[106] transmits the recognised part of speech for each of the one or more tokens, to the rule engine [110]. The chunker module [108] transmits the information regarding the one or more sub-sequences of the user dialog for each of the one or more tokens to the rule engine [110].
At step [212], the rule engine [110] co-references the one or more tokens in the one or more sub-sequences. By co-referencing the one or more tokens, the rule engine [110] identifies the dependency of each of the one or more tokens on the other tokens. The co-referencing of the one or more tokens in the one or more sub-sequences is based on the one or more tokens recognised as one of a pronoun and a noun. In an instance of the present invention, the rule engine may check whether the one or more tokens refer to I, me, he, she, herself, you, it, that, they, each, few, many, who, whoever, whose, someone, everybody, etc.; if so, the one or more tokens are recognised as a pronoun.
For example, if a user speaks “It’s Sita’s birthday tomorrow add a reminder to get six chocolates and a cake for her at noon and send SMS to Ram to get the balloons”, the rule engine identifies the pronouns and the proper nouns in the user dialog; here, the pronouns are “her” and “I”, and the proper nouns are “Sita” and “Ram”. The rule engine then identifies the tokens related to the pronouns and proper nouns. The rule engine [110] relates “her” and “Sita” with the tokens “birthday tomorrow”. The rule engine [110] relates “I” with “reminder to get six chocolates and a cake for her at noon”. The rule engine [110] relates “Ram” with “to get the balloons”.
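By way of a non-limiting illustration only, the pronoun check described above may be sketched as follows; the PRONOUNS set is an illustrative, non-exhaustive list drawn from the examples above:

```python
# Illustrative, non-exhaustive pronoun list from the description above.
PRONOUNS = {"i", "me", "he", "she", "herself", "you", "it", "that",
            "they", "each", "few", "many", "who", "whoever", "whose",
            "someone", "everybody", "her"}

def is_pronoun(token: str) -> bool:
    """Return True if the token is recognised as a pronoun."""
    return token.lower() in PRONOUNS

# "her" is recognised as a pronoun; "Sita" is not (it is a proper noun).
```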
At step [214], the rule engine [110] identifies the one or more tokens of the user dialog as at least one action, at least one target and at least one predicate. The rule engine [110] identifies at least one action, at least one target and at least one predicate for each of the one or more tokens of the user dialog based on the
part of speech for each of the one or more tokens received from the parts-of-speech module [106] and the co-reference. The rule engine [110] uses the inputs from the parts-of-speech module [106] and the co-reference to identify the relevance of each of the one or more tokens in the sequence of words of the user dialog.
In another instance, the method [200] encompasses that each of the one or more tokens is identified, by the rule engine [110], as at least one action based on a comparison with a database comprising one or more predefined actions. For instance, each of the one or more tokens may be compared with the Action-Target-Predicate (ATP) hash set for extracting action items. For example, the database may contain customised verbs like set, recharge, message, buy, tell, etc.
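By way of a non-limiting illustration only, the comparison against a database of predefined actions may be sketched as follows; the ATP_ACTIONS verb set is assumed from the examples above, and the function name extract_actions is hypothetical:

```python
# Illustrative database of predefined actions (the "ATP hash set");
# the verb list is assumed from the examples in the description.
ATP_ACTIONS = {"set", "recharge", "message", "buy", "tell",
               "remind", "get", "send"}

def extract_actions(tokens):
    """Return every token that matches a predefined action."""
    return [t for t in tokens if t.lower() in ATP_ACTIONS]

# For "remind me to get a cake", the extracted actions are
# "remind" and "get".
```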
The method [200] further encompasses that while identifying the at least one action, the at least one target and the at least one predicate in the user dialog, the rule engine [110] takes into consideration punctuations, conjunctions, any special characters, sentence enders, etc. For instance, if the rule engine [110] identifies conjunctions, any special characters, sentence enders (like “then”, “also”, etc.) being present in the user dialog, it terminates the process of identifying the at least one action, the at least one target and the at least one predicate in the user dialog; otherwise, it continues to identify the at least one action, the at least one target and the at least one predicate in the user dialog.
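By way of a non-limiting illustration only, the termination rule described above may be sketched as follows; the SENTENCE_ENDERS list is illustrative, and the function name scan_until_ender is hypothetical:

```python
# Illustrative sentence enders; the actual list in the disclosure
# is open-ended ("then", "also", etc.).
SENTENCE_ENDERS = {"then", "also", ";", "!"}

def scan_until_ender(tokens):
    """Collect tokens until a sentence ender is found, at which point
    the identification process terminates."""
    collected = []
    for tok in tokens:
        if tok.lower() in SENTENCE_ENDERS:
            break  # terminate identification at the sentence ender
        collected.append(tok)
    return collected

# For "remind me then call Ram", scanning stops at "then".
```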
The method [200] further encompasses that after the rule engine [110] has identified the at least one action, the rule engine [110] identifies one or more tokens as the at least one predicate present in the user dialog. Accordingly, the rule engine [110] then scans the one or more sub-sequences received from the parts-of-speech module [106] to identify any proper noun present in the user
dialog, and subsequently, maps the at least one predicate and the at least one action to the proper noun. For instance, if a user speaks “Hello remind me to get a cake for Sita, it’s her birthday”, then the rule engine identifies that the at least one target is the user himself, “Sita” is a proper noun, “remind” is the at least one action, and “get a cake for Sita’s birthday” is the predicate. The method [200] encompasses that the rule engine [110] may use linguistic rules well known in the grammar of a language to achieve the objectives of the present invention.
Next, at step [216], the rule engine [110] processes the at least one action, the at least one target and the at least one predicate to determine at least one intent. The method [200] further encompasses that the rule engine [110] may determine multiple intents with respect to the user query. For example, if a user speaks “Remind me to get a cake for Sita, it’s her birthday”, then the rule engine identifies that the at least one target is “me”, being the user himself, “Sita” is a proper noun, “remind” and “get” are the actions, and “a cake for Sita’s birthday” is the predicate. The rule engine [110] determines that the intent is to remind the user to get a cake. The rule engine [110] then transmits the at least one intent along with the at least one action, the at least one target and the at least one predicate to the natural language processor [112].
Thereafter, at step [218], the natural language processor [112] determines a response to the user dialog based on the at least one intent. The natural language processor [112] may also use the at least one action, the at least one target and the at least one predicate along with the at least one intent to determine a response to the user dialog. Thus, the natural language processor [112] aims to provide a response based on the intent of the user, which response can be understood by the user.
The method [200] further encompasses that the system [100] for natural language processing further comprises an output module [116]. Accordingly, the natural language processor [112] transmits the response determined by it to the output module [116]. The output module [116] is configured to provide the response to the user dialog via an output interface of the user device. The output module [116] may also be configured to process the response into a format suitable for the output interface of the user device.
For instance, in operation, if a user speaks “It’s Sita’s birthday tomorrow add a reminder to get six chocolates and a cake for her at noon and send SMS to Ram to get the balloons”, the input is received at the input module [102]. The tokenisation module [104] divides the long sequence of words into tokens, like small groups of words. The parts-of-speech module [106] identifies each of the tokens as one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection and a combination thereof. “Sita” and “Ram” are identified as proper nouns, “reminder” and “SMS” are identified as verbs, etc.
The chunker module [108] maps each of the one or more tokens to one or more sub-sequences of the user dialog, to identify the dependence of the one or more tokens on each other. For example, it identifies that the reminder is for Sita’s birthday. The rule engine [110] co-references the one or more tokens, and then identifies the at least one action, the at least one target and the at least one predicate. For example, it identifies that a first action is a reminder and a first predicate is to get chocolates for Sita at noon. It also identifies that a second action is SMS, a second target is Ram, and a second predicate is to get balloons.
Accordingly, the rule engine [110] identifies the multiple intents of the user, namely, to set a reminder for himself to get a cake for Sita’s birthday and to SMS Ram to get balloons. The rule engine [110] transmits the multiple intents to the natural language processor [112], which then provides a response to the user dialog; for example, the natural language processor determines “SMS will be sent to Ram” and “Reminder to get cake set”.
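By way of a non-limiting illustration only, the determination of responses from multiple intents may be sketched as follows; the response templates and the function name respond are assumptions for illustration and do not form part of the present disclosure:

```python
def respond(intents):
    """Produce one user-understandable response per determined intent,
    using illustrative response templates."""
    templates = {
        "reminder": "Reminder to get cake set",
        "sms": "SMS will be sent to {target}",
    }
    return [templates[i["action"]].format(**i) for i in intents]

# Two intents from the example dialog yield two responses.
intents = [
    {"action": "reminder", "target": "me"},
    {"action": "sms", "target": "Ram"},
]
responses = respond(intents)
```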
In another instance, the present invention encompasses that the input module [102] may transmit the received user dialog to the preprocessing module [114]. Accordingly, the method [200] encompasses that the preprocessing module [114] processes the user dialog and converts the received user dialog into a semantic expression. In operation, the preprocessing module [114] may be implemented by a speech recognition unit and a speech-to-text converter present in the user device.
The method [200] further encompasses storing the user dialog received at the input module [102] in the memory unit [118], and maintaining a database comprising one or more predefined actions at the memory unit [118].
The present invention also encompasses that in another instance the system [100] may reside inside the user device. Accordingly, the method [200] may be performed at the user device by the components residing at the user device, namely, an input module [102], a tokenisation module [104], a parts-of-speech module [106], a chunker module [108], a rule engine [110], a natural language processor [112], a preprocessing module [114], an output module [116], a memory unit [118], an input interface and an output interface. The method [200] ends at step [220].
Fig. 3 illustrates an exemplary signal flow diagram [300] depicting an exemplary implementation of the method for natural language processing, in accordance with exemplary embodiments of the present disclosure.
The method begins with receiving a user dialog via an input interface of a user device at an input module [302]. For instance, the user provides, or speaks, the phrase “What is the time now?” in the user device, the same is received at the input interface of the user device, which then transmits the same to the input module [302].
The input module [302] transmits the received user dialog to the tokenisation module [304], the parts-of-speech module [306] and the chunker module [308]. The tokenisation module [304] identifies one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. For instance, if a user speaks “What is the weather like in Ayodhya?”, takes a pause and continues to speak “How about Delhi?”, the tokenisation module [304] may divide the user dialog into two groups, one being “What is the weather like in Ayodhya?”, and the other being “How about Delhi?”.
The tokenisation module [304] transmits the identified one or more tokens to the parts-of-speech module [306], the chunker module [308], and the rule engine [310]. The parts-of-speech module [306] recognises a part of speech for each of the one or more tokens. The parts-of-speech module [306] identifies each of the one or more tokens as one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection, or a combination thereof. The chunker module [308] maps the one or more tokens to one or more sub-sequences of the user dialog, being one of a beginning phase, an intermediate phase or an end phase of the user dialog.
The parts-of-speech module [306] and the chunker module [308] then transmit their respective outputs to the rule engine [310]. Accordingly, the parts-of-speech module [306] transmits the recognised part of speech for each of the one or more tokens, to the rule engine [310]. The chunker module [308] transmits the information regarding the one or more sub-sequences of the user dialog for each of the one or more tokens to the rule engine [310].
The rule engine [310] identifies each of the one or more tokens as one of at least one predefined action [314] based on a comparison with an ATP database [316] comprising one or more predefined actions. For instance, each of the one or more tokens may be compared with the Action-Target-Predicate (ATP) hash set for extracting action items. For example, the database may contain customised verbs like set, recharge, message, buy, tell, etc. The rule engine [310] also identifies [318] punctuations, conjunctions, any special characters, sentence enders, etc., present in the user dialog.
The co-ref rule engine [322] co-references the one or more tokens in the one or more sub-sequences. By co-referencing the one or more tokens, the co-ref rule engine [322] identifies the dependency of each of the one or more tokens on the other tokens. The co-referencing of the one or more tokens in the one or more sub-sequences is based on the one or more tokens recognised as one of a pronoun and a noun. In an instance of the present invention, the rule engine may check whether the one or more tokens refer to I, me, he, she, herself, you, it, that, they, each, few, many, who, whoever, whose, someone, everybody, etc.; if so, the one or more tokens are recognised as a pronoun.
Also, the co-ref rule engine [322] identifies the one or more tokens of the user dialog as at least one action, at least one target and at least one predicate. The co-ref rule engine [322] identifies at least one action, at least one target and at
least one predicate for each of the one or more tokens of the user dialog based on the part of speech for each of the one or more tokens received from the parts-of-speech module [306] and the co-reference. The co-ref rule engine [322] uses the inputs from the parts-of-speech module [306] and the co-reference to identify the relevance of each of the one or more tokens in the sequence of words of the user dialog.
In another instance, the method [200] encompasses that each of the one or more tokens is identified, by the co-ref rule engine [322], as at least one action based on a comparison with a database comprising one or more predefined actions. For instance, each of the one or more tokens may be compared with the Action-Target-Predicate (ATP) hash set for extracting action items. For example, the database may contain customised verbs like set, recharge, message, buy, tell, etc.
Next, the result of the co-referencing is transmitted to the natural language processor [312] along with the identified at least one action, at least one target and at least one predicate. The natural language processor [312] processes the at least one action, the at least one target and the at least one predicate to determine at least one intent, and further determines a response to the user dialog based on the at least one intent. Thus, the natural language processor [312] provides a response based on the intent of the user, which response can be understood by the user.
It shall be appreciated by any person skilled in the art, from the preceding description of the present invention, that the present invention may be implemented in any type of communication technology where a system [100] may be conversing with a user. While the implementation of the solution of the present invention has been discussed with reference to only a few usages, the invention may also be used in many other applications that may be known to a person skilled in the art, all of which are objectives of the present invention.
Therefore, as is evident from the above method, the present invention overcomes the shortcomings of the menu-based assistant systems and also improves the existing natural-language-based assistant systems by increasing the speed and accuracy by which the user can effectively communicate with a dialog system. The present invention ensures availability of effective user experience to cater to complex expressions of a user which are not easily identified by conventional dialog systems. The present invention breaks down the complex query of a user into smaller queries which may be processed faster, and such feature is highly advantageous in languages like Hindi, etc.
The interfaces, modules, memory, database, processor and components depicted in the figures and described herein may be present in the form of hardware, software or a combination thereof. The connections shown between these components/modules/interfaces in the system [100] are exemplary, and any components/modules/interfaces in the system [100] may interact with each other through various logical links and/or physical links. Further, the components/modules/interfaces may be connected in other possible ways.
Though a limited number of servers, gateways, user equipment, wireless networks, interfaces, modules, memory units, databases, processors and components have been shown in the figures, it will be appreciated by those skilled in the art that the overall system of the present invention encompasses any number and varied types of the entities/elements such as servers, gateways, user equipment, wireless networks, interfaces, modules, memory units, databases, processors and any other component that may be required by a person skilled in the art to work the present invention.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present invention. These and other changes in the embodiments of the present invention will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim
1. A method [200] for natural language processing, the method comprising:
- receiving, at an input module [102], a user dialog via an input interface of a user device;
- identifying, by a tokenisation module [104], one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog;
- recognising, by a parts-of-speech module [106], a part of speech for each of the one or more tokens;
- mapping, by a chunker module [108], each of the one or more tokens to one or more sub-sequences of the user dialog;
- co-referencing, by a rule engine [110], the one or more tokens in the one or more sub-sequences;
- identifying, by the rule engine [110], each of the one or more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference;
- processing, by the rule engine [110], the at least one action, the at least one target and the at least one predicate to determine at least one intent; and
- determining, by a natural language processor [112], a response to the user dialog based on the at least one intent.
2. The method [200] as claimed in claim 1, further comprising converting, by a
preprocessing module [114], the user dialog received at the input module
[102] into a semantic expression.
3. The method [200] as claimed in claim 1, further comprising providing, by an output module [116], the response to the user dialog via an output interface of the user device.
4. The method [200] as claimed in claim 1, wherein the part of speech is at least one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection and a combination thereof.
5. The method [200] as claimed in claim 4, wherein the co-referencing, by the rule engine [110], the one or more tokens in the one or more sub-sequences is based on the one or more tokens recognised as one of a pronoun and a noun.
6. The method [200] as claimed in claim 1, wherein the sub-sequence is at least one of a beginning phase, an intermediate phase and an end phase.
7. The method [200] as claimed in claim 1, wherein each of the one or more tokens is identified, by the rule engine [110], as one of at least one action based on a comparison with a database comprising one or more predefined actions.
8. The method [200] as claimed in claim 1, wherein the user dialog is a natural language speech input.
9. A system [100] for natural language processing, said system [100] comprising:
- an input module [102] configured to receive a user dialog via an input interface of a user device;
- a tokenisation module [104] connected to the input module [102], said tokenisation module [104] configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog;
- a parts-of-speech module [106] connected to the input module [102] and the tokenisation module [104], said parts-of-speech module [106] configured to recognise a part of speech for each of the one or more tokens;
- a chunker module [108] connected to the input module [102], the tokenisation module [104] and the parts-of-speech module [106], said chunker module [108] configured to map the one or more tokens to one or more sub-sequences of the user dialog;
- a rule engine [110] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106] and the chunker module [108], said rule engine [110] configured to:
- co-reference the one or more tokens in the one or more sub-sequences;
- identify the one or more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference, and
- process the at least one action, the at least one target and the at least one predicate to determine at least one intent;
- a natural language processor [112] connected to the input module [102],
the tokenisation module [104], the parts-of-speech module [106], the
chunker module [108] and the rule engine [110], said natural language
processor [112] configured to determine a response to the user dialog
based on the at least one intent.
10. The system [100] as claimed in claim 9, further comprising a preprocessing module [114] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108],
the rule engine [110] and the natural language processor [112], said preprocessing module [114] configured to convert the user dialog received at the input module [102] into a semantic expression.
11. The system [100] as claimed in claim 9, further comprising an output module [116] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108], the rule engine [110], the natural language processor [112] and the preprocessing module [114], said output module [116] configured to provide the response to the user dialog via an output interface of the user device.
12. The system [100] as claimed in claim 9, further comprising a memory unit [118] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108], the rule engine [110], the natural language processor [112], the preprocessing module [114] and the output module [116], said memory unit [118] configured to:
- store the user dialog received at the input module [102], and
- maintain a database comprising one or more predefined actions.
13. A user device comprising:
- an input module [102] configured to receive a user dialog via an input interface;
- a tokenisation module [104] connected to the input module [102], said tokenisation module [104] configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog;
- a parts-of-speech module [106] connected to the input module [102] and the tokenisation module [104], said parts-of-speech module [106] configured to recognise a part of speech for each of the one or more tokens;
- a chunker module [108] connected to the input module [102], the tokenisation module [104] and the parts-of-speech module [106], said chunker module [108] configured to map the one or more tokens to one or more sub-sequences of the user dialog;
- a rule engine [110] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106] and the chunker module [108], said rule engine [110] configured to:
- co-reference the one or more tokens in the one or more sub-sequences;
- identify the one or more tokens as one of at least one action, at least one target and at least one predicate based on the part of speech for each of the one or more tokens and the co-reference, and
- process the at least one action, the at least one target and the at least one predicate to determine at least one intent;
- a natural language processor [112] connected to the input module [102],
the tokenisation module [104], the parts-of-speech module [106], the
chunker module [108] and the rule engine [110], said natural language
processor [112] configured to determine a response to the user dialog
based on the at least one intent.
14. The user device as claimed in claim 13, further comprising a preprocessing module [114] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108], the rule engine [110] and the natural language processor [112], said preprocessing module [114] configured to convert the user dialog received at the input module [102] into a semantic expression.
15. The user device as claimed in claim 13, further comprising an output module [116] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108], the rule engine [110], the natural language processor [112] and the preprocessing module [114], said output module [116] configured to provide the response to the user dialog via an output interface of the user device.
16. The user device as claimed in claim 13, further comprising a memory unit [118] connected to the input module [102], the tokenisation module [104], the parts-of-speech module [106], the chunker module [108], the rule engine [110], the natural language processor [112], the preprocessing module [114] and the output module [116], said memory unit [118] configured to:
- store the user dialog received at the input module [102], and
- maintain a database comprising one or more predefined actions.
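The claimed pipeline — whitespace-based tokenisation, part-of-speech recognition, chunking into sub-sequences, and rule-based identification of at least one action, target and predicate to determine an intent — can be sketched as follows. This is a minimal illustration only: the toy POS lexicon, the chunking heuristic and the `action:target` intent format are assumptions for the sketch, not the patented implementation, which would use trained taggers and a full rule engine with co-referencing.

```python
# Illustrative sketch of the claimed NLP pipeline (not the patented implementation).

def tokenise(dialog):
    """Tokenisation module: identify tokens based on white spaces in the dialog."""
    return dialog.split()

# Toy POS lexicon; a real parts-of-speech module would use a trained tagger.
POS_LEXICON = {
    "book": "VERB", "cancel": "VERB", "show": "VERB",
    "a": "DET", "the": "DET", "my": "DET",
    "ticket": "NOUN", "flight": "NOUN", "mumbai": "NOUN",
    "to": "ADP", "for": "ADP",
}

def tag(tokens):
    """Parts-of-speech module: recognise a part of speech for each token."""
    return [(t, POS_LEXICON.get(t.lower(), "NOUN")) for t in tokens]

def chunk(tagged):
    """Chunker module: map tokens to sub-sequences (verb alone, phrase groups)."""
    chunks, current = [], []
    for tok, pos in tagged:
        if pos == "VERB":                 # a verb forms its own sub-sequence
            if current:
                chunks.append(current)
            chunks.append([(tok, pos)])
            current = []
        elif pos == "ADP":                # a preposition starts a new sub-sequence
            if current:
                chunks.append(current)
            current = [(tok, pos)]
        else:
            current.append((tok, pos))
    if current:
        chunks.append(current)
    return chunks

def determine_intent(dialog):
    """Rule engine + NLP: identify action, target and predicate, derive an intent."""
    action = target = predicate = None
    for ch in chunk(tag(tokenise(dialog))):
        poses = [p for _, p in ch]
        words = " ".join(t for t, _ in ch).lower()
        if poses == ["VERB"] and action is None:
            action = words                # the verb chunk becomes the action
        elif "ADP" in poses:
            predicate = words             # a prepositional chunk becomes the predicate
        elif target is None:
            target = next(t.lower() for t, p in ch if p == "NOUN")
    return {"action": action, "target": target,
            "predicate": predicate, "intent": f"{action}:{target}"}
```

For example, `determine_intent("Book a ticket to Mumbai")` yields the action `book`, the target `ticket`, the predicate `to mumbai` and the intent `book:ticket`, from which a natural language processor could then select a response.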
| # | Name | Date |
|---|---|---|
| 1 | 201921011067-STATEMENT OF UNDERTAKING (FORM 3) [22-03-2019(online)].pdf | 2019-03-22 |
| 2 | 201921011067-PROVISIONAL SPECIFICATION [22-03-2019(online)].pdf | 2019-03-22 |
| 3 | 201921011067-FORM 1 [22-03-2019(online)].pdf | 2019-03-22 |
| 4 | 201921011067-FIGURE OF ABSTRACT [22-03-2019(online)].pdf | 2019-03-22 |
| 5 | 201921011067-Proof of Right (MANDATORY) [22-05-2019(online)].pdf | 2019-05-22 |
| 6 | 201921011067-FORM-26 [22-05-2019(online)].pdf | 2019-05-22 |
| 7 | 201921011067-ORIGINAL UR 6(1A) FORM 1 & FORM 26-270519.pdf | 2019-08-02 |
| 8 | 201921011067-ENDORSEMENT BY INVENTORS [21-03-2020(online)].pdf | 2020-03-21 |
| 9 | 201921011067-DRAWING [21-03-2020(online)].pdf | 2020-03-21 |
| 10 | 201921011067-COMPLETE SPECIFICATION [21-03-2020(online)].pdf | 2020-03-21 |
| 11 | 201921011067-FORM 18 [13-04-2020(online)].pdf | 2020-04-13 |
| 12 | 201921011067-Request Letter-Correspondence [04-06-2020(online)].pdf | 2020-06-04 |
| 13 | 201921011067-Power of Attorney [04-06-2020(online)].pdf | 2020-06-04 |
| 14 | 201921011067-Form 1 (Submitted on date of filing) [04-06-2020(online)].pdf | 2020-06-04 |
| 15 | 201921011067-CORRESPONDENCE(IPO)-(CERTIFIED COPY OF WIPO DAS)-(5-6-2020).pdf | 2020-06-30 |
| 16 | Abstract1.jpg | 2020-07-31 |
| 17 | 201921011067-FER.pdf | 2021-10-19 |
| 18 | 201921011067-FER_SER_REPLY [16-11-2021(online)].pdf | 2021-11-16 |
| 19 | 201921011067-PA [26-02-2022(online)].pdf | 2022-02-26 |
| 20 | 201921011067-ASSIGNMENT DOCUMENTS [26-02-2022(online)].pdf | 2022-02-26 |
| 21 | 201921011067-8(i)-Substitution-Change Of Applicant - Form 6 [26-02-2022(online)].pdf | 2022-02-26 |
| 22 | 201921011067-Response to office action [05-04-2022(online)].pdf | 2022-04-05 |
| 23 | 201921011067-ORIGINAL UR 6(1A) FORM 26-121022.pdf | 2022-10-26 |
| 24 | 201921011067-US(14)-HearingNotice-(HearingDate-20-10-2023).pdf | 2023-09-15 |
| 25 | 201921011067-Correspondence to notify the Controller [16-10-2023(online)].pdf | 2023-10-16 |
| 26 | 201921011067-FORM-26 [20-10-2023(online)].pdf | 2023-10-20 |
| 27 | 201921011067-Written submissions and relevant documents [01-11-2023(online)].pdf | 2023-11-01 |
| 28 | 201921011067-ORIGINAL UR 6(1A) FORM 26)-041223.pdf | 2023-12-09 |
| 29 | 201921011067-PatentCertificate26-12-2023.pdf | 2023-12-26 |
| 30 | 201921011067-IntimationOfGrant26-12-2023.pdf | 2023-12-26 |
| 1 | SearchStrategyE_13-05-2021.pdf | |