ABSTRACT
SYSTEM AND METHOD OF DETECTING EXPRESSIONS
The present invention relates to a system and a method for filtering junk queries. An input module [102] receives a user dialog via an input interface of a user device. A tokenization module [104] identifies one or more tokens in the user dialog. A parts-of-speech module [106] recognizes a part of speech for the one or more tokens. A garbage detection unit [108] detects at least one of one or more stop words and one or more unique words from the one or more tokens. The garbage detection unit [108] determines a context of the user dialog based on the recognized part of speech and the said detection, filters the user dialog as a junk query based on the determined context and provides a response to the user dialog.
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
AND
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD OF DETECTING EXPRESSIONS”
We, Reliance Jio Infocomm Limited, an Indian National, of 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present invention generally relates to dialog systems and more specifically relates to a method and a system for filtering out junk queries in natural language processing and accordingly providing a response to a user’s dialog.
BACKGROUND OF THE INVENTION
The following description of the related art is intended to provide background information pertaining to the field of the invention. This section may include certain aspects of the art that may be related to various features of the present invention. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present invention, and not as an admission of prior art.
Dialog systems (e.g., chatbots) allow a user to communicate (e.g., have a conversation) with an assistant using a combined menu-based and natural-language-based user interface. Conventional (e.g., existing) chatbots can be broadly classified into two categories: menu-based dialog systems and natural-language-based dialog systems. In a menu-based dialog system, users are restricted to choosing their input to the assistant via a menu-based user interface that may offer only a limited number of query options, which may not cover the user's query properly. Further, a menu-based dialog system may utilize a hierarchically organized sequence of menus to guide the user through all of the different actions that the assistant can perform in a step-by-step manner, which may be time-consuming.
On the other hand, in a natural-language-based dialog system, the user is free to dynamically (e.g., on-the-fly) choose between generating their input to the assistant either using a menu-based user interface that is navigated by the user, or using natural language that is either typed or spoken by the user. Accordingly, when communicating via a natural-language-based dialog system, a user may need to input a large number of characters or speak a large number of words to accomplish the desired action. Further, due to the complexity of regional languages (for example, Indian languages including, but not limited to, Hindi), where numerous language scripts and words merge to frame sentences, the diversity of context is difficult to fathom and, more importantly, difficult to judge; hence, filtering out junk queries made by the user is an inherent challenge within the existing chatbot interaction ecosystem.
Conventional chatbot interaction and dialogue systems handling multi-lingual queries in Indian languages, such as but not limited to the Hindi language, are widely used, especially in the Indian subcontinent, in the form of live interaction with a chatbot/video bot for making product- and services-related queries. For example, an existing solution proposes iteratively searching large amounts of data in response to a user request by traversing a conversational scaffold and producing a document set in response to the request, producing category descriptors for the document set, transmitting the category descriptors to a chatterbot response composer for producing a chatterbot response and responding to the user. Another existing solution enables a conversation between a chatbot and a user, such that the chatbot may switch between topics, keep state information, disambiguate utterances, and learn about the user as the conversation progresses using each of a plurality of dialogues. Users/developers may expose several dialogues, each specializing in a conversational subject, as part of the chatbot. Yet another existing solution describes proactive speech detection on behalf of a user and alerting the user when a specific word, name, etc. is detected. Speech detection is actively executed through a computing device, where the speech detection analyzes spoken utterances in association with a dynamic grammar file stored locally on the computing device.
However, the existing solutions fail to filter the words detected by the speech recognition system when spoken in regional languages (such as but not limited to Hindi) and are not capable of segregating the linguistics involved in complex expressions. Accordingly, the existing solutions also fail to identify the correct intent amongst multiple intents presented in such a regional-language user query, and often do not even possess the intelligence to understand whether a query is worthy of being answered by the chatbot. The conventional chatbot/video bot query response system is also unable to identify the context that the bot is expected to capture while ensuring that expressions that do not pertain to the context are excused from being answered.
Thus, there is a need in the art for a system and method of filtering junk queries in natural language processing for detecting, identifying and filtering context-specific and garbage Indian language expressions for interaction through chat, video, text and other mediums with chatbots, ensuring the correct response to users and other sub-systems that are both human and non-human. Therefore, in view of the above shortcomings in the existing approaches, there is a need in the art for an efficient system and method for filtering junk queries in natural language processing.
SUMMARY
This section is provided to introduce certain objects and aspects of the present invention in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In order to overcome at least a few problems associated with the known solutions as provided in the previous section, an object of the present invention is to provide a system and a method for filtering junk queries in natural language processing. Another object of the present invention is to provide a dialog system for enhancing the user experience of the conventional system by catering to complex expressions which are not identified by the voice assistants, for example, Indian language expressions. Yet another object of the present invention is to provide a response to the user query based on identifying and filtering the relevant context and garbage from a complex query (for example, Hindi language queries) by breaking it down into smaller, more straightforward queries by the voice assistant/chatbot. Yet another object of the present invention is to provide a seamless conversation between the user and the voice assistant/chatbot.
In order to achieve at least some of the above-mentioned objectives, the present invention provides a method and a system for filtering junk queries in natural language processing. A first aspect of the present invention relates to a method for filtering junk queries in natural language processing. The method comprises receiving, at an input module, a user dialog via an input interface of a user device. Subsequently, a tokenization module identifies one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. Next, a parts-of-speech (POS) module recognizes a part of speech for each of the one or more tokens. Further, a garbage detection unit detects at least one of one or more stop words and one or more unique words from the one or more tokens. Next, the garbage detection unit determines a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words. Subsequently, the garbage detection unit filters the user dialog as a junk query based on the determined context. Lastly, the garbage detection unit provides a default response for the user dialog at an output interface of the user device.
Another aspect of the present invention relates to a system for filtering junk queries in natural language processing. The system comprises an input module, a tokenization module, a parts-of-speech module and a garbage detection unit. The input module is configured to receive a user dialog via an input interface of a user device. The tokenization module is connected to the input module, said tokenization module configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. The parts-of-speech module is connected to the input module and the tokenization module, said parts-of-speech module configured to recognize a part of speech for each of the one or more tokens. The garbage detection unit is connected to the input module, the tokenization module and the parts-of-speech module, said garbage detection unit configured to detect at least one of
one or more stop words and one or more unique words from the one or more tokens. The garbage detection unit is also configured to determine a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words. The garbage detection unit is also configured to filter the user dialog as a junk query based on the determined context and provide a default response for the user dialog at an output interface of the user device.
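The sequence of operations summarised above may be sketched, purely for illustration, as the following pipeline. The module structure, stop-word list, unique-word list and response strings below are hypothetical placeholders, not the actual implementation of the components [102]-[108]:

```python
STOP_WORDS = {"the", "is", "a", "to"}       # hypothetical stop-word list
UNIQUE_WORDS = {"recharge", "balance"}      # hypothetical domain-specific words

def filter_junk_query(user_dialog: str) -> str:
    """Return a response for the dialog, or a default response
    if the dialog is filtered out as a junk query."""
    tokens = user_dialog.split()                           # tokenization [104]
    lowered = [t.lower().strip("?!.") for t in tokens]
    content = [t for t in lowered if t not in STOP_WORDS]  # stop-word detection
    unique_hits = [t for t in content if t in UNIQUE_WORDS]
    # Context determination [108]: a dialog carrying no domain-specific
    # (unique) words has no actionable context and is filtered as junk.
    if not unique_hits:
        return "Sorry, I did not understand that."         # default response
    return f"Handling query about: {', '.join(unique_hits)}"

print(filter_junk_query("What is the weather today?"))
# Sorry, I did not understand that.
print(filter_junk_query("Please recharge my balance"))
# Handling query about: recharge, balance
```

In this sketch, the junk decision reduces to the absence of unique words; the claimed system additionally weighs the recognized parts of speech when determining context.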
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this invention, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Also, the embodiments shown in the figures are not to be construed as limiting the invention, but the possible variants of the method and system according to the invention are illustrated herein to highlight the advantages of the invention. It will be appreciated by those skilled in the art that such drawings include electrical components or circuitry commonly used to implement such components.
Fig. 1 illustrates an exemplary block diagram of a system [100] for filtering junk query in natural language processing, in accordance with exemplary embodiments of the present invention.
Fig. 2 illustrates an exemplary method flow diagram depicting a method [200] for filtering junk query in natural language processing, in accordance with exemplary embodiments of the present invention.
Fig. 3 illustrates an exemplary signal block diagram of a user device [300] with natural language processing capability, in accordance with exemplary embodiments of the present invention.
Fig. 4 illustrates an exemplary training depiction of the parts-of-speech module, in accordance with exemplary embodiments of the present invention.
Fig. 5 illustrates an exemplary signal flow diagram depicting a method of filtering junk queries in natural language processing when a user dialog comprises at least one proper noun, in accordance with exemplary embodiments of the present invention.
Fig. 6 illustrates an exemplary signal flow diagram depicting a method of filtering junk queries in natural language processing when a user dialog does not comprise a proper noun, in accordance with exemplary embodiments of the present invention.
Fig. 7 illustrates an exemplary training data set of the logistic regression unit, in accordance with exemplary embodiments of the present invention.
Fig. 8 illustrates an exemplary grammar database of stop words and unique words, in accordance with exemplary embodiments of the present invention.
Fig. 9 illustrates an exemplary implementation of the method of the present invention when a proper noun matches the proper noun dataset, in accordance with exemplary embodiment of the present invention.
Fig. 10 illustrates an exemplary implementation of the method of the present invention when a proper noun does not match the proper noun dataset, in accordance with exemplary embodiment of the present invention.
Fig. 11 illustrates an exemplary implementation of the method of the present invention when a proper noun matches the proper noun dataset, in accordance with exemplary embodiment of the present invention.
Fig. 12 illustrates an exemplary implementation of the method of the present invention when a user dialog does not comprise a proper noun, in accordance with exemplary embodiment of the present invention.
Fig. 13 illustrates another exemplary implementation of the method of the present invention when a user dialog does not comprise a proper noun, in accordance with exemplary embodiment of the present invention.
The foregoing shall be more apparent from the following more detailed description of the invention.
DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, that embodiments of the present invention may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
As used herein, the term “infers” or “inference” refers generally to the process of reasoning about or inferring states of the system, environment, user, and/or intent from a set of observations as captured via events and/or data. Captured data and events can include user data, device data, environment data, data from sensors, sensor data, application data, implicit data, explicit data, etc. Inference can be employed to identify a specific context or action or can generate a probability distribution over states of interest based on a consideration of data and events, for example. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
Various classification schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, and data fusion engines) can be employed in connection with performing automatic and/or inferred action in connection with the disclosed subject matter.
As used herein, “user device”, “user equipment”, “mobile station,” “mobile subscriber station,” “access terminal,” “terminal,” “handset,” “computing device,” and similar terminology refer to any electrical, electronic, electromechanical and computing wireless device utilized by a subscriber or user of a wireless communication service to receive and/or convey data associated with voice, video, sound, and/or substantially any data-stream or signalling-stream. Further, the foregoing terms are utilized interchangeably in the subject specification and related drawings. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices and transmitting data to the other user devices. The user device may have a processor, a display, a memory unit, a battery and an input means such as a hard keypad and/or a soft keypad. The input interface may also comprise touch/acoustic/video components for touch/sound/video input and output. The output interface may comprise a microphone, a speaker, a camera and additionally audio/video I/O ports in an accessories interface, wherein the speaker normally serves to provide acoustic output in the form of human speech, ring signals, music, etc. The user device may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, NB-IoT, etc. For instance, the user devices may include, but are not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a pager, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art.
Furthermore, the terms “user,” “subscriber,” “customer,” “consumer,” “agent,”, “owner”, “client” and the like are employed interchangeably throughout the subject specification and related drawings, unless context warrants particular distinction(s)
among the terms. It should be appreciated that such terms can refer to human entities, or automated components supported through artificial intelligence, e.g., a capacity to make inferences based on complex mathematical formulations, that can provide simulated vision, sound recognition, decision making, etc. In addition, the terms “wireless network” and “network” are used interchangeably in the subject application, unless context warrants particular distinction(s) among the terms.
As used herein, a “processor” or “processing unit” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, a low-end microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present invention. More specifically, the processor or processing unit is a hardware processor.
As used herein, a “communication unit” or a “transceiver unit” may include at least one of a “transmitter unit” configured to transmit at least one data and/or signals to one or more destination and a “receiver unit” configured to receive at least one data and/or signals from one or more source. The “communication unit” or the “transceiver unit” may also be configured to process the at least one data and/or signal received or transmitted at the “communication unit” or the “transceiver unit”. Also, the “communication unit” or the “transceiver unit” may further include, any other similar units obvious to a person skilled in the art, required to implement the features of the present invention.
As used herein, “memory unit”, “storage unit” and/or “memory” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. ‘Computer storage media’ refers to
volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store the desired information and that may be accessed by a computing device.
As used herein, a “controller” or “control unit” includes at least one controller, wherein the controller refers to any logic circuitry for processing instructions. A controller may be a general-purpose controller, a special-purpose controller, a conventional controller, a digital signal controller, a plurality of microcontrollers, at least one microcontroller in association with a DSP core, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The controller may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present invention. More specifically, the controller or control unit is a hardware processor that comprises a memory and a processor. The memory is configured to store the modules and the processor is specifically configured to execute said modules to perform one or more processes which are described further below.
As used herein, “natural language processing" generally refers to determining a conceptual “meaning” (e.g., what meaning the speaker intended to convey) of the detected words by analyzing their grammatical relationship and relative context. Natural Language Processing (NLP) involves the lexical, syntactic (grammatical), and semantic domain analysis of the user input using both statistical observations of the various surface forms and a broader interpretation of the relationships and dependencies among words, phrases, and concepts. Such speech recognition systems are used to process the detected words using a natural language processing system. Natural language processing, when used in connection with speech
recognition, provides a powerful tool for operating a computer using spoken words rather than manual input such as a keyboard or mouse. NLP employs statistical as well as deep language processing techniques in order to take advantage of both the apparent features of the resulting text, such as domain vocabulary (i.e., keywords), and the chosen structure of language through which the user expresses their intention, together with the global context of each word, i.e., syntactic patterns and word collocation. Both types of linguistic analysis are based on application-specific data that has been semantically annotated.
As used herein, “assistants,” “chatbots” and the like terms refer to a computer-based, artificially intelligent conversational agent/entity that is designed to conduct a natural human conversation (e.g., chat) with one or more users. More particularly, a chatbot responds to input from users in a way that moves the conversation forward in a contextually meaningful way, thus generating the illusion of intelligent understanding. They are generally designed to convincingly simulate how a human would interact and behave as a conversational/chat partner. A general goal of an assistant is to provide value and ease of use to users by trying to understand what they want and then providing them with the information they need or performing the action(s) they are requesting. Beyond this general goal, some sophisticated assistants also attempt to pass the conventional Turing Test and thus make each user that is communicating with the chatbot think that they are talking to another person rather than interacting with a computer program. As used herein, the term “dialog system” refers to one or more of the following: chat information system, spoken dialogue system, conversational agent, chatter robot, chatterbot, chatbot, chat agent, digital personal assistant, automated online assistant, and so forth.
As used herein, “language granularity” refers to the fact that a limited assortment of words can only be arranged in so many ways, which severely limits the range and detail of things that it is possible to say. These points of possible meaning in a vast sea of things that cannot be said are different in each human language, and are radically different in computers, making correct translation impossible, except in
some rare instances. This phenomenon severely limits the prospects for human-level machine understanding.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention.
The present invention relates to a system and a method for filtering junk queries during natural language processing. More specifically, the present invention is directed towards detecting, identifying and filtering context-specific and garbage Indian language expressions, such as but not limited to Hindi language, for interaction through chat, video, text and other mediums between a user and a voice assistant/ chatbot/ intelligent Integrated Conversational Interface (ICI), to ensure the correct response to the users and other sub-systems that are both human and non-human.
Fig. 1 illustrates an exemplary block diagram of a system [100] for filtering junk queries in natural language processing, in accordance with exemplary embodiments of the present invention. The system [100] comprises an input module [102], a tokenization module [104], a parts-of-speech module [106], a garbage detection module [108], a natural language processor [112], a preprocessing module [114], an abusive-greeting check module [110], an output module [116], a memory unit [118], a logistic regression (LR) unit [120], a long short-term memory unit [122] and a hashing module [124]. The present invention encompasses that all the components of the system [100] for natural language processing are connected to each other, and work in conjunction to achieve the objectives of the present invention.
The input module [102] of the system [100] is configured to receive a user dialog via an input interface of a user device. The input module [102] may be previously connected to the user device, and accordingly, when the input interface of the user device receives an input from a user, say a user dialog, the input interface of the
user device transmits the user dialog to the input module [102]. The present invention encompasses that the user dialog is a natural language speech input. For instance, the user provides, or speaks, the phrase “What is the time now?” in the user device, the same is received at the input interface of the user device, which then transmits the same to the input module [102]. In another instance, the present invention also encompasses that user dialog is a Hindi language speech input. For instance, the user may provide a speech input for invoking the system [100], like the
user may speak
into the user device for natural language processing.
The input module [102] then transmits the received user dialog to the tokenization module [104]. As used herein, “tokenization” refers to the process of dividing the input into tokens, i.e., into groups of words. The tokenization also takes into consideration any punctuations, abbreviations, dates, times, numbers, typographical errors, etc., while breaking the input into smaller groups of words. Accordingly, the tokenization module [104] is configured to identify one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. Thus, the tokenization module [104] divides the user dialog, say a long sequence of words, into tokens by identifying the white spaces occurring in the sequence of the sentence. The white spaces may include, but are not limited to, pauses, punctuations, abbreviations, dates, times, numbers and typographical errors. For instance, if a user
speaks the
tokenization module [104] may divide the user dialog into one or more tokens, say,
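The whitespace-based tokenization described above may be sketched, purely for illustration and not as the claimed implementation, as follows (the function name `tokenize` is a hypothetical choice):

```python
# Illustrative sketch only: split a user dialog into tokens on white space,
# as the tokenization module [104] is described to do. The function name is
# an assumption, not part of the specification.
def tokenize(user_dialog: str) -> list[str]:
    # str.split() with no argument splits on any run of whitespace.
    return user_dialog.split()

tokens = tokenize("Jio mein naya kya hai")
# tokens == ["Jio", "mein", "naya", "kya", "hai"]
```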
The tokenization module [104] then transmits the identified one or more tokens for the user dialog to the parts-of-speech module [106]. According to grammar, there are eight parts of speech, comprising a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction and an interjection. The parts-of-speech module [106] is, thus, configured to recognize a part of speech for each of the one or more tokens. The parts-of-speech module [106] identifies that each of the one or more tokens is one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection, or a combination thereof. In an instance of the present invention, the parts-of-speech module [106] recognizes a part of speech for the one or more tokens based on a Hindi language dictionary. In yet another instance of the present invention, the parts-of-speech module [106] is further configured to identify at least one proper noun token from the one or more tokens, and to compare each of the at least one proper noun token with a proper noun dataset.
For example, the parts-of-speech module [106] is capable of tagging the words which otherwise tend to be tagged as “UNK”, that is, the “Unknown” token assigned when a particular word cannot be given a proper parts-of-speech identification. The parts-of-speech module [106] also tags Indian names in Indian languages, including but not limited to Hindi names, such as (“Shahid Bhagat Singh”), etc., which helps to detect the proper nouns within a user dialog sentence. In an instance where a user dialog is (“Bhagat Singh ki movie dekhna hai”), a conventional parts-of-speech module may fail to give a proper parts-of-speech tag according to the grammar of the sentence, and if a particular word’s tag cannot be allocated, the conventional parts-of-speech module will tag it as “UNK”, which is the Unknown tag. However, the parts-of-speech module [106] of the present invention is capable of tagging the words which are denoted as “Unknown” tokens by a conventional parts-of-speech module when a particular word is unable to get a proper parts-of-speech tag (as illustrated in Table 1 below).
In yet another instance, (“Balram Singh Yadav”), which is one of the most commonly used Indian names, is tagged as “NNP”, which is a Proper Noun (as illustrated in Table 2 below). Hence, it is evident from the example that the parts-of-speech module [106] can quickly identify the parts-of-speech tags, including Hindi proper nouns, in a sentence/query which is made by the user to the chat/video bot.
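The dictionary-backed tagging with an “UNK” fallback and a proper-noun dataset, as described above, may be sketched as follows; the tag set, the sample dictionary entries and the proper-noun entries are illustrative assumptions, not the actual datasets of the invention:

```python
# Hypothetical sketch of dictionary-backed POS tagging with a proper-noun
# dataset lookup and an "UNK" (Unknown) fallback. All entries below are
# illustrative placeholders, not the invention's real datasets.
PROPER_NOUN_DATASET = {"bhagat", "singh", "balram", "yadav"}
POS_DICTIONARY = {"ki": "PREP", "movie": "NN", "dekhna": "VB", "hai": "VB"}

def tag(token: str) -> str:
    lower = token.lower()
    if lower in PROPER_NOUN_DATASET:
        return "NNP"  # proper noun, e.g. an Indian name
    # Fall back to the "UNK" (Unknown) tag when no entry is found.
    return POS_DICTIONARY.get(lower, "UNK")

tags = [tag(t) for t in "Bhagat Singh ki movie dekhna hai".split()]
# tags == ["NNP", "NNP", "PREP", "NN", "VB", "VB"]
```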
The parts-of-speech module [106] transmits the recognized parts-of-speech information along with the one or more tokens to the garbage detection unit [108]. The garbage detection unit [108] detects at least one of one or more stop words and one or more unique words from the one or more tokens. As used herein, “stop words” are frequently occurring words in a language that might not be efficient in helping with natural language processing. As used herein, “unique words” are the words which form the main vocabulary list and which are stored for building further use-cases. In an instance of the present invention, the detection by the garbage detection unit [108] of at least one of the one or more stop words and the one or more unique words from the one or more tokens further comprises comparing the one or more tokens with a grammar database. As used herein, “grammar” refers to a system and a structure of a language or of languages in general, usually taken as consisting of syntax, morphology, phonology and semantics. As used herein, “grammar database” refers to a collection of said system and structure, for example, a dictionary, etc. Referring to Fig. 8, an exemplary grammar database of stop words and unique words is illustrated, in accordance with exemplary embodiments of the present invention. In an event no unique word is detected in a user dialog, the garbage detection unit [108] determines a vector similarity score for each of the one or more tokens based on the comparison of the one or more tokens with the grammar database. The garbage detection unit [108] identifies the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score.
The garbage detection unit [108] determines a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words. The garbage detection unit [108] filters the user dialog as a junk query based on the determined context and provides a response for the user dialog at an output interface of the user device.
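A minimal sketch of the stop-word removal and similarity-thresholded unique-word check performed by the garbage detection unit [108] is given below; the stop-word list, the toy vectors and the threshold value are illustrative assumptions (0.61 is the exemplary threshold mentioned later in this description):

```python
# Illustrative sketch of the garbage-detection flow: drop stop words, then
# accept a remaining token as a "unique word" only when its best cosine
# similarity against the grammar database exceeds a threshold; otherwise
# the dialog is filtered as a junk query. All data here is a placeholder.
import math

STOP_WORDS = {"hai", "ki", "mein"}
GRAMMAR_VECTORS = {"naya": [1.0, 0.0], "kya": [0.0, 1.0]}
THRESHOLD = 0.61  # exemplary threshold from the description

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def is_junk(tokens, token_vectors):
    content = [t for t in tokens if t.lower() not in STOP_WORDS]
    for token in content:
        score = max((cosine(token_vectors[token], v)
                     for v in GRAMMAR_VECTORS.values()), default=0.0)
        if score <= THRESHOLD:  # no sufficiently similar unique word
            return True         # filter the dialog as a junk query
    return False
```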
The present invention further encompasses that the garbage detection unit [108] is further configured to filter the user dialog as a junk query in an event the at least one proper noun token does not match the proper noun dataset upon comparison by the parts-of-speech module [106]. The proper noun dataset is a part of the grammar database. Accordingly, the garbage detection unit [108] provides a response for the user dialog at the output interface of the user device. In this regard, the invention further encompasses the garbage detection unit [108] to train the parts-of-speech module [106] for maintaining the proper noun dataset, where training of the parts-of-speech module [106] is based on an Indian corpus, where each word with its respective part-of-speech tag is provided.
Referring to Fig. 4, an exemplary training depiction of the parts-of-speech module [106] by the garbage detection unit [108] is illustrated, in accordance with exemplary embodiments of the present invention. Hence, if the POS use_flag is TRUE, then the POS response will be used by the client. Otherwise, the query would be sent to the Logistic Regression model directly for further processing.
In an instance of the present invention, the LR unit [120] performs the determination of the vector similarity score for each of the one or more tokens for the garbage detection unit [108]. The LR unit [120] is also configured to determine an intent of a user dialog and to provide a contextual response to the user dialog based on the determined intent. In operation, the LR unit [120] works on a statistical method for analyzing the dataset, which contains one or more independent variables that help determine an outcome. The outcome is measured by a dichotomous variable, where the output could have two possible outcomes. The predictions are made based on the probability of occurrence of an event by fitting the data to a logit function. In this regard, the LR unit [120] maintains a training data set based on the grammar database, to store the past user dialogs and the intents of the past user dialogs. Referring to Fig. 7, an exemplary training data set of the logistic regression unit is illustrated, in accordance with exemplary embodiments of the present invention.
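The dichotomous prediction performed by the LR unit [120], i.e., fitting data to a logit function and reading off one of two possible outcomes, may be sketched as follows; the weights shown are illustrative values, not trained coefficients of the invention:

```python
# Sketch of a logit-based dichotomous prediction: the probability of the
# positive outcome is obtained from the sigmoid (inverse logit) of a linear
# combination of features, then thresholded at 0.5. Weights are assumed.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(features, weights, bias=0.0):
    z = bias + sum(w * x for w, x in zip(weights, features))
    p = sigmoid(z)               # probability of the positive outcome
    return 1 if p >= 0.5 else 0  # dichotomous output

predict([1.0, 2.0], [0.8, -0.1])  # -> 1 (z = 0.6, p ≈ 0.646)
```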
The abusive-greeting check module [110] is configured to determine that the user dialog comprises at least one of an abuse and a greeting. This module is used for checking the presence of any abusive words in the user query. If any abusive word is detected in the user dialog, then an ‘abusive flag’ is set to ‘true’, and an appropriate subsequent response is provided to the user dialog via the output interface of the user device. This module is also used for checking if there are any greeting words like “hi”, “good morning”, (“Namaste”), etc. in the user dialog. If any such greeting sentences are encountered during the interaction with the intelligent Integrated Conversational Interface (ICI), then a reply based on the corresponding intents (examples: bot_hi, bot_good_morning) is provided in response to the user dialog via the output interface.
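The abusive-greeting check may be sketched as below; the greeting-to-intent mapping follows the examples in the text (bot_hi, bot_good_morning), while `bot_namaste` and the placeholder abusive-word list are assumptions:

```python
# Illustrative sketch of the abusive-greeting check module [110]: set an
# 'abusive flag' when an abusive word is present, and map greeting words to
# intent names. The abusive-word list and bot_namaste are placeholders.
GREETING_INTENTS = {"hi": "bot_hi", "good morning": "bot_good_morning",
                    "namaste": "bot_namaste"}
ABUSIVE_WORDS = {"<abusive-word>"}  # hypothetical placeholder entries

def check(user_dialog: str):
    text = user_dialog.lower()
    abusive_flag = any(w in text.split() for w in ABUSIVE_WORDS)
    intent = next((i for g, i in GREETING_INTENTS.items() if g in text), None)
    return abusive_flag, intent

check("good morning")  # -> (False, "bot_good_morning")
```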
In another instance, the present invention further encompasses that the natural language processor [112] transmits the response determined by the garbage detection unit [108] to the output module [116]. The output module [116] is configured to provide the response to the user dialog via an output interface of the user device. The output module [116] may also be configured to process the response into a format suitable for the output interface of the user device. In yet another instance, the system [100] of the present invention encompasses a preprocessing module [114]. The input module [102] may then transmit the received user dialog to the preprocessing module [114]. The preprocessing module [114] may process the user dialog and convert the received user dialog into a semantic expression.
In yet another instance of the present invention, the LSTM unit [122] is configured to convert a user dialog to “word indexes” and to store the word indexes (say, a sequence of integers) that follow the current context. The LSTM unit [122] aids in intent classification if the user dialog is a complex sentence, and is implemented for predicting intents for queries which consist of longer sentences. For example, the training in the LSTM unit [122] is performed wherein a word and integer list is trained in such a way that “happy” and “happiness”, being similar words, will have nearby integers in the list. This process consequently helps the LSTM unit [122] to understand the presented context in a better manner. In operation, the LSTM unit [122], firstly, embeds the words of the user dialog in the form of integers. It, then, sequentially learns the integer order or ‘sequence’ of words in the expression. Accordingly, it predicts the intent of the user dialog, wherein the intent of the user, i.e., the context the user is trying to convey, is mapped to a unique integer identified in the index dictionary of intents and their corresponding indexes.
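The word-index conversion performed by the LSTM unit [122] before intent prediction may be sketched as follows; the integer assignments (with “happy” and “happiness” receiving nearby integers, as described) are illustrative assumptions:

```python
# Sketch of converting a user dialog to "word indexes": each word maps to
# an integer, so a dialog becomes a sequence of integers. Per the text,
# similar words like "happy"/"happiness" get nearby integers. Values here
# are illustrative placeholders.
WORD_INDEX = {"happy": 11, "happiness": 12, "movie": 25, "dekhna": 26}
UNKNOWN_INDEX = 0  # assumed index for out-of-vocabulary words

def to_word_indexes(user_dialog: str) -> list[int]:
    return [WORD_INDEX.get(w.lower(), UNKNOWN_INDEX)
            for w in user_dialog.split()]

to_word_indexes("happy movie")  # -> [11, 25]
```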
In yet another instance of the present invention, the hashing module [124] is configured to provide bigrams of characters in words which have similar frequency counts, and to normalize such characters for each expression in the training data set of the LR unit [120]. The hashing module [124] accurately identifies spelling errors based on splitting of the words into bigrams and subsequently into tuples of these bigrams. The hashing module [124] is capable of processing both long and short multi-lingual sentences. For example, for a user dialog (“Sab accha hai”), the hashing module [124] prepares the bigrams illustrated below in Table 3. Next, the hashing module [124] prepares a sequence of integers, for example, [(‘1’,’2’), (‘2’,’3’) ...]. The corresponding trained LR unit [120] can learn coefficients based on the input features and the bigrams of the hashing module [124] and, thus, consequently provide the predicted intent.
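The character-bigram preparation of the hashing module [124] may be sketched as below; the helper names are hypothetical, and the integer-tuple form follows the [(‘1’,’2’), (‘2’,’3’) ...] example above:

```python
# Illustrative sketch: split each word into character bigrams (useful for
# spotting spelling errors), and pair consecutive word positions as tuples
# of integer strings, matching the [('1','2'), ('2','3') ...] example.
def char_bigrams(word: str) -> list[str]:
    return [word[i:i + 2] for i in range(len(word) - 1)]

def index_pairs(tokens: list[str]) -> list[tuple[str, str]]:
    return [(str(i + 1), str(i + 2)) for i in range(len(tokens) - 1)]

char_bigrams("accha")                 # -> ["ac", "cc", "ch", "ha"]
index_pairs(["Sab", "accha", "hai"])  # -> [("1", "2"), ("2", "3")]
```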
The memory unit [118] may be configured to store the user dialog received at the input module [102]. The memory unit [118] may also be configured to store the one or more tokens identified by the tokenization module [104]. The memory unit [118] may also be configured to store the part of speech for each of the one or more tokens recognized by the parts-of-speech module [106]. The memory unit [118] may also be configured to store a mapping of each of the one or more tokens to one or more sub-sequences of the user dialog conducted by the chunker module [108].
Referring to Fig. 2, an exemplary method flow diagram depicting a method [200] for filtering junk query in natural language processing is illustrated, in accordance with exemplary embodiments of the present invention. The method [200] begins at step [202]. At step [204], the method begins with receiving a user dialog via an input interface of a user device at an input module [102]. The input module [102] may be previously connected to the user device, and accordingly, when the input interface of the user device receives an input from a user, say a user dialog, the input interface of the user device transmits the user dialog to the input module [102]. For instance, the user provides, or speaks, the phrase “What is the time now?” in the user device; the same is received at the input interface of the user device, which then transmits the same to the input module [102]. In another instance, the present invention also encompasses that the user dialog is a Hindi language speech input. For instance, the user may provide a speech input for invoking the system [100], like the
user may speak (“Jio mein naya kya hai”; “What’s new in Jio”)
into the user device for natural language processing.
The method [200] further encompasses that the input module [102] transmits the received user dialog to the tokenization module [104]. Next, at step [206], the tokenization module [104] identifies one or more tokens in the user dialog based on one or more white spaces occurring in the user dialog. In operation, the tokenization module [104] divides the user dialog, say a long sequence of words, into tokens by identifying the white spaces occurring in the sequence of the sentence. The white spaces may include, but are not limited to, pauses, punctuations, abbreviations, dates, times, numbers and typographical errors. For instance, if a user speaks
(“Jio mein naya kya hai”; “What’s new in Jio”), the tokenization module [104] may divide the user dialog into one or more tokens, say, “Jio”, “mein”, “naya”, “kya” and “hai”.
The method [200] further encompasses that the tokenization module [104] then transmits the identified one or more tokens for the user dialog to the parts-of-speech module [106]. Next, at step [208], the parts-of-speech module [106] recognizes a part of speech for each of the one or more tokens. The parts-of-speech module [106] identifies that each of the one or more tokens is one of a noun, a pronoun, a verb, an adjective, an adverb, a preposition, a conjunction, an interjection, or a combination thereof. In an instance of the present invention, the parts-of-speech module [106] recognizes a part of speech for the one or more tokens based on a Hindi language dictionary. In yet another instance of the present invention, the parts-of-speech module [106] is further configured to identify at least one proper noun token from the one or more tokens, and to compare each of the at least one proper noun token with a proper noun dataset. In an event the at least one proper noun token does not match the proper noun dataset, the garbage detection unit [108] filters the user dialog as a junk query and provides the default response for the user dialog at the output interface.
The parts-of-speech module [106] transmits the recognized parts-of-speech information along with the one or more tokens to the garbage detection unit [108]. At step [210], the garbage detection unit [108] detects at least one of one or more stop words and one or more unique words from the one or more tokens. In an instance of the present invention, the detection by the garbage detection unit [108] of at least one of the one or more stop words and the one or more unique words from the one or more tokens further comprises comparing the one or more tokens with a grammar database. In an event no unique word is detected in a user dialog, the garbage detection unit [108] determines a vector similarity score for each of the one or more tokens based on the comparison, and identifies the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score.
At step [212], the garbage detection unit [108] determines a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words. The present invention encompasses that, in an event no unique word is detected in a user dialog, the garbage detection unit [108] determines a vector similarity score for each of the one or more tokens based on the comparison of the one or more tokens with the grammar database. Accordingly, the garbage detection unit [108] identifies the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score. At step [214], the garbage detection unit [108] filters the user dialog as a junk query based on the determined context and, lastly, provides a response for the user dialog at an output interface of the user device at step [216]. The method ends at step [218].
In another instance, the method of the present invention further encompasses determining, by an abusive-greeting check module, that the user dialog comprises at least one of an abuse and a greeting. Accordingly, the garbage detection unit [108] provides a corresponding response to the user dialog based on the determination at the output interface of the user device. In yet another instance, the method of the present invention further encompasses the parts-of-speech module [106] identifying at least one proper noun token from the one or more tokens. The parts-of-speech module [106] compares each of the at least one proper noun token with a proper noun dataset. In an event the at least one proper noun token does not match the proper noun dataset, the garbage detection unit [108] filters the user dialog as a junk query and provides the default response for the user dialog at the output interface.
Referring to Fig. 3, an exemplary signal block diagram of a user device [300] with natural language processing capability is illustrated, in accordance with exemplary embodiments of the present invention. The user device comprises a processor [302], a memory [304], a radio transceiver [306], a display [308] and an input/output (I/O) interface [310]. In an embodiment, the user equipment may be connected via any cellular or wireless network including, but not limited to, a 5G network, a Long-Term Evolution (LTE) network and a Global System for Mobile communication (GSM) network. The user equipment may receive user inputs through the I/O interface [310] (say, an Intelligent Integrated Conversational Interface (ICI)). In one embodiment, the display [308] may be utilized to receive user inputs from a user using the user device, wherein the display [308] may be a touch screen display. The I/O interface [310] may include a variety of software and hardware interfaces, for instance, an interface for peripheral device(s) such as a keyboard, a mouse, a scanner, an external memory, a printer and the like. The processor [302] is configured to implement the system [100] of the present invention in the user device. Accordingly, the user device also comprises an input module [102], a tokenization module [104], a parts-of-speech module [106], a garbage detection unit [108], a natural language processor [112], a preprocessing module [114], an abusive-greeting check module [110], an output module [116], a memory unit [118], a logistic regression (LR) unit [120], a long short-term memory (LSTM) unit [122] and a hashing module [124], said components working according to the description of the present invention.
Referring to Fig. 5, an exemplary signal flow diagram is illustrated, depicting the method of filtering junk queries in natural language processing when a user dialog comprises at least one proper noun, in accordance with exemplary embodiments of the present invention. At step [502], the input module [102] receives a user dialog via an input interface of a user device. The method comprises the parts-of-speech module [106] determining whether the user dialog comprises at least one proper noun and comparing the at least one proper noun token with a proper noun dataset. The method proceeds further in an event the at least one proper noun matches the proper noun dataset. Referring to Fig. 9, an exemplary implementation of the method of the present invention when a proper noun matches the proper noun dataset is illustrated, in accordance with an exemplary embodiment of the present invention. However, in an event the at least one proper noun does not match the proper noun dataset, the garbage detection unit [108] filters the user dialog as a junk query (also referred to as a garbage query) and provides a corresponding response. Referring to Fig. 10, an exemplary implementation of the method of the present invention when a proper noun does not match the proper noun dataset is illustrated, in accordance with an exemplary embodiment of the present invention.
At step [504], the tokenization module [104] identifies one or more tokens in the user dialog. At step [506], the garbage detection unit [108] removes the one or more tokens identified as stop words based on a comparison with the grammar database. At step [508], the garbage detection unit [108] iteratively determines whether the one or more tokens are unique words based on a comparison with the grammar database at steps [510A-B]. In an event all of the one or more tokens are unique words, at step [512], all of the one or more tokens are sent to the LR unit [120] for determination of a response to the user dialog.
In an event some of the one or more tokens are not unique words, at step [514], the garbage detection unit [108] determines a vector similarity score for each of these one or more tokens based on the comparison with the grammar database. At step [516], the garbage detection unit [108] identifies the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score (for example, 0.61 here). In an event, at step [518], the score is less than the threshold score, the garbage detection unit [108] filters the user dialog as a junk query (also referred to as a garbage query) and provides a corresponding response at step [520]. At step [522], in an event the vector similarity score of the one or more tokens exceeds the threshold score (for example, 0.61 here), the garbage detection unit [108] stores the score and the corresponding vector of the one or more tokens in the grammar database. At step [526], the garbage detection unit [108] replaces the one or more tokens with the corresponding vector of the grammar database and sends the modified one or more tokens to the LR unit [120].
Referring to Fig. 11, an exemplary implementation of the method of the present invention when a proper noun matches the proper noun dataset is illustrated, in accordance with an exemplary embodiment of the present invention.
Referring to Fig. 6, an exemplary signal flow diagram is illustrated, depicting the method of filtering junk queries in natural language processing when a user dialog does not comprise a proper noun, in accordance with exemplary embodiments of the present invention. Contrary to the method of Fig. 5, if the user dialog does not contain any proper noun, then the user dialog is directly passed to the garbage detection unit [108]. In operation, at step [602], the input module [102] receives a user dialog via an input interface of a user device. At step [604], the tokenization module [104] identifies one or more tokens in the user dialog. At step [606], the garbage detection unit [108] removes the one or more tokens identified as stop words based on a comparison with the grammar database. At step [608], the garbage detection unit [108] iteratively determines whether the one or more tokens are unique words based on a comparison with the grammar database at steps [610A-B]. In an event all of the one or more tokens are unique words, at step [612], all of the one or more tokens are sent to the LR unit [120] for determination of a response to the user dialog at step [616].
In an event some of the one or more tokens are not unique words, at step [612], the garbage detection unit [108] determines a vector similarity score for each of these one or more tokens based on the comparison with the grammar database. At step [618], the garbage detection unit [108] identifies the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score (for example, 0.61 here). In an event, at step [626], the score is less than the threshold score, the garbage detection unit [108] filters the user dialog as a junk query (also referred to as a garbage query) and provides a corresponding response at step [628]. Referring to Fig. 13, an exemplary implementation of the method of the present invention when a user dialog does not comprise a proper noun is illustrated, in accordance with an exemplary embodiment of the present invention.
At step [620], in an event the vector similarity score of the one or more tokens exceeds the threshold score (for example, 0.61 here), the garbage detection unit [108] stores the score and the corresponding vector of the one or more tokens in the grammar database. At step [624], the garbage detection unit [108] replaces the one or more tokens with the corresponding vector of the grammar database and sends the modified one or more tokens to the LR unit [120] at step [616]. Referring to Fig. 12, an exemplary implementation of the method of the present invention when a user dialog does not comprise a proper noun is illustrated, in accordance with an exemplary embodiment of the present invention.
Thus, the present invention provides a novel solution for detecting, identifying and filtering context-specific and garbage Indian language expressions for interaction through chat, video, text and other mediums to chatbots, ensuring the correct response to the users and other sub-systems, both human and non-human. The present invention, hence, solves the problem of filtering out junk queries made by the user while interacting with chatbots, and ensures that chat or video bots give replies only to those Indian language queries which are based on the context. Hence, the present solution addresses the above problem to a significant degree, such that the intelligent Integrated Conversational Interface (ICI) identifies the expected context while ensuring that expressions that do not pertain to the context are excluded from receiving a contextual response. In addition to the above improvements in the existing solution, a critical value addition is proposed to the existing conventional chatbot/video bot query response system, wherein an increased level of multi-lingual query filtration occurs at several levels. Such filters are implemented in the system to provide an added level of intelligence to the existing solution in understanding whether the query made by the user is worthy of being answered by the intelligent Integrated Conversational Interface (ICI) or not. Hence, efficient handling of such expressions in Indian languages, such as but not limited to the Hindi language, is performed in the present invention, wherein the relevant queries are filtered based on the context by the bot while relegating the others to a standard non-contextual response.
It shall be appreciated by any person skilled in the art, from the preceding description of the present invention, that the present invention may be implemented in any type of communication technology where a system [100] may be conversing with a user. While the implementation of the solution of the present invention has been discussed with reference to only a few usages, the invention may also be used in many other applications that may be known to a person skilled in the art, all of which are objectives of the present invention.
Therefore, as is evident from the above method, the present invention overcomes the shortcomings of the menu-based assistant systems and also improves the existing natural-language-based assistant systems by increasing the speed and accuracy by which the user can effectively communicate with a dialog system. The present invention ensures availability of effective user experience to cater to complex expressions of a user which are not easily identified by conventional dialog systems. The present invention breaks down the complex query of a user into smaller queries which may be processed faster, and such feature is highly advantageous in languages like Hindi, etc.
The interface, module, memory, database, processor and component depicted in the figures and described herein may be present in the form of hardware, software or a combination thereof. The connections shown between these components/modules/interfaces in the system [100] are exemplary, and any components/modules/interfaces in the system [100] may interact with each other through various logical links and/or physical links. Further, the components/modules/interfaces may be connected in other possible ways.
Though a limited number of servers, gateways, user equipment, wireless networks, interfaces, modules, memories, databases, processors and components have been shown in the figures, it will be appreciated by those skilled in the art that the overall system of the present invention encompasses any number and varied types of the entities/elements such as servers, gateways, user equipment, wireless networks, interfaces, modules, memories, databases, processors and any other component that may be required by a person skilled in the art to work the present invention.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present invention. These and other changes in the embodiments of the present invention will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim
1. A method for filtering junk query in natural language processing, the method
comprising:
- receiving, at an input module [102], a user dialog via an input interface of a user device;
- identifying, by a tokenization module [104], one or more tokens in the user dialog;
- recognizing, by a parts-of-speech module [106], a part of speech for each of the one or more tokens;
- detecting, by a garbage detection unit [108], at least one of one or more stop words and one or more unique words from the one or more tokens;
- determining, by the garbage detection unit [108], a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words;
- filtering, by the garbage detection unit [108], the user dialog as a junk query based on the determined context; and
- providing, by the garbage detection unit [108], a response for the user dialog at an output interface of the user device.
2. The method as claimed in claim 1, further comprising:
- determining, by an abusive-greeting check module [110], that the user dialog comprises at least one of an abuse and a greeting; and
- providing, by the abusive-greeting check module [110], a response to the user dialog based on the determination at the output interface of the user device.
3. The method as claimed in claim 1, further comprising:
- identifying, by the parts-of-speech module [106], at least one proper noun token from the one or more tokens;
- comparing, by the parts-of-speech module [106], each of the at least one proper noun token with a proper noun dataset;
- filtering, by the garbage detection unit [108], the user dialog as a junk query in an event the at least one proper noun token does not match the proper noun dataset; and
- providing, by the garbage detection unit [108], the response for the user dialog at the output interface.
4. The method as claimed in claim 1, wherein detecting, by the garbage detection unit [108], at least one of the one or more stop words and the one or more unique words from the one or more tokens further comprises comparing the one or more tokens with a grammar database.
5. The method as claimed in claim 4, in an event no unique word is detected in a user dialog, the method further comprising:
- determining, by the garbage detection unit [108], a vector similarity score for each of the one or more tokens based on the comparison; and
- identifying, by the garbage detection unit [108], the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score.
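The vector-similarity fallback of claims 4 and 5 can be sketched with cosine similarity; the embeddings and threshold value below are assumed for illustration and do not come from the specification:

```python
# Claims 4-5 sketch: when dictionary lookup finds no unique word, each
# token vector is scored against a grammar-database reference vector;
# tokens whose similarity exceeds a threshold become unique words.
# The toy embeddings and THRESHOLD value are hypothetical.
import math

THRESHOLD = 0.8

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def unique_words(token_vectors: dict[str, list[float]],
                 reference: list[float]) -> list[str]:
    """Tokens scoring above THRESHOLD against the reference vector."""
    return [tok for tok, vec in token_vectors.items()
            if cosine(vec, reference) > THRESHOLD]
```

Cosine similarity is one common choice of "vector similarity score"; the specification does not name the metric, so dot product or Euclidean-based scores would fit the claim equally well.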
6. The method as claimed in claim 1, wherein:
- the user dialog is a Hindi language speech input; and
- the parts-of-speech module [106] recognizes a part of speech for the one or more tokens based on a Hindi language dictionary.
7. A system for filtering junk query in natural language processing, said system comprising:
- an input module [102] configured to receive a user dialog via an input interface of a user device;
- a tokenization module [104] connected to the input module [102], said tokenization module [104] configured to identify one or more tokens in the user dialog;
- a parts-of-speech module [106] connected to the input module [102] and the tokenization module [104], said parts-of-speech module [106] configured to recognize a part of speech for each of the one or more tokens; and
- a garbage detection unit [108] connected to the input module [102], the tokenization module [104] and the parts-of-speech module [106], said garbage detection unit [108] configured to:
- detect at least one of one or more stop words and one or more unique words from the one or more tokens,
- determine a context of the user dialog based on the recognized part of speech for each of the one or more tokens and the detection of at least one of the one or more stop words and the one or more unique words,
- filter the user dialog as a junk query based on the determined context, and
- provide a response for the user dialog at an output interface of the user device.
8. The system as claimed in claim 7, further comprising an abusive-greeting check module [110] connected to the input module [102], the tokenization module [104], the parts-of-speech module [106] and the garbage detection unit [108], said abusive-greeting check module [110] configured to:
- determine that the user dialog comprises at least one of an abuse and a greeting, and
- provide a response to the user dialog based on the determination at the output interface of the user device.
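The abusive-greeting check of claims 2 and 8 can be sketched as a lexicon lookup that short-circuits before garbage detection; the word lists and replies below are illustrative placeholders:

```python
# Claims 2/8 sketch: module [110] determines whether the dialog is an
# abuse or a greeting and, if so, answers directly instead of passing
# the dialog to the garbage detection unit [108]. Lists are placeholders.
from typing import Optional

GREETINGS = {"hi", "hello", "hey", "namaste"}
ABUSIVE = {"stupid", "idiot"}  # hypothetical sample entries

def abuse_greeting_response(dialog: str) -> Optional[str]:
    """Canned response for a greeting or abusive dialog; None means the
    dialog continues through the normal filtering pipeline."""
    tokens = set(dialog.lower().split())
    if tokens & ABUSIVE:
        return "Please use polite language. How can I help you?"
    if tokens & GREETINGS:
        return "Hello! How can I help you today?"
    return None
```

Checking abuse before greetings ensures a dialog containing both is answered with the moderation reply, one reasonable ordering the claims leave open.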
9. The system as claimed in claim 7, wherein:
- the parts-of-speech module [106] is further configured to:
- identify at least one proper noun token from the one or more tokens, and
- compare each of the at least one proper noun token with a proper noun dataset; and
- the garbage detection unit [108] is further configured to:
- filter the user dialog as a junk query in an event the at least one proper noun token does not match the proper noun dataset, and
- provide the response for the user dialog at the output interface of the user device.
10. The system as claimed in claim 7, wherein the garbage detection unit [108] is further configured to:
- compare the one or more tokens with a grammar database,
- determine a vector similarity score for each of the one or more tokens based on the comparison in an event no unique word is detected in a user dialog, and
- identify the one or more tokens as the one or more unique words in an event the vector similarity score of the one or more tokens exceeds a threshold score.
| # | Name | Date |
|---|---|---|
| 1 | 201921026186-FORM 1 [01-07-2019(online)].pdf | 2019-07-01 |
| 2 | 201921026186-PROVISIONAL SPECIFICATION [01-07-2019(online)].pdf | 2019-07-01 |
| 3 | 201921026186-STATEMENT OF UNDERTAKING (FORM 3) [01-07-2019(online)].pdf | 2019-07-01 |
| 4 | 201921026186-FIGURE OF ABSTRACT [01-07-2019(online)].pdf | 2019-07-01 |
| 5 | 201921026186-Proof of Right (MANDATORY) [31-07-2019(online)].pdf | 2019-07-31 |
| 6 | 201921026186-FORM-26 [31-07-2019(online)].pdf | 2019-07-31 |
| 7 | 201921026186-ORIGINAL UR 6(1A) FORM 1 & FORM 26-060819.pdf | 2019-11-26 |
| 8 | 201921026186-FORM 18 [01-07-2020(online)].pdf | 2020-07-01 |
| 9 | 201921026186-COMPLETE SPECIFICATION [01-07-2020(online)].pdf | 2020-07-01 |
| 10 | 201921026186-DRAWING [01-07-2020(online)].pdf | 2020-07-01 |
| 11 | 201921026186-ENDORSEMENT BY INVENTORS [01-07-2020(online)].pdf | 2020-07-01 |
| 12 | Abstract1.jpg | 2021-10-19 |
| 13 | 201921026186-FER.pdf | 2021-11-30 |
| 14 | 201921026186-8(i)-Substitution-Change Of Applicant - Form 6 [26-02-2022(online)].pdf | 2022-02-26 |
| 15 | 201921026186-ASSIGNMENT DOCUMENTS [26-02-2022(online)].pdf | 2022-02-26 |
| 16 | 201921026186-PA [26-02-2022(online)].pdf | 2022-02-26 |
| 17 | 201921026186-Response to office action [05-04-2022(online)].pdf | 2022-04-05 |
| 18 | 201921026186-FER_SER_REPLY [28-05-2022(online)].pdf | 2022-05-28 |
| 19 | 201921026186-ORIGINAL UR 6(1A) FORM 26-121022.pdf | 2022-10-26 |
| 20 | 201921026186-US(14)-HearingNotice-(HearingDate-04-04-2024).pdf | 2023-12-15 |
| 21 | 201921026186-Correspondence to notify the Controller [27-03-2024(online)].pdf | 2024-03-27 |
| 22 | 201921026186-FORM-26 [28-03-2024(online)].pdf | 2024-03-28 |
| 23 | 201921026186-FORM-8 [17-09-2024(online)].pdf | 2024-09-17 |
| 24 | 201921026186-US(14)-HearingNotice-(HearingDate-15-01-2025).pdf | 2024-11-27 |
| 25 | 201921026186-Correspondence to notify the Controller [08-01-2025(online)].pdf | 2025-01-08 |
| 26 | 201921026186-Written submissions and relevant documents [28-01-2025(online)].pdf | 2025-01-28 |
| 27 | 201921026186-IntimationOfGrant12-02-2025.pdf | 2025-02-12 |
| 28 | 201921026186-PatentCertificate12-02-2025.pdf | 2025-02-12 |
| 29 | SearchHistory(30)E_29-11-2021.pdf | |
| 30 | SearchHistoryAE_30-01-2023.pdf | |