
Multi Lingual Translator Using Deep Learning Mounted On Edge Device

Abstract: The present invention relates to a multi-lingual translator system. The system presents a method to translate multiple languages into a common, user-desired language on a serverless edge device (203) by sensing visual information of the user's surroundings and sending it to a processing module (208). The processing module (208) detects/identifies multiple languages in the visual information and translates them. The detected/identified/translated languages are sent to the display output module, which displays the language selected by the user.


Patent Information

Application #: 202141013413
Filing Date: 26 March 2021
Publication Number: 39/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: info@krishnaandsaurastri.com
Parent Application:

Applicants

Bharat Electronics Limited
Outer Ring Road, Nagavara, Bangalore - 560045, Karnataka, India

Inventors

1. Amitesh Kumar Sharma
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India
2. Joydev Ghosh
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India
3. Sahil Tomar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India
4. Kalpana A
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India

Specification

TECHNICAL FIELD
[0001] The present invention relates generally to a translator system. More particularly, the invention relates to a handheld translator system and a method for language translation.
BACKGROUND
[0002] In India, every state has its own language. People face challenges in reading and understanding Indian languages other than their own. In general, people are not able to understand signboards, milestones, etc., which are written in other languages. When people visit historical places, the writings there are in a multilingual format that they are not able to understand. The visitor takes the help of native people to understand the information written in other languages, yet the natives themselves are sometimes unable to communicate in a language the visitor understands. Nowadays, most of the relevant information is available in multiple languages; a person who does not know these languages can use the invention to obtain an understanding of the information in their native language. Also, a person stranded alone somewhere may face many such difficulties. Various conventional solutions are available for helping a person in the above situations, but each has its limitations.
[0003] Many conventional solutions exist for helping a person in the situations explained above. For example, one conventional solution, proposed in US20010056342 titled "Voice enabled digital camera and language translator", discloses a digital camera that recognizes printed or written words and converts those words into recognizable speech in either a native or foreign tongue. The user points the camera at a printed/text object and the camera speaks (or optionally displays) the words. Using this device, a blind or visually disabled person can point at an object, press the shutter button to "take a picture" of the words before him/her, and the camera will speak those words in his/her native language. In a second and more advanced configuration, a person can point the camera at a worded object, press the shutter button to "take a picture" of the words, and the camera will speak those words in a foreign language. Alternatively, he/she may point at text in a foreign language and have those words translated and spoken in his/her native language. This camera includes resident software that: a) captures the digital image, b) uses OCR (Optical Character Recognition) software/algorithms to detect written words (text) within the image, c) converts the text from language A to language B, and either: c1) uses text-to-speech (TTS) software to synthesize speech and audibly "speak" the words, or c2) displays the words on a display screen in language B.
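As a hedged illustration of the capture-OCR-translate-speak pipeline this prior art describes, a minimal Python sketch follows. pytesseract is a commonly used OCR binding suitable for step b); the translate() and speak() helpers are hypothetical placeholders for the conversion and TTS stages, not the patent's actual software.

# Hedged sketch of the prior-art flow: capture -> OCR -> translate -> speak/display.
# pytesseract is a real OCR binding; translate() and speak() are hypothetical
# placeholders for the language-conversion and TTS stages.
from PIL import Image
import pytesseract

def translate(text: str, target_lang: str) -> str:
    # Placeholder: a real device would invoke a translation model here.
    return f"[{target_lang}] {text}"

def speak(text: str) -> None:
    # Placeholder for a TTS engine; here the words are simply displayed (step c2).
    print(text)

def camera_translate(image_path: str, target_lang: str = "en") -> None:
    words = pytesseract.image_to_string(Image.open(image_path))  # steps a) and b)
    speak(translate(words, target_lang))                         # steps c) and c1)/c2)

camera_translate("signboard.jpg")  # hypothetical image file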
[0004] Modern technologies such as machine learning, image processing, and speech processing are being used to help such persons. However, the tools currently available each address only one or another of these needs.
[0005] A further limitation of the conventional solutions is that most such tools are designed for the translation of foreign languages.
[0006] Thus, there is a need for an invention that solves the above-defined problems and provides a system and method for language translation.
SUMMARY OF THE INVENTION
[0007] This summary is provided to disclose a multi-lingual translator system using a deep learning model mounted on a serverless edge device. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[0008] For example, various embodiments herein may include one or more systems and methods for translating multiple languages into a common, user-desired language.
[0009] In an embodiment, the present invention describes a handheld translator system adapted for language translation. A plurality of visual sensors (including but not limited to an image capturing sensor) of the system are configured to sense one or more items of visual information (including but not limited to captured images) present in the surroundings of a user. The system further includes a processing module to identify multiple languages in the received visual information and select a two-mode operation to translate the identified multiple languages. The system further includes an output display module to receive the multiple translated languages from the processing module and output the translated information in a language that the user has selected on the display module. The system further includes a serverless edge device to switch the process into either of the two-mode operations as per the input selected by the user on the display module.
[0010] In another embodiment, the present invention describes a translation method using a handheld translator. The method includes the steps of sensing, by a plurality of sensors, one or more items of visual information. In the next step, identifying, by a processing module (208) of a serverless edge device, multiple languages in the received visual information. Further, in the next step, selecting, by the processing module (208), a two-mode operation to translate the identified multiple languages. Further, in the next step, transferring, by the processing module (208), the multiple translated languages to a display module (220). Further, in the next step, displaying, by the display output module (220), a user-desired language selected from the multiple translated languages.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0011] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules.
[0012] Fig. 1 illustrates a block diagram depicting a multi-lingual translator system, according to an embodiment of the present invention.
[0013] Fig. 2 illustrates a schematic diagram depicting multiple languages as input, according to an embodiment of the present invention.
[0014] Fig. 3 illustrates a flow diagram depicting a translator for translating, according to an exemplary implementation of the present invention.
[0015] Fig. 4 illustrates a flow chart of the translator system, according to an exemplary implementation of the present invention.
[0016] Fig. 5 illustrates a translation method using a handheld translator for translating multiple languages, according to an exemplary implementation of the present invention.
[0017] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in a computer-readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0018] The various embodiments of the present invention describe a handheld translator system and method for language translation.
[0019] According to a novel aspect of the present invention, a system and method to translate multiple languages (including but not limited to Indian regional languages) into a common, user-desired output language on a serverless edge device is disclosed.
[0020] In the following description, for purposes of explanation, specific details are outlined to provide an understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these details. One skilled in the art will recognize that embodiments of the present disclosure, some of which are described below, may be incorporated into a number of systems.
[0021] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present disclosure and are meant to avoid obscuring the present disclosure.
[0022] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0023] In an embodiment, a handheld translator system to translate multiple Indian languages into a common user-defined output on a serverless edge system is disclosed. The system comprises a plurality of visual sensors to sense one or more items of visual information. The processing module receives the visual information and applies a trained deep learning model to it so as to detect/identify multiple Indian languages. The multiple languages detected in the visual information are translated by the processing module and are then sent to the display output module.
[0024] In another embodiment, the display output module receives multiple translated languages from the processing module and outputs the translated information relating to the detected/identified languages to the user.
[0025] In another embodiment, a portable serverless edge device switches the process of translation into either of the two-mode operations. The two modes of operation include a word-to-word translation operation and a meaningful translation operation. The mode is switched as per the input of the user.
[0026] In another embodiment, a handheld multi-translator system adapted for language translation is disclosed. The system comprises a portable serverless edge device, wherein the portable serverless edge device further comprises a plurality of visual sensors configured to sense one or more items of visual information. The system further includes a processing module configured to identify multiple languages in the received visual information and select a two-mode operation to translate the identified multiple languages. The two-mode translation operations include a meaningful translation operation and a word-to-word translation operation. The meaningful translation operation includes comparing the detected multiple languages with pre-stored multiple languages, wherein the comparison is performed sentence by sentence. The word-to-word translation operation, in contrast, includes translating each word individually present in the detected multiple languages into words of the user-selected language, by using the multi-lingual language dictionary.
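As a minimal, hedged sketch of these two modes, the following Python contrasts per-word dictionary lookup with sentence-by-sentence comparison against pre-stored pairs. Every name, dictionary entry, and the fallback behaviour is an illustrative assumption; the specification does not prescribe an implementation.

# Minimal sketch of the two-mode operation. All names, dictionary entries,
# and the fallback behaviour are illustrative assumptions.
WORD_DICT = {"aap": "you", "kaise": "how", "hain": "are"}  # hypothetical entries
PRESTORED = {"aap kaise hain": "how are you"}              # hypothetical sentence pairs

def word_to_word(sentence: str) -> str:
    # Word-to-word mode: translate each detected word via the dictionary.
    return " ".join(WORD_DICT.get(w, w) for w in sentence.lower().split())

def meaningful(sentence: str) -> str:
    # Meaningful mode: sentence-by-sentence comparison with pre-stored pairs,
    # falling back to word-to-word when no stored sentence matches (an assumption).
    return PRESTORED.get(sentence.lower()) or word_to_word(sentence)

def run_mode(sentence: str, mode: str) -> str:
    # The user's input selects between the two modes.
    return meaningful(sentence) if mode == "meaningful" else word_to_word(sentence)

print(run_mode("aap kaise hain", mode="meaningful"))    # -> "how are you"
print(run_mode("aap kaise hain", mode="word-to-word"))  # -> "you how are"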
[0027] The system further includes a display output module configured to receive the translated languages and display a user-desired language. The user-desired language is selected from the received translated languages.
[0028] In another embodiment, the present invention discloses that the processing module further comprises a language detection module (210) configured to perform the meaningful translation operation, a language processing module configured to perform the word-to-word translation operation, and a memory module.
[0029] In another embodiment, the present invention discloses that the memory module stores a multi-lingual language dictionary, pre-stored languages, the sensed visual information (including but not limited to still photography, motion picture photography, video or audio recordings, graphic arts, visual aids, models, displays, visual presentations, etc.), the processed visual information, and a trained deep learning model.
[0030] In another embodiment, the system of the present invention discloses that the identification and the translation of the detected multiple languages are obtained through the trained deep learning model.
[0031] In another embodiment, a multi-lingual translator system for a person who is not able to understand other spoken languages is disclosed. The multi-lingual translator system translates multiple languages into a common user-defined language.
[0032] The multi-lingual translator system comprises a portable serverless edge device comprising a plurality of visual sensors (including but not limited to an image capturing sensor). The visual sensors, exemplified here as an image capturing sensor, are configured to capture images of the surroundings of the user. The system further includes a processing module configured to receive the captured images and process them to detect/identify multiple languages (including but not limited to Indian languages) in the captured image. The system further includes an output display module configured to receive the multiple translated languages from the processing module and output the translated information relating to the detected/identified languages to the user, and a communicable input system for changing the two-mode operation of the serverless edge device as per the input selected by the user.
[0033] In an exemplary implementation, a translation method using a handheld translator is disclosed. The method includes capturing, by an image capturing sensor, one or more images of the surroundings of the user. The method further includes identifying, by a processing module of a serverless edge device, multiple languages in the captured images. The method further includes selecting, by the processing module, a two-mode operation to translate the identified multiple languages. The method further includes transferring, by the processing module, the translated language to a display module. The method further includes displaying, by the display output module, a user-desired language selected from the multiple translated languages.
[0034] In another exemplary implementation, the present invention discloses a method of translating multiple Indian languages for a user while retaining the meaning of the sentence. The method comprises the steps of capturing, by an image capturing module, one or more images of the surroundings of the user. In the next step, the processing module receives the captured images for detecting/identifying the multiple languages. In the next step, the multiple languages detected in the captured images are translated by the processing module and then sent to a display output module. The next step involves analysis of the translated language to retain the meaning of the sentence, and sending the output, a message relating to the translated language for the user, to the display output module.
[0035] In another embodiment, the present invention discloses a serverless edge device comprising a sensor to capture images, a processing module, and a display output module. The processing module is configured to receive the captured images of the surroundings of the user, process them to detect/identify a plurality of languages in the captured image, and then translate the detected multiple languages into the output language. The display module of the system receives the detected/identified/translated languages from the processing module and outputs the translated language relating to the detected/identified languages to the user.
[0036] In another embodiment, the present invention discloses a fast and more reliable method for the detection and recognition of the region of interest with respect to the presence of language.
[0037] In another embodiment, the present invention discloses a system that takes input from a user to switch the process from word-by-word translation to meaningful translation and vice versa.
[0038] In another embodiment, the present invention discloses a system to take commands. The commands include selecting one of the two modes of operation of the system, where the two modes of operation include a word-by-word translation and a meaningful translation.
[0039] In another embodiment, the present invention discloses a processing module that is backed by a serverless edge device/system. The processing module of the edge device further comprises: (a) a language detection module configured to detect/identify multiple languages in the captured image and output the translated language to the display output module, where the language detection module detects/identifies the multiple languages in the captured visual information by comparing them with languages pre-stored in the memory module; and (b) a language processing module configured to translate the detected languages word by word into a single language, check the output language requested by the user, and output the translated language in the requested language to the display output module.
[0040] In another embodiment, the present invention discloses a processing module. The processing module further comprises a graphics accelerator and on-board memory devices to store a plurality of languages, captured images, processed language information, and a trained deep learning model to detect/identify and then translate the language.
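Purely as a structural sketch of what such on-board storage might hold, a container type is given below; every field name, type, and default is an assumption made for illustration, not a detail of the specification.

from dataclasses import dataclass, field

# Structural sketch only: field names, types, and defaults are illustrative
# assumptions about the on-board storage listed in this paragraph.
@dataclass
class ProcessingModuleStore:
    dictionary: dict[str, str] = field(default_factory=dict)           # multi-lingual dictionary
    prestored_sentences: dict[str, str] = field(default_factory=dict)  # sentence pairs
    captured_images: list[bytes] = field(default_factory=list)         # raw sensor frames
    model_path: str = "translator.onnx"  # trained deep-learning model (hypothetical file)

store = ProcessingModuleStore()
store.dictionary["namaste"] = "hello"  # example entry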
[0041] Further, the two-mode language translation operation is performed to translate the detected multiple languages into the user-selected language.
[0042] In another embodiment, the present invention discloses that the language detection module detects the multiple languages in the received visual information by comparing them with the multi-lingual language dictionary.
[0043] In another embodiment, the present invention discloses a two-mode language translation operation that includes a word-by-word language translation operation and a meaningful language translation operation. The word-by-word translation operation includes translating each word individually present in the detected multiple languages into words of the user-selected language, by using the dictionary. The meaningful language translation operation, however, includes translating the detected multiple languages sentence by sentence into the user-selected language.
[0044] In another exemplary implementation, the present invention discloses a translation method using a handheld translator. The method comprises the steps of sensing, by a plurality of visual sensors, one or more items of visual information. In the next step, applying, by a processing module, a trained deep learning model to the sensed visual information. Further, in the next step, detecting, by a language detection module, multiple languages present in the received visual information. In the next step, performing, by a language processing module, a two-mode language translation operation on the detected multiple languages. In the next step, translating, by the language processing module, the multiple languages into a user-selected language. In the next step, transferring, by the language processing module, the multiple translated languages to a display module, and displaying, by a display output module, the translated language.
[0045] In another exemplary implementation, the present invention discloses a fast and more reliable method for the detection and recognition of the region of interest with respect to the presence of language.
[0046] Fig. 1 illustrates a block diagram depicting a handheld multi-lingual translator system, according to an embodiment of the present invention.
[0047] The multi-lingual translator system of the present invention comprises a plurality of visual sensors (202), a portable serverless edge device (203), a memory module (204), a graphic accelerator (206), a processing module (208), and a display output module (220). The processing module (208) of the present system further comprises a language detection module (210) and a language processing module (212).
[0048] Initially, the plurality of visual sensors (202) sense one or more items of visual information present in the surroundings of a user. The sensed visual information is then sent to a processing module (208). The processing module (208) is configured to receive the sensed visual information and process it to detect/identify multiple languages (including but not limited to regional languages). After identification, the detected multiple languages are translated into a user-desired language.
[0049] After receiving the user's command, the processing module (208) switches to one of the two-mode translation operations. The two-mode operation includes a word-to-word translation operation and a meaningful translation operation. The meaningful translation operation is performed by a language detection module (210) of the processing module (208), whereas the word-to-word translation operation is performed by a language processing module (212) of the processing module (208).
[0050] In the meaningful translation operation, a comparison of the detected multiple languages with the pre-stored multiple languages in the memory module (204) is performed. This comparison of the detected languages with the pre-stored languages is a sentence-by-sentence comparison. The word-to-word translation operation, by contrast, includes translating each word individually present in the detected multiple languages into words of the user-selected language, by using the multi-lingual language dictionary stored in the memory module (204).
[0051] The system further includes a display output module (220) configured to receive the multiple translated languages and display the user-desired language. The user-desired language is selected from the received translated languages.
[0052] The system further includes a numeric keypad (214) and a battery (216). The numeric keypad (214) is configured to take input from the user, whereas the battery (216) provides power to the system.
[0053] In another exemplary implementation, the present invention relates to a multi-lingual translator system for a person who is not able to understand other languages. The system comprises a portable serverless edge device (203) which further comprises an image capturing sensor (an example of the visual sensors) configured to capture images of the surroundings of the user, the processing module (208) to receive the captured images and process them to detect/identify multiple Indian languages in the captured image, and an output display module (220) to receive multiple translated languages from the processing module (208) and output the translated information relating to the detected/identified languages to the user. The system further includes a communicable input system (for example, the display or numeric keypad) for changing the modes (two-mode operation) of the edge device (203).
[0054] In another exemplary implementation, the present invention relates to a method of translating multiple Indian languages for a user while retaining the meaning of the sentence. The method comprises the steps of capturing, by an image capturing sensor, an image of the surroundings of the user. In the next step, the processing module (208) receives the captured images and detects/identifies the languages. The multiple languages detected in the captured image are translated by the language detection module (210) and the language processing module (212) of the processing module (208), and are then sent to the display output module (220) for displaying the user-selected language. The next step involves analysis of the translated language to retain the meaning of the sentence and sending the output to the display output module (220). A message relating to the translated language is presented to the user.
[0055] Fig. 2 illustrates a schematic diagram depicting multiple languages as an input, according to an embodiment of the present invention.
[0056] Visual information of the surroundings of the user is sensed as an input by the plurality of visual sensors (202), herein exemplified as an image capturing sensor for ease of understanding. The visual information is sent to the processing module (208) for detection/identification of the multiple languages present in it.
[0057] Fig. 3 illustrates a flow diagram depicting a translator for translating, according to an exemplary implementation of the present invention.
[0058] Referring now to Fig. 3, which illustrates a flow diagram (300) for translating, according to an exemplary implementation of the present invention.
[0059] At step 302, the process of lingual translation using deep learning starts. At step 304, the image capturing sensor (as an example of a visual sensor) captures information about the surroundings of the user, which is transmitted to the processing module (208). At step 306, the processing module (208) receives the captured images and processes them to detect/identify multiple Indian languages in the captured image. At step 308, the language-detected image is displayed. If the detected language is the same as the user-selected language, the result is output as an image on the display module (220) at step 318 and the process stops; if not, the process moves to step 312. At step 312, the two-mode translation operation is performed: the processing module (208) selects one of the two translation operations, the switch being decided based on the input of the user. Once the mode is decided, the process moves to the corresponding step. At step 314, the word-by-word translation operation is carried out by the language processing module (212); this operation translates each word individually present in the detected multiple languages into words of the user-selected language, using the multi-lingual language dictionary. At step 316, the meaningful translation operation is carried out by the language detection module (210); this operation compares the detected multiple languages with the pre-stored multiple languages, sentence by sentence. Further, at step 318, the result is output as an image on the display module (220). At step 320, the process returns to start.
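Read as code, the Fig. 3 flow might look roughly like the sketch below. Every helper and data value (capture, detect_language, the inline dictionaries) is a hypothetical stub standing in for the sensor, identification, and translation modules; step numbers from the figure appear in the comments.

# Hedged sketch of the Fig. 3 control flow (steps 302-320). All helpers and
# data are hypothetical stubs for the sensor/processing/display modules.
PRESTORED = {"aap kaise hain": "how are you"}              # hypothetical pairs
WORD_DICT = {"aap": "you", "kaise": "how", "hain": "are"}

def capture() -> str:
    return "aap kaise hain"  # 304: stand-in for image capture plus OCR

def detect_language(text: str) -> str:
    return "hi"              # 306: stand-in for language identification

def run_once(user_lang: str = "en", mode: str = "meaningful") -> str:
    text = capture()                          # 304
    if detect_language(text) == user_lang:    # 308: already readable
        return text                           # 318: output, then stop
    if mode == "word-by-word":                # 312: user-selected mode switch
        return " ".join(WORD_DICT.get(w, w) for w in text.split())  # 314
    return PRESTORED.get(text, text)          # 316: sentence-by-sentence comparison

print(run_once())  # 318: display the result; 320 loops back on the device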
[0060] Fig. 4 illustrates a flowchart (400) of the translator system, according to an exemplary implementation of the present invention.
[0061] At step 402, the image capturing sensor captures images of the surroundings of the user as an input. At step 404, Language L1…, Language L3…, Language L5…, Language Ln are detected and recognized from the captured images using OCR. At step 406, cleaning of language L1 is performed using the dictionary of language L1 stored in the memory module. Likewise, for L3 and L5, cleaning of the corresponding language is performed using its corresponding dictionary. At step 408, the outputs are merged. At step 410, conversion of the multiple languages into the user-desired language is performed, and at step 412, the desired output is displayed to the user.
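A minimal sketch of this per-language clean-merge-convert flow, under the assumption that the OCR stage yields (language, word) segments, is given below; the segment values and all dictionaries are invented for illustration, not data from the specification.

# Hedged sketch of the Fig. 4 flow (steps 402-412). The OCR segments and all
# dictionaries are illustrative assumptions.
ocr_segments = [("hi", "namaste"), ("ta", "vanakkam")]  # 402-404: OCR output by language

CLEAN = {"hi": {"namaste"}, "ta": {"vanakkam"}}          # 406: per-language dictionaries
TO_USER = {"namaste": "hello", "vanakkam": "greetings"}  # 410: target-language mappings

merged = []
for lang, word in ocr_segments:
    if word in CLEAN.get(lang, set()):  # 406: keep words the language's dictionary knows
        merged.append(word)             # 408: merge the cleaned outputs

output = " ".join(TO_USER.get(w, w) for w in merged)  # 410: convert to user language
print(output)                                         # 412: display the desired output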
[0062] Fig. 5 illustrates a translation method using a handheld translator for translating multiple languages, according to an exemplary implementation of the present invention.
[0063] Referring now to Fig. 5, which illustrates a flowchart (500) of translating multiple languages into a user-desired language, according to an exemplary implementation of the present invention. The flowchart (500) of Fig. 5 is explained below with reference to Fig. 1 as described above.
[0064] At step 502, sensing, by a plurality of visual sensors (202), one or more visual information;
[0065] At step 504, identifying, by a processing module (208) of a serverless edge device (203), multiple languages in the received visual information;
[0066] At step 506, selecting, by the processing module (208), a two-mode operation to translate the identified multiple languages;
[0067] At step 508, transferring, by the processing module (208), the translated language to a display module (220);
[0068] At step 510, displaying, by the display output module (220), a user-desired language selected from the translated languages.
[0069] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the substance of the invention may occur to a person skilled in the art, the invention should be construed to include everything within the scope of the invention.

CLAIMS:
1. A handheld multi translator system adapted for language translation, the system comprising:
a portable server-less edge device, the portable server-less edge device further comprising:
a plurality of sensors (202) configured to sense one or more visual information;
a processing module (208) configured to identify multiple languages in the received visual information, and
select two-mode operations to translate the identified multiple languages; and
a display output module (220) configured to receive the translated languages, and
display a user-desired language, wherein the user-desired language is selected from the received translated languages.
2. The system as claimed in claim 1, wherein the two-mode operation includes a word-to-word translation operation and a meaningful translation operation.
3. The system as claimed in claim 1, wherein the processing module (208) further comprises:
a language detection module (210) configured to perform the meaningful translation operation;
a language processing module (212) configured to perform the word-to-word translation operation; and
a memory module (204).
4. The system as claimed in claim 3, wherein the memory module (204) is configured to store a multi-lingual language dictionary, pre-stored languages, the sensed visual information, the processed visual information, and a trained deep learning model.
5. The system as claimed in claim 3, wherein the meaningful translation operation includes comparing the detected multiple languages with the pre-stored multiple languages.
6. The system as claimed in claim 5, wherein the comparison of the detected languages with the pre-stored languages is a sentence-by-sentence comparison.
7. The system as claimed in claim 3, wherein the word-to-word translation operation includes translating each word individually present in the detected multiple languages, into words of the user-selected language, by using the multi-lingual language dictionary.
8. The system as claimed in claim 1, wherein the identification and the translation of the detected multiple languages are obtained through the trained deep learning model.
9. A translation method using a handheld translator, the method comprising:
sensing, by a plurality of sensors (202), one or more visual information;
identifying, by a processing module (208) of a serverless edge device, multiple languages in the received visual information;
selecting, by the processing module, a two-mode operation to translate the identified multiple languages;
transferring, by the processing module (208), the translated language to a display module (220);
displaying, by the display output module (220), a user-desired language selected from the translated languages.
10. The method as claimed in claim 9, wherein the two-mode operation includes a meaningful translation operation and a word-to-word translation operation.
11. The method as claimed in claim 10, wherein the meaningful translation operation includes comparing the detected multiple languages with the pre-stored multiple languages.
12. The method as claimed in claim 11, wherein comparing of the detected languages with the pre-stored languages is a sentence-by-sentence comparison.
13. The method as claimed in claim 11, wherein the word-to-word translation operation includes translating each word individually present in the detected multiple languages, into words of the user-selected language, by using the multi-lingual language dictionary.

Documents

Application Documents

# Name Date
1 202141013413-PROVISIONAL SPECIFICATION [26-03-2021(online)].pdf 2021-03-26
2 202141013413-FORM 1 [26-03-2021(online)].pdf 2021-03-26
3 202141013413-DRAWINGS [26-03-2021(online)].pdf 2021-03-26
4 202141013413-Proof of Right [04-05-2021(online)].pdf 2021-05-04
5 202141013413-FORM-26 [15-07-2021(online)].pdf 2021-07-15
6 202141013413-FORM 3 [05-10-2021(online)].pdf 2021-10-05
7 202141013413-ENDORSEMENT BY INVENTORS [05-10-2021(online)].pdf 2021-10-05
8 202141013413-DRAWING [05-10-2021(online)].pdf 2021-10-05
9 202141013413-COMPLETE SPECIFICATION [05-10-2021(online)].pdf 2021-10-05
10 202141013413-FORM 18 [22-07-2022(online)].pdf 2022-07-22
11 202141013413-FER.pdf 2022-12-05
12 202141013413-FER_SER_REPLY [05-06-2023(online)].pdf 2023-06-05
13 202141013413-DRAWING [05-06-2023(online)].pdf 2023-06-05
14 202141013413-COMPLETE SPECIFICATION [05-06-2023(online)].pdf 2023-06-05
15 202141013413-CLAIMS [05-06-2023(online)].pdf 2023-06-05
16 202141013413-POA [07-10-2024(online)].pdf 2024-10-07
17 202141013413-FORM 13 [07-10-2024(online)].pdf 2024-10-07
18 202141013413-AMENDED DOCUMENTS [07-10-2024(online)].pdf 2024-10-07
19 202141013413-Response to office action [01-11-2024(online)].pdf 2024-11-01
20 202141013413-Response to office action [25-06-2025(online)].pdf 2025-06-25
21 202141013413-US(14)-HearingNotice-(HearingDate-04-12-2025).pdf 2025-10-24

Search Strategy

1 SearchHistory(3)E_02-12-2022.pdf