
System And Method For Speech To Text Translation Based On Continual Learning Technique

Abstract: Disclosed is a speech-to-text translation system (100). The speech-to-text translation system (100) includes an input unit (102), a storage unit (104), processing circuitry (106), and an output unit (108). The input unit (102) is configured to receive a current language pair. The storage unit (104), coupled to the input unit (102), is configured to store previous language pairs and the current language pair. The processing circuitry (106), coupled to the storage unit (104), is configured to (i) retain the previous language pairs in equal proportions to the current language pair by an augmented proportional language sampling (APLS) technique, (ii) balance the previous language pairs and the current language pair by a random sampling (RS) technique, and (iii) select and maintain a representative set by a gradient representative sampling (GRS) technique. The output unit (108), coupled to the processing circuitry (106), is configured to display the text corresponding to the current language pair. FIG. 1A is the reference figure.


Patent Information

Filing Date: 19 October 2023
Publication Number: 33/2024
Publication Type: INA
Invention Field: ELECTRONICS
Grant Date: 2025-11-06

Applicants

IITI Drishti CPS Foundation
IIT Indore, Indore, Madhya Pradesh, 453552, India

Inventors

1. Chandresh Kumar Maurya
CSE department, IIT Indore, Indore, Madhya Pradesh, 453552, India
2. Balaram Sarkar
Abhaycharan Ashram, Mayapur, Nadia, West Bengal, 741313, India
3. Ankit Malviya
202 A Block, Phulwani Plaza, Mahashweta Nagar, Ujjain, Madhya Pradesh, 456010, India
4. Pranav Karande
3, Sainath Colony Sector-B, Indore, Madhya Pradesh, 452018, India

Specification

DESC:TECHNICAL FIELD
The present disclosure relates generally to the field of language translations. More specifically, the present disclosure relates to a system and a method for speech-to-text translation based on a continual learning technique.
BACKGROUND
Conventional speech-to-text translation techniques have long grappled with the challenge of adapting to new language pairs and evolving linguistic contexts. These methods traditionally relied on rigid paradigms that struggled to seamlessly integrate emerging languages into their frameworks. As a result, the process of incorporating new linguistic nuances often necessitated extensive retraining efforts, consuming valuable time and resources. This inflexibility hindered the scalability and practicality of speech-to-text translation systems, particularly in real-world applications requiring multilingual capabilities.
Moreover, the phenomenon of Catastrophic Forgetting (CF) has posed a significant obstacle in the continual learning journey of speech-to-text models. CF describes a scenario where the proficiency gained in previously mastered language pairs deteriorates as the model attempts to acclimate to new linguistic environments. This detrimental decline in performance undermines the reliability and effectiveness of the translation process, making it challenging for speech-to-text systems to maintain consistent accuracy across different languages over time. Addressing CF has thus become a pivotal endeavor in advancing the capabilities of speech-to-text translation technologies.
Therefore, there remains a need to provide an efficient translation system and method that is capable of solving aforementioned problems of conventional translation systems.

SUMMARY
Disclosed is a speech-to-text translation system based on a continual learning technique. The speech-to-text translation system includes an input unit, a storage unit, processing circuitry, and an output unit. The input unit is configured to receive a current language pair. The storage unit coupled to the input unit is configured to store previous language pairs and the current language pair. The processing circuitry is coupled to the storage unit and is configured to (i) combine the previous language pairs with the current language pair, (ii) retain the previous language pairs in equal proportions to the current language pair by an augmented proportional language sampling (APLS) technique, (iii) balance the previous language pairs and the current language pair in the combined language pair by randomly pairing the current language pair with the previous language pairs by way of a random sampling (RS) technique, and (iv) select and maintain a representative set of the combined language pair that approximates the gradient of the previous language pairs with respect to the current language pair by a gradient representative sampling (GRS) technique.
In some embodiments of the present disclosure, the current language pair corresponds to a first language and a second language, wherein the first language is given as input as speech and the second language is selected from the input unit.
In some embodiments of the present disclosure, the processing circuitry facilitates the training of the speech-to-text translation model in a continual manner.
In some embodiments of the present disclosure, the system further comprises an output unit that is configured to display text corresponding to the first language of the current language pair.
In some embodiments of the present disclosure, the APLS technique incorporates an octave value-based pitch augmentation technique to balance the number of previous language pairs, wherein the octave value may preferably range between -0.5 and 0.5.
In some embodiments of the present disclosure, the processing circuitry facilitates the speech-to-text translation in a continual manner.
In some embodiments of the present disclosure, the processing circuitry utilizes losses produced by the APLS technique and the RS technique for backpropagation and updating of the previous language pairs.
In an embodiment of the present disclosure, a method for speech-to-text translation is disclosed. The method facilitates receiving a current language pair, by way of an input unit. The method further facilitates combining the current language pair with previous language pairs, by way of processing circuitry. The method further facilitates retaining the previous language pairs in equal proportion to the current language pair. The method further facilitates balancing the previous language pairs and the current language pair by way of a random sampling technique. The method further facilitates maintaining a representative set that approximates a gradient of the previous language pairs, by way of the processing circuitry. Furthermore, the method facilitates displaying a text corresponding to the current language pair, by way of an output unit.
BRIEF DESCRIPTION OF THE DRAWINGS
The description refers to the provided drawings, in which similar reference characters refer to similar parts throughout the different views, and in which:
FIG. 1A illustrates a block diagram of a system for speech-to-text translation based on continual learning techniques, in accordance with an embodiment of the present disclosure;
FIG. 1B illustrates an architectural diagram of processing circuitry for speech-to-text translation based on continual learning technique, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates an architectural diagram of the processing circuitry involving a proportional language sampling engine, a random sampling engine, and a speech-to-text transforming engine for the speech-to-text translation, in accordance with an embodiment of the present disclosure; and
FIG. 3 illustrates a flow chart that depicts a method for speech-to-text translation, in accordance with an embodiment of the present disclosure.
To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures.
DETAILED DESCRIPTION OF DRAWINGS
As noted above, there exists a long-felt need in the art for an end-to-end speech-to-text framework that may learn continually. Accordingly, the present disclosure provides a system and a method for speech-to-text translation based on a continual learning technique. The system may employ various sampling techniques to process language pairs that may be used by the system for speech-to-text translation. Specifically, the system may employ a gradient representative sampling, an augmented proportional language sampling, and a random sampling for training the speech-to-text translation model of the system. The system may be trained using the continual learning approach that may continually update the speech-to-text translation model while maintaining a buffer of previously trained language pairs.
The term “continual learning technique” as used herein refers to a technique in which a speech-to-text translation is trained in a continual manner to facilitate speech-to-text translation.
The term “language pair” as used herein refers to two different languages that are involved in speech-to-text translation by the system 100. For example, one language pair refers to two different languages, out of which one language is given as an input (in the form of speech) and the other language is obtained as an output (in the form of text). For example, if the system translates German audio into English text, then the two languages, German and English, form one language pair.
The term “current language pair” as used herein refers to the two different languages that are currently involved in speech-to-text translation by the system. Both languages may be given as input through the input unit: one may be given in the form of speech, while the other may be selected from the input unit by the user. In other words, the term “current language pair” refers to two different languages, i.e., first and second languages, such that the second language is obtained or retrieved upon translation of the first language.
The term “previous language pair” or “previous language pairs” as used herein refers to a pair of two different languages that are already translated by the system of the present disclosure. In other words, the term refers to a pair of two different languages on which the system of the present disclosure is already trained. For example, in a first task, the system of the present disclosure is trained to translate speech in the French language to text in the Hindi language. The language pair consisting of the French and Hindi languages may be considered a previous language pair for a second task. Specifically, each translation prior to the current language pair may be considered a previous language pair with respect to the current language pair.
FIG. 1A illustrates a block diagram of a system 100 for speech-to-text translation based on a continual learning technique, in accordance with an embodiment of the present disclosure. Specifically, the system 100 may translate speech in one language into text in a different language. In other words, the system 100 may be adapted to translate audio of a first language into text of a second language. For example, the system 100 may be adapted to convert audio in the German language to text in the English language. To facilitate speech-to-text translation, the system 100 may employ three sampling techniques, such as an Augmented Proportional Language Sampling (APLS) technique, a Random Sampling (RS) technique, and a Gradient Representative Sampling (GRS) technique.
The system 100 may include an input unit 102, a storage / buffer unit 104, processing circuitry 106, and an output unit 108.
The input unit 102 may be configured to receive a current language pair. Specifically, the input unit 102 may be configured to receive one or more voice signals (hereinafter referred to as “voice signals”) from the user in one language, along with a selection of another language into which the speech is to be translated. For example, the input unit 102 may be configured to receive speech from the user in the Russian language, with English selected as the language into which the speech is to be translated. In other words, the input unit 102 may be configured to receive a first language, i.e., in the form of speech from the user, and a second language into which the user intends to translate the first language.
In some embodiments of the present disclosure, the input unit 102 may include, one of, a microphone, a smartphone and a tablet, a voice-activated remote control, in-car voice command systems, a smartwatch, a headset with microphones, a voice assistant device and/or a combination thereof. Embodiments of the present disclosure are intended to include or otherwise cover any type of known and later developed input unit, without deviating from the scope of the present disclosure.
The storage unit 104 may be coupled to the input unit 102. The storage unit 104 may be configured to store previous language pairs and the current language pair. The storage unit 104 may maintain a replay buffer to store the previous language pairs. The replay buffer may be updated only upon completion of training on a language pair.
In some embodiment of the present disclosure, a buffer update method, following the completion of task t and with the existing buffer Bt in place, focuses on replenishing the updated buffer Bt+1 with fresh task data. This process entails crucial steps, such as calculating the proportion of samples allocated to each language within the updated buffer (denoted as f), and subsequently conducting random uniform sampling from both the existing buffer and the newly acquired task data.
This technique may ensure a balanced representation of the previous language pairs and the current language pair in the replay memory buffer, facilitating continual learning across multiple tasks.
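The buffer update described above may be sketched as follows. This is a minimal Python illustration, not the patented implementation: it assumes a list-based buffer of (language, sample) tuples and an equal per-language share standing in for the proportion f.

```python
import random

def update_buffer(buffer, task_data, capacity):
    """Refill the fixed-size replay buffer after finishing a task.

    buffer:    list of (language, sample) tuples from previous tasks
    task_data: list of (language, sample) tuples from the task just trained
    capacity:  fixed size of the replay buffer
    """
    pool = buffer + task_data
    languages = sorted({lang for lang, _ in pool})
    # equal share of the buffer allocated to each language seen so far
    per_lang = capacity // len(languages)
    new_buffer = []
    for lang in languages:
        candidates = [item for item in pool if item[0] == lang]
        # random uniform sampling from old buffer plus new task data
        new_buffer.extend(random.sample(candidates,
                                        min(per_lang, len(candidates))))
    return new_buffer
```

With a capacity of 8 and two languages in the pool, each language receives four slots, keeping earlier pairs represented alongside the new task.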
The processing circuitry 106 may be coupled to the input unit 102, the storage unit 104, and the output unit 108. The processing circuitry 106 may be configured to train a speech-to-text translation model in a continual manner. In other words, the processing circuitry 106 may be configured to continually train the speech-to-text translation model. For example, the processing circuitry 106 may be adapted to convert audio in the German language to text in the English language. The processing circuitry 106 may employ three sampling techniques to facilitate speech-to-text translation. Specifically, the processing circuitry 106 may employ sampling techniques such as the augmented proportional language sampling (APLS) technique, the random sampling (RS) technique, and the gradient representative sampling (GRS) technique. The combination of the three sampling techniques may eliminate the need to retain all previously trained language pairs, i.e., the previous language pairs, and thereby may prevent catastrophic forgetting.
The processing circuitry 106 may be configured to facilitate speech-to-text translation by way of the continual learning technique. Specifically, the processing circuitry 106 may be configured to continually train the speech-to-text translation model that may facilitate the speech-to-text translation.
The processing circuitry 106 may combine the current language pair with the previous language pairs. Specifically, the processing circuitry 106 may be configured to combine the current language pair from the input unit 102 and the previous language pairs stored in the storage unit 104 to generate a combined language pair.
The processing circuitry 106 may be configured to balance the number of previous language pairs. Specifically, a proportional language sampling technique, i.e., the normal PLS technique, may facilitate the processing circuitry 106 to balance the number of previous language pairs by duplicating language pairs from minority languages, i.e., the languages for which the system 100 has already executed speech-to-text translation based on the continual learning technique. Since the PLS technique balances the number of previous language pairs by repeatedly considering the same previous language pairs, the processing circuitry 106 may employ the APLS technique, which adds an augmentation step to the normal PLS technique. The APLS technique may advantageously remove redundancy in the previous language pairs. Thus, the APLS technique may ensure that the previous language pairs are retained in equal proportion to the current language pair. The APLS technique may further mitigate, by augmentation, over-fitting to repeated previous language pairs that may be stored in the storage unit 104. The augmentation technique may include, but is not limited to, changing the pitch and length of the previous language pairs with various parameters. The APLS technique may incorporate an octave value-based pitch augmentation technique to balance the number of previous language pairs, such that the octave value may preferably range between -0.5 and 0.5. When the pitch is changed, the frequency also changes. The pitch may be higher or lower depending on the octave value. The APLS technique may further advantageously ensure that no two repeated language pairs are augmented with the same octave value.
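The octave-based pitch augmentation may be sketched as below. This is a simplified, hypothetical illustration using plain resampling (a production system would more likely use a phase vocoder or an audio library); `augment_duplicates` enforces the rule that no two copies of a sample share the same octave value.

```python
import numpy as np

def pitch_shift(signal, octave):
    """Shift the pitch of a mono signal by `octave` octaves via simple
    resampling; a positive octave raises every frequency by 2**octave
    and shortens the clip, so both pitch and length change."""
    assert -0.5 <= octave <= 0.5
    factor = 2.0 ** octave                     # frequency scaling factor
    n_out = int(round(len(signal) / factor))   # new length after the shift
    positions = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

def augment_duplicates(samples, rng):
    """Assign each duplicated sample a distinct octave in [-0.5, 0.5] so
    that no two copies are augmented with the same value."""
    octaves = rng.choice(np.linspace(-0.5, 0.5, 21),
                         size=len(samples), replace=False)
    return [(pitch_shift(s, o), o) for s, o in zip(samples, octaves)]
```

Sampling octaves without replacement is one simple way to guarantee distinct augmentation parameters across duplicates of the same pair.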
In some embodiments of the present disclosure, the PLS technique is a data replay technique that may facilitate balanced representation of samples across various languages, with a special focus on minority languages or those with limited resources. By drawing samples from the combined pool of current and buffered data ({Dt, Bt-1}), PLS assigns weights to each sample based on the ratio of the total combined data to the number of samples per language (|{Dt, Bt-1}|/|Xi|). This approach ensures that minority languages are oversampled while majority languages are appropriately represented.
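A minimal Python sketch of this weighting follows, assuming samples are (language, utterance) tuples; the per-sample weight total/|Xi| mirrors the ratio described above, so every language carries equal total weight.

```python
import random
from collections import Counter

def pls_sample(combined, batch_size, rng=random):
    """Proportional Language Sampling: each sample of language Xi gets
    weight |combined| / |Xi|, which gives every language the same total
    weight and therefore oversamples minority languages."""
    counts = Counter(lang for lang, _ in combined)
    total = len(combined)
    weights = [total / counts[lang] for lang, _ in combined]
    return rng.choices(combined, weights=weights, k=batch_size)
```

With a 90/10 split between two languages, a PLS-drawn batch contains the two languages in roughly equal numbers.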
The processing circuitry 106 may be configured to balance the previous language pairs and the current language pair by way of random sampling, i.e., the RS technique may assist the processing circuitry 106 in achieving equilibrium by randomly pairing the current language pair with the previous language pairs and pairing one previous language pair with another previous language pair. Specifically, the processing circuitry 106 may facilitate the RS technique to balance the previous language pairs and the current language pair, such that the current language pair may be in the majority while the previous language pairs appear as a uniform random sample.
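For contrast with PLS, the RS pathway can be sketched as a uniform draw over the pooled data. This is a hypothetical minimal form; the actual pairing strategy in the disclosure may differ.

```python
import random

def rs_sample(current, previous, batch_size, rng=random):
    """Random Sampling sketch: draw uniformly from the current task data
    and the replay buffer pooled together, so the (larger) current pair
    naturally dominates while previous pairs enter as a uniform random
    sample."""
    pool = current + previous
    return [rng.choice(pool) for _ in range(batch_size)]
```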
In some embodiments of the present disclosure, the processing circuitry 106 may utilize the losses produced by the APLS technique and the RS technique for the purposes of backpropagation and updating of the previous language pairs.
The processing circuitry 106 may employ input and output embeddings that may convert the sequences of words or tokens from the current language pair into continuous representations.
The processing circuitry 106 may be configured to select and maintain a representative set that may closely approximate a gradient of the previous language pairs with respect to the current language pair. Specifically, the GRS technique may facilitate selecting and maintaining the representative set that may closely approximate the gradient of the previous language pairs with respect to the current parameters. Preferably, the GRS technique may facilitate updating the storage unit 104 with better language pairs, i.e., rich and diverse language pairs. The processing circuitry 106 may therefore advantageously select rich and diverse language pairs that may be stored in the storage unit 104. The GRS technique may therefore advantageously minimize noise and may advantageously facilitate storing the previous language pairs with maximum diversity in the storage unit 104. The storage unit 104 may facilitate providing the rich and diverse set of language pairs during subsequent (future) training of the speech-to-text translation model. In other words, during subsequent (future) training of the speech-to-text translation model, the rich and diverse set of language pairs may be retrieved from the storage unit 104.
In some embodiments of the present disclosure, the GRS technique may be based on Batch Stochastic Gradient Descent (BSGD) that may operate by dividing the previous language pairs into smaller batches.
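One way to realize such a representative set is a greedy gradient-matching selection. The sketch below is an assumed strategy operating on precomputed per-sample gradients; the disclosure only requires that the subset's mean gradient approximate that of the full set, so the greedy rule is illustrative.

```python
import numpy as np

def grs_select(grads, k):
    """Greedily pick k samples whose mean per-sample gradient best
    approximates the mean gradient of the whole set (rows of `grads`).
    At each step the sample that most reduces the gap to the full-set
    mean gradient is added."""
    target = grads.mean(axis=0)
    chosen = []
    running = np.zeros_like(target)
    for step in range(1, k + 1):
        best, best_err = None, None
        for i in range(len(grads)):
            if i in chosen:
                continue
            err = np.linalg.norm((running + grads[i]) / step - target)
            if best_err is None or err < best_err:
                best, best_err = i, err
        chosen.append(best)
        running += grads[best]
    return chosen
```

With k equal to the full set size the subset mean matches the full mean exactly; for small k, the first pick is simply the gradient closest to the full-set mean.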
The optimized language pairs may then be forwarded to an encoder that may process the optimized training dataset using attention to capture the contextual information. Specifically, the encoder may capture the contextual information of the current language pair and the previous language pairs that may be optimized by the sampling techniques (APLS and RS).
The contextual information of the optimized training dataset may be introduced to a decoder that may generate the output sequence by attending to the continuous representations through the attention mechanisms. The output sequence generated by the decoder may then be mapped to the target vocabulary by the final linear layer.
The output unit 108 may be coupled to the processing circuitry 106. The output unit 108 may be configured to display the text generated from the current language pair. Specifically, the output unit 108 may be configured to generate the text translation of the voice signals in the desired language.
In some embodiments of the present disclosure, the output unit 108 may include, any one of, a computer monitor, a smartphone, a tablet, a projector screen, and/or a combination thereof. Embodiments of the present disclosure are intended to include and/or otherwise cover any type of known and later developed output unit, without deviating from the scope of the present disclosure.
In operation, the system 100 may enable the user to provide the current language pair for the conversion of the first language of the current language pair into the second language of the current language pair. The current language pair may be provided by the user via the input unit 102. The storage unit 104 may be configured to store the previous language pairs and the current language pair. The processing circuitry 106 may be coupled to the storage unit 104. The processing circuitry 106 may be configured to generate a combined language pair by combining both the current language pair and the previous language pairs that may be stored in the storage unit 104. Then, the processing circuitry 106 may employ the augmented proportional language sampling (APLS) technique to ensure that the previous language pairs are retained in the storage unit 104 in proportions equal to the current language pair. The APLS technique may facilitate a balanced representation of linguistic diversity over time. Then, the processing circuitry 106 may employ the random sampling (RS) technique to randomly pair the current language pair with previous language pairs, further enhancing the diversity and richness of the combined language pair. Finally, to approximate the gradient of previous language pairs with the current one, the processing circuitry 106 may utilize the gradient representative sampling (GRS) technique, selecting and maintaining a representative set of the combined language pair.

FIG. 1B illustrates an architectural diagram of the processing circuitry 106 for speech-to-text translation based on the continual learning technique, in accordance with an embodiment of the present disclosure.
The processing circuitry 106 may receive the current language pair. Following this, the processing circuitry 106 may merge the current language pair with the previous language pairs stored in the storage unit 104. The Bi-sampler 110 may employ the combination of the APLS and the RS techniques to further enhance the language pairs. Subsequently, the processing circuitry 106 may facilitate the APLS technique to eliminate redundancy within the previous language pairs. The optimized language pairs may undergo processing within the language unit 112, consisting of the encoder and the decoder. The encoder may employ attention mechanisms to capture contextual information from both the current and optimized previous language pairs. This contextual information then may be utilized by the decoder to generate an output sequence, which may further be mapped to the target vocabulary through a final linear layer. The GRS technique may select and maintain a representative set, closely mirroring the gradient characteristics of the previous language pairs concerning the current language pair. The GRS technique may work in conjunction with the encoder and the decoder to optimize the learning of the languages in the replay buffer.
In some embodiments of the present disclosure, the bi-sampler may integrate both sampling techniques (PLS and RS) by incorporating two fully-connected layers within the decoder. One layer is dedicated to PLS, emphasizing class balance across languages, while the other focuses on RS, mitigating overfitting to minority languages. By optimizing a joint loss function, the bi-sampler may achieve a balanced trade-off between class balance and overfitting.
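A toy numerical sketch of this joint optimization follows, assuming two linear heads and mean-squared-error losses purely for illustration; the real heads are fully-connected layers inside the decoder with task-appropriate losses.

```python
import numpy as np

def joint_step(feats_pls, y_pls, feats_rs, y_rs, W_pls, W_rs, lr=0.1):
    """One bi-sampler step: two heads consume their own sampled batches
    (PLS batch for class balance, RS batch for overfitting control) and
    the joint loss is the sum of the two head losses, which is then used
    for the gradient update of both heads."""
    def head_loss_grad(X, y, W):
        err = X @ W - y
        loss = float(np.mean(err ** 2))
        grad = 2 * X.T @ err / len(y)
        return loss, grad

    l_pls, g_pls = head_loss_grad(feats_pls, y_pls, W_pls)
    l_rs, g_rs = head_loss_grad(feats_rs, y_rs, W_rs)
    joint = l_pls + l_rs                 # joint loss to backpropagate
    return joint, W_pls - lr * g_pls, W_rs - lr * g_rs
```

Repeated calls decrease the joint loss, illustrating how a single backpropagation pass can trade off the two objectives.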
FIG. 2 illustrates an architectural diagram of the processing circuitry 106 involving a proportional language sampling engine, a random sampling engine, and a speech-to-text (ST) transforming engine for the speech-to-text translation, in accordance with an embodiment of the present disclosure. The PLS (proportional language sampling) technique may be employed by a proportional language sampling (PLS) engine 202, and the RS (random sampling) technique may be employed by a random sampling (RS) engine 204.
The processing circuitry 106 may include the proportional language sampling (PLS) engine 202, the random sampling (RS) engine 204, and the speech-to-text (ST) transforming engine 206. The ST transforming engine 206 may include an encoding engine 208 and a decoding engine 210. The encoding engine 208 may facilitate the functioning of the encoder, and the decoding engine 210 may facilitate the functioning of the decoder.
The data from the current language pair and the fixed-size buffer are combined into a single dataset. From this combined dataset, samples may be drawn for both the Proportional Language Sampler (PLS) and the Random Sampler (RS). The samples selected by the PLS may undergo augmentation before being inputted into the encoding engine 208. On the other hand, the RS samples are directly fed into the encoding engine 208. Both PLS and RS pathways generate translations, which may be then processed by the decoding engine 210.
The output from both the encoding and decoding engines 208 and 210 may be transferred to an attention mechanism before being processed by the ST transforming engine 206. Finally, backpropagation loss is applied to optimize the entire encoding-decoding unit (a combination of the encoding engine 208 and the decoding engine 210).
For example, consider a speech translation model that has initially been trained on English and Spanish language pairs. The objective of the system 100 is to extend its capabilities to include French translation, while ensuring it does not forget its proficiency in English and Spanish. To accomplish this, the system 100 may employ a continual learning approach, where the model continuously learns new languages while retaining knowledge of previously encountered ones. At each step of the training process, the speech-to-text translation model may encounter language pairs from both the current task (French) and previous tasks (English and Spanish). Techniques such as the Proportional Language Sampler (PLS) and Random Sampler (RS) may be utilized to ensure a balanced representation of data across all languages. Moreover, a buffer strategy is implemented to store and utilize past language pairs, preventing the model from forgetting its proficiency in English and Spanish while adapting to French. Throughout training, the model learns from a combination of current and previous language pairs, guided by sampling strategies and augmentation techniques to enhance robustness and prevent overfitting.
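The worked example above can be condensed into a small Python sketch. Here `train_batch` is a hypothetical placeholder for the real encoder-decoder update, and the buffer refresh grants each language seen so far an equal share of a fixed-capacity buffer.

```python
import random

def continual_train(tasks, buffer_capacity, rng=random):
    """Continual-learning loop: train on tasks one after another, mixing
    each batch from the current task and the replay buffer so earlier
    language pairs keep being rehearsed, then refresh the buffer."""
    def train_batch(batch):
        pass                      # stand-in for the real model update

    buffer = []
    for lang, samples in tasks:
        task_data = [(lang, s) for s in samples]
        for _ in range(10):                     # a few steps per task
            pool = task_data + buffer
            train_batch([rng.choice(pool) for _ in range(4)])
        # buffer refresh: equal share per language seen so far
        pool = buffer + task_data
        langs = sorted({l for l, _ in pool})
        share = buffer_capacity // len(langs)
        buffer = []
        for l in langs:
            cand = [p for p in pool if p[0] == l]
            buffer.extend(rng.sample(cand, min(share, len(cand))))
    return buffer
```

After training on a second task, the buffer still holds samples of the first language, which is the rehearsal signal that counters catastrophic forgetting.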
FIG. 3 illustrates a flow chart that depicts a method 300 for speech-to-text translation by the system 100 of FIG. 1A, in accordance with an embodiment of the present disclosure.
At step 302, the system 100 may be configured to receive the current language pair. Specifically, the system 100 may be configured to receive the current language pair by way of the input unit 102.
At step 304, the system 100 may be configured to combine the current language pair with the previous language pairs. Specifically, the system 100 may be configured to combine the current language pair with the previous language pairs, by way of the processing circuitry 106.
At step 306, the system 100 may be configured to retain the previous language pairs in equal proportion to the current language pair, by way of the processing circuitry 106. Specifically, the APLS technique may advantageously remove redundancy in the previous language pairs. Thus, the APLS technique may ensure that the previous language pairs are retained in equal proportion to the current language pair. The APLS technique may further mitigate, by augmentation, over-fitting to repeated previous language pairs that may be stored in the storage unit 104.
At step 308, the system 100 may be configured to balance the previous language pairs and the current language pair by the random sampling technique, by way of the processing circuitry 106. Specifically, the RS technique may assist the processing circuitry 106 in achieving equilibrium by randomly pairing the current language pair with the previous language pairs and pairing one previous language pair with another previous language pair.
At step 310, the system 100 may be configured to maintain the representative set that may approximate the gradient of the previous language pairs with respect to the current parameters, by way of the processing circuitry 106. Specifically, the GRS technique may facilitate selecting and maintaining the representative set that may closely approximate the gradient of the previous language pairs with respect to the current parameters. Preferably, the GRS technique may facilitate updating the storage unit 104 with better language pairs, i.e., rich and diverse language pairs. The processing circuitry 106 may therefore advantageously select rich and diverse language pairs that may be stored in the storage unit 104. The GRS technique may therefore advantageously facilitate storing the language pairs with maximum diversity in the storage unit 104 in order to enhance the performance and adaptability of the speech-to-text translation.
At step 312, the system 100 may be configured to display the text corresponding to the current language pair, by way of the output unit 108.
Certain terms are used throughout the following description and claims to refer to particular features or components. As one skilled in the art will appreciate, different persons may refer to the same feature or component by different names. This document does not intend to distinguish between components or features that differ in name but not structure or function. While various aspects of the present disclosure have been illustrated and described, it will be clear that the present disclosure is not limited to these aspects only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present disclosure, as described in the claims.

CLAIMS:

1. A speech-to-text translation system (100) comprising:
an input unit (102) configured to receive a current language pair;
a storage unit (104) coupled to the input unit (102) configured to store previous language pairs and the current language pair;
processing circuitry (106) coupled to the storage unit (104) and configured to:
combine the previous language pairs with the current language pair to generate a combined language pair;
retain the previous language pairs in equal proportions to the current language pair in the storage unit (104) by way of an augmented proportional language sampling (APLS) technique;
balance the previous language pairs and the current language pairs in the combined language pair by randomly pairing the current language pair with the previous language pairs by way of a random sampling (RS) technique; and
select and maintain a representative set of the combined language pair that approximates the gradient of the previous language pairs with the current language pair by a gradient representative sampling (GRS) technique.
2. The speech-to-text translation system (100) as claimed in claim 1, wherein the current language pair corresponds to a first language and a second language, wherein the first language is given as input as speech and the second language is selected from the input unit (102).

3. The speech-to-text translation system (100) as claimed in claim 1, wherein the processing circuitry (106) facilitates training of a speech-to-text translation model in a continual manner.

4. The speech-to-text translation system (100) as claimed in claim 1, further comprising an output unit (108) configured to display text corresponding to the second language of the current language pair.

5. The speech-to-text translation system (100) as claimed in claim 1, wherein the APLS technique incorporates an octave-value-based pitch augmentation technique to balance the number of previous language pairs, wherein the octave value preferably ranges between -0.5 and 0.5.
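The octave-value-based pitch augmentation of claim 5 can be illustrated as follows. This is a sketch under stated assumptions: the function names are hypothetical, and pitch shifting is approximated by simple linear-interpolation resampling (which also changes duration), rather than by a production pitch-shift algorithm.

```python
import random

def pitch_augment(waveform, octave):
    """Shift pitch by resampling at rate 2**octave (sketch).
    Per claim 5, octave is preferably drawn from [-0.5, 0.5]."""
    rate = 2.0 ** octave
    out_len = max(1, int(len(waveform) / rate))
    out = []
    for i in range(out_len):
        pos = i * rate
        lo = int(pos)
        hi = min(lo + 1, len(waveform) - 1)
        frac = pos - lo
        # Linear interpolation between adjacent input samples.
        out.append(waveform[lo] * (1 - frac) + waveform[hi] * frac)
    return out

def augment_balance(prev_pairs, target_count, seed=0):
    """Grow the set of previous (waveform, text) pairs to target_count
    by pitch-augmenting random copies (hypothetical APLS balancing)."""
    rng = random.Random(seed)
    out = list(prev_pairs)
    while len(out) < target_count:
        wav, text = rng.choice(prev_pairs)
        out.append((pitch_augment(wav, rng.uniform(-0.5, 0.5)), text))
    return out
```

Augmenting copies of scarce previous pairs in this way keeps the number of previous pairs in proportion to the current language pair.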

6. The speech-to-text translation system (100) as claimed in claim 1, wherein the storage unit (104) maintains a replay buffer to store the previous language pairs.

7. The speech-to-text translation system (100) as claimed in claim 1, wherein the processing circuitry (106) facilitates the speech-to-text translation in a continual manner.

8. The speech-to-text translation system (100) as claimed in claim 1, wherein the processing circuitry (106) utilizes losses produced by the APLS technique and the RS technique for backpropagation and updating of the previous language pairs.

9. A method (300) for speech-to-text translation comprising:
receiving (302), by way of an input unit (102), a current language pair;
combining (304), by way of processing circuitry (106), the current language pair with previous language pairs;
retaining (306), by way of the processing circuitry (106), the previous language pairs in equal proportion to the current language pair;
balancing (308), by way of the processing circuitry (106), the previous language pairs and the current language pair;
maintaining (310), by way of the processing circuitry (106), a representative set that approximates a gradient of the previous language pairs; and
displaying (312), by way of an output unit (108), text corresponding to the current language pair.

Documents

Application Documents

# Name Date
1 202321071567-STATEMENT OF UNDERTAKING (FORM 3) [19-10-2023(online)].pdf 2023-10-19
2 202321071567-PROVISIONAL SPECIFICATION [19-10-2023(online)].pdf 2023-10-19
3 202321071567-FORM FOR SMALL ENTITY(FORM-28) [19-10-2023(online)].pdf 2023-10-19
4 202321071567-FORM FOR SMALL ENTITY [19-10-2023(online)].pdf 2023-10-19
5 202321071567-FORM 1 [19-10-2023(online)].pdf 2023-10-19
6 202321071567-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [19-10-2023(online)].pdf 2023-10-19
7 202321071567-EVIDENCE FOR REGISTRATION UNDER SSI [19-10-2023(online)].pdf 2023-10-19
8 202321071567-DRAWINGS [19-10-2023(online)].pdf 2023-10-19
9 202321071567-DECLARATION OF INVENTORSHIP (FORM 5) [19-10-2023(online)].pdf 2023-10-19
10 202321071567-FORM-26 [04-03-2024(online)].pdf 2024-03-04
11 202321071567-Proof of Right [19-04-2024(online)].pdf 2024-04-19
12 202321071567-ENDORSEMENT BY INVENTORS [03-05-2024(online)].pdf 2024-05-03
13 202321071567-DRAWING [03-05-2024(online)].pdf 2024-05-03
14 202321071567-COMPLETE SPECIFICATION [03-05-2024(online)].pdf 2024-05-03
15 Abstract.1.jpg 2024-06-19
16 202321071567-FORM-9 [12-08-2024(online)].pdf 2024-08-12
17 202321071567-MSME CERTIFICATE [05-11-2024(online)].pdf 2024-11-05
18 202321071567-FORM28 [05-11-2024(online)].pdf 2024-11-05
19 202321071567-FORM 18A [05-11-2024(online)].pdf 2024-11-05
20 202321071567-PA [31-12-2024(online)].pdf 2024-12-31
21 202321071567-FORM28 [31-12-2024(online)].pdf 2024-12-31
22 202321071567-EVIDENCE FOR REGISTRATION UNDER SSI [31-12-2024(online)].pdf 2024-12-31
23 202321071567-EDUCATIONAL INSTITUTION(S) [31-12-2024(online)].pdf 2024-12-31
24 202321071567-ASSIGNMENT DOCUMENTS [31-12-2024(online)].pdf 2024-12-31
25 202321071567-8(i)-Substitution-Change Of Applicant - Form 6 [31-12-2024(online)].pdf 2024-12-31
26 202321071567-FER.pdf 2025-01-23
27 202321071567-FORM 3 [04-02-2025(online)].pdf 2025-02-04
28 202321071567-FER_SER_REPLY [05-06-2025(online)].pdf 2025-06-05
29 202321071567-RELEVANT DOCUMENTS [08-08-2025(online)].pdf 2025-08-08
30 202321071567-FORM 13 [08-08-2025(online)].pdf 2025-08-08
31 202321071567-US(14)-HearingNotice-(HearingDate-10-10-2025).pdf 2025-09-22
32 202321071567-Correspondence to notify the Controller [25-09-2025(online)].pdf 2025-09-25
33 202321071567-Written submissions and relevant documents [27-10-2025(online)].pdf 2025-10-27
34 202321071567-PatentCertificate06-11-2025.pdf 2025-11-06

Search Strategy

1 SearchHistoryE_07-01-2025.pdf
