
A Monolingual Data Based Code Mixed Text To Speech System

Abstract: The present invention relates to a monolingual data based code-mixed Text-to-Speech system and a method thereof. The Text-to-Speech system is based on a model comprising an encoder and a decoder that converts monolingual Hindi or English data into output speech in a target speaker’s voice. The model is first trained on single-speaker English data in Roman script (150). The model is further trained on multi-speaker data from a pool of English and Hindi data in Devanagari script (160). Since the model is already pre-trained on large amounts of Hindi and English data, the decoder is fine-tuned to generate output speech in a target speaker’s voice (170). During the fine-tuning process, only the decoder is trained, on the target speaker’s Hindi and English data, and the number of weights to be trained is reduced because the encoder is frozen and is not trained during fine-tuning. (Figure 2)


Patent Information

Application #
Filing Date
17 November 2023
Publication Number
52/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

FLIPKART INTERNET PRIVATE LIMITED
Buildings Alyssa, Begonia & Clover, Embassy Tech Village, Outer Ring Road, Devarabeesanahalli Village, Bengaluru - 560103, Karnataka, India

Inventors

1. JOSHI, Raviraj
Flat A-1104, Polite Harmony, Near Sane Chowk, Chikhali, Pune-411062, Maharashtra, India
2. GARERA, Nikesh
I-1638 Brigade Cosmopolis, 286 Whitefield Main Road, Bangalore-560066, Karnataka, India

Specification

Description:

FIELD OF INVENTION

[001] The present invention relates to a Text-to-Speech (TTS) system. Particularly, the present invention relates to a code-mixed Text-to-Speech (TTS) system. More particularly, the present invention relates to a code-mixed TTS conversion using only monolingual data.

BACKGROUND OF THE INVENTION
[002] Text-to-Speech (TTS) systems convert written text into spoken words. TTS systems provide enhanced accessibility to users with disabilities such as visual impairment and offer an immersive reading experience to different types of users. These systems are crucial in making information available across different formats and in improving inclusivity in communication.
[003] There are many known Text-to-Speech (TTS) systems, such as concatenative, formant, and parametric systems. Concatenative systems use pre-recorded human speech units, formant systems synthesize speech from modeled articulatory parameters, and parametric systems generate speech from mathematical models. The prior known Text-to-Speech systems have limitations, including unnatural intonation, pronunciation errors, and difficulty with complex linguistic nuances. Furthermore, owing to the lack of emotional expressiveness and to context-dependent errors in speech, achieving human-like prosody and understanding remains challenging in the prior known systems.
[004] Code-mixing refers to the mixing of phrases, words, and morphemes of one language into another language (Myers-Scotton 1997). Code-mixed Text-to-Speech systems are quite complex, primarily because of differences in the meanings of terms and phrases, the multiple pronunciation rules of the languages being mixed, and the lack of resources for language retrieval and speech recognition.
[005] Reference may be made to US Patent No. US 11,514,887 B2, which discloses a text-to-speech synthesis method using machine learning. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text reflecting the articulatory feature of the speaker by inputting the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model. However, such a single-language text-to-speech synthesis method does not provide code-mixing of more than one language.
[006] Single-language TTS systems do not work well for code-mixed text. The first component in a TTS system is the grapheme-to-phoneme (G2P) module. An English G2P model maps Hindi phonemes to the closest English phonemes, which may not result in natural pronunciation for Hindi words. For example, when a language model tries to read words of an unknown language written in a model-favorable script, such as the code-mixed text "Lets go to my ghar" written in Roman script, a single-language TTS model may pronounce the Hindi word "ghar" as "garr".
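As a hedged illustration of this failure mode, the toy sketch below (the lexicon, fallback table, and naive_g2p() helper are all hypothetical, not any production G2P) shows how an English-only G2P with a letter-level fallback yields the "garr"-like pronunciation:

```python
# Toy illustration (hypothetical lexicon and mapping, not a production G2P):
# an English-only lexicon has no entry for the Hindi word "ghar", so a crude
# letter-by-letter fallback maps it to the nearest English (ARPAbet) phonemes.
ENGLISH_LEXICON = {
    "lets": ["L", "EH", "T", "S"],
    "go":   ["G", "OW"],
    "to":   ["T", "UW"],
    "my":   ["M", "AY"],
}

LETTER_FALLBACK = {"g": "G", "h": "HH", "a": "AA", "r": "R"}

def naive_g2p(word: str) -> list[str]:
    """Return lexicon phonemes, or a per-letter fallback for unknown words."""
    if word in ENGLISH_LEXICON:
        return ENGLISH_LEXICON[word]
    return [LETTER_FALLBACK.get(ch, ch.upper()) for ch in word]

for w in "lets go to my ghar".split():
    print(w, naive_g2p(w))
# "ghar" -> ['G', 'HH', 'AA', 'R'], i.e. roughly "garr", not the Hindi /ghər/
```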
[007] Building a code-mixed Text-to-Speech system requires a corpus of code-mixed recordings for training. However, such recordings are rarely available in practice due to the focus on a single language. Hence, the lack of datasets is a hindrance to the development of such systems.
[008] Hence, a need arises to devise an improved system and method for code-mixed Text-to-Speech synthesis using a single-script transliteration-based approach, making the system simpler, more economical, and less complicated.

OBJECTIVES OF THE INVENTION

[009] The primary objective of the present invention is to provide a method for code-mixed Text-to-Speech synthesis.
[0010] Another objective of the present invention is to build a Hindi-English code-mixed or code-switched TTS system.
[0011] Still another objective of the present invention is to utilize a single script transliteration-based approach to build a bilingual system.
[0012] Other objectives and advantages of the present invention will become apparent from the following description taken in connection with the accompanying drawings, wherein, by way of illustration and example, the aspects of the present invention are disclosed.

SUMMARY OF THE INVENTION

[0013] The present invention relates to a monolingual data based code-mixed Text-to-Speech system and a method thereof. Ideally, code-mixed recordings are required for training such a code-mixed or code-switched TTS system. However, such recordings are rarely available in practice due to the focus on a single language. To solve the problem of the lack of datasets, a data-oriented approach is proposed that utilizes monolingual data from two languages. The present invention provides a Text-to-Speech model comprising an encoder and a decoder that converts monolingual Hindi or English data into output speech in a target speaker’s voice. The model is first trained on single-speaker English data in Roman script. The model is further trained on multi-speaker data from a pool of English and Hindi data in Devanagari script. Since the model is already pre-trained on large amounts of Hindi and English data, the decoder is fine-tuned to generate output speech in a target speaker’s voice. During the fine-tuning process, only the decoder is trained, on the target speaker’s Hindi and English data; the number of weights to be trained is thereby reduced because the encoder is frozen and is not trained during fine-tuning.

BRIEF DESCRIPTION OF DRAWINGS

[0014] A complete understanding of the present invention may be obtained by reference to the accompanying drawings, when taken in conjunction with the detailed description thereof and in which:

[0015] Figure 1 illustrates a method for creating a code-mixed Text-to-Speech (TTS) using monolingual Hindi and English data.

[0016] Figure 2 illustrates a training module of the code-mixed TTS system.

DETAILED DESCRIPTION OF THE INVENTION

[0017] The following description describes various features and functions of the disclosed system and method with reference to the accompanying figures. In the figures, similar symbols identify similar components, unless context dictates otherwise. The illustrative aspects described herein are not meant to be limiting. It may be readily understood that certain aspects of the disclosed system and method can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein.

[0018] Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.

[0019] Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.

[0020] The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention.

[0021] It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.

[0022] It should be emphasized that the term “comprises/comprising”, when used in this specification, is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. The equations used in the specification are only for computation purposes.

[0023] The terms “module” and “corpus” used herein denote a software or hardware component; the “corpus” or “audio corpus” performs the specific role of storing audio data corresponding to English and Hindi data, recorded in the voices of one or more artist speakers, and may include one or more datasets of pre-recorded audio samples. However, the meaning of “module” and “corpus” or “audio corpus” is not limited to software or hardware. It may be a combination of software, hardware, or firmware. The “module” may be configured to operate in conjunction with a generic or a specific processing unit to execute instructions to carry out the functioning of the present invention. Other hardware or software components may be utilized to implement the present invention.

[0024] The systems and methods disclosed in this invention may be implemented in hardware, software, firmware and/or any combination thereof. For example, a processor such as a CPU, GPU, or any other processing unit, which may be implemented by different types of electronic components such as logic circuits, microprocessors, integrated circuits, microcontrollers, etc., may be used in the present invention. In a non-limiting example, the model disclosed in the present invention may be implemented on a configuration of an Intel Xeon (Skylake, IBRS) CPU with 72 cores, an Nvidia A100 Tensor Core GPU with 40 GB VRAM, and 350 GB of RAM, along with other auxiliary hardware and software components. However, the model may also be implemented with other CPU/GPU and memory configurations.

[0025] Software may include, but not limited to Application Programming Interface (API) that enables different software components to communicate with each other, whereby internet-based web or mobile applications can access or request remote web services through their APIs. In a non-limiting example, the model of the present invention may be exposed with an API endpoint that is configured to connect with or hit from the Flipkart mobile application having a voice assistant feature. The API can also be hit from the customer experience voice bot.
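
As a minimal, hedged sketch of such an exposure (the /tts route and the synthesize() stub below are illustrative and are not Flipkart's actual API), a model can be served over HTTP as follows:

```python
# Minimal sketch of serving a TTS model behind an HTTP endpoint so that web
# or mobile clients can request speech; the route and stub are illustrative.
from fastapi import FastAPI, Response
from pydantic import BaseModel

app = FastAPI()

class TTSRequest(BaseModel):
    text: str            # code-mixed input text
    speaker_id: int = 0  # target speaker selected during fine-tuning

def synthesize(text: str, speaker_id: int) -> bytes:
    """Stub standing in for Tacotron 2 + WaveGlow inference."""
    return b""  # a real implementation would return 16-bit PCM WAV bytes

@app.post("/tts")
def tts(req: TTSRequest) -> Response:
    return Response(content=synthesize(req.text, req.speaker_id),
                    media_type="audio/wav")
```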

[0027] Accordingly, a Text-to-Speech (TTS) system may be provided for e-commerce mobile or web applications. Such e-commerce applications include a voice assistant and a customer experience bot. The text to be spoken in these applications has inherent code-mixing of two languages.

[0028] The present invention also provides a method to create such a Text-to-Speech (TTS) system that works well in code-mixed use cases. The present invention uses two languages, a primary language and a secondary language, in which terms and phrases of the secondary language are code-mixed into the primary language.

[0029] In an embodiment, the primary language is Hindi and the secondary language is English. Code-mixed data may be generated and is required to be converted into output speech in a target language. For example, "Mafi chahate hai, par aapke product BrandX Super Protection Tape ko wapis nahi kiya jaa sakta hai" is code-mixed data that may be converted into speech in the Hindi language. Since product names and service terminologies are in English, such code-mixed text is very common. According to the present invention, a novel approach has been designed to build a code-mixed TTS system using only monolingual Hindi and English data.

[0030] The present invention implements one or more features of the code-mixed TTS system using one or more non-limiting examples. Examples of data for training the code-mixed TTS system include (i) English data in Roman script and its transliteration in Devanagari script, (ii) Hindi data in Devanagari script, and (iii) purely mixed English and Hindi data in Devanagari script, as illustrated in the example sentences below.

[0031] In an embodiment, an audio corpus may be provided, comprising a plurality of audio samples in the primary language and the secondary language. The audio samples are studio recordings of the voices of one or more speakers uttering text in the primary or secondary language. In another embodiment, the audio corpus is a multi-speaker Hindi corpus containing audio samples in Hindi recorded in the voices of one or more speakers. The voice of an artist corresponding to the primary language and/or the transliterated secondary language for a mixed text in Devanagari script may be the target voice.

[0032] The audio corpus may be stored on a local hard drive attached to a computing system or a server in a compatible format, for example, raw .wav format. It may be appreciated by a person skilled in the art that the audio data may be stored in other formats as well. All the audio data may be re-sampled at 16 kHz and further encoded in 16-bit PCM wav format for training and inference. Such datasets may be used for training one or more modules.
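
A minimal preprocessing sketch consistent with this corpus format, assuming the librosa and soundfile libraries and illustrative file paths, is shown below:

```python
# Resample every recording to 16 kHz and re-encode it as 16-bit PCM WAV,
# matching the corpus format described above; the paths are illustrative.
import librosa
import soundfile as sf

def preprocess(in_path: str, out_path: str, sr: int = 16000) -> None:
    audio, _ = librosa.load(in_path, sr=sr, mono=True)  # load and resample
    sf.write(out_path, audio, sr, subtype="PCM_16")     # 16-bit PCM encoding

preprocess("corpus/raw/speaker1_0001.wav", "corpus/16k/speaker1_0001.wav")
```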

[0033] Figure 1 represents an embodiment of the present invention, wherein a neural model is used for converting code-mixed data into speech. The conversion of code-mixed data into speech is performed by two main modules: (i) a transliteration module; and (ii) a training module. The functioning of the training module is explained in Figure 2.

[0034] In an embodiment, the neural model is a pre-trained model that encodes the language representation, such as characters, words, or sentences, so that one or more embedding vectors may be used for other tasks. In the context of Natural Language Processing (NLP) and deep learning models, especially for tasks involving text data, the embedding layer is a fundamental component that is used to convert categorical variables, such as words or tokens in text, into dense vectors of real numbers, often referred to as embeddings. These embeddings capture semantic relationships between words and allow the model to learn and understand the meaning of words in a continuous vector space. It is well known in the art that an embedding layer is a type of hidden layer in a neural network that maps input information from a high-dimensional space to a lower-dimensional space.
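
The role of the embedding layer can be illustrated with a short PyTorch sketch (the vocabulary and embedding sizes are illustrative, not the model's actual hyperparameters):

```python
# An embedding layer maps discrete character/token IDs to dense vectors in a
# lower-dimensional continuous space, as described above.
import torch
import torch.nn as nn

vocab_size, embed_dim = 80, 512              # illustrative sizes
embedding = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([[12, 45, 7, 3]])   # one encoded input sequence
vectors = embedding(token_ids)
print(vectors.shape)                         # torch.Size([1, 4, 512])
```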

[0035] Some of the features of the present invention may be implemented using a two-stage Text-to-Speech synthesis model based on the Tacotron 2 and WaveGlow architectures. Tacotron 2 is a neural network architecture that generates Mel-spectrogram frames directly from input text using an encoder-decoder architecture, whereas WaveGlow is a flow-based model that utilizes Mel-spectrogram frames to generate output speech. It can be appreciated by a person skilled in the art that other architectures used in existing Text-to-Speech systems are also available, for example, FastSpeech 2, FastPitch, Transformer-TTS, MelGAN, HiFi-GAN, StyleMelGAN, etc.

[0036] In some embodiments, a Tacotron 2 speech synthesis model may be implemented to generate a Mel-spectrogram from the input text. Tacotron 2 is a neural network architecture for Text-to-Speech synthesis and mainly includes a recurrent sequence-to-sequence feature prediction network that predicts a sequence of Mel-spectrogram frames from an input character sequence. It consists of an encoder, which creates an internal representation of the input character sequence, and a decoder, which turns this representation into a Mel-spectrogram.

[0037] In some embodiments, a WaveGlow generative model may be implemented to generate speech output from a Mel-spectrogram. WaveGlow is a flow-based, modified WaveNet vocoder that generates time-domain waveform samples conditioned on the predicted Mel-spectrogram frames. WaveGlow utilizes a single network and can be trained using only a single cost function, which makes the training procedure simple and stable.
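
The two-stage pipeline can be exercised end to end using NVIDIA's published torch.hub recipe with stock pretrained English checkpoints (these are not the fine-tuned code-mixed model of the present invention, and the snippet assumes a CUDA GPU and network access):

```python
# Two-stage inference: Tacotron 2 (text -> Mel-spectrogram) followed by
# WaveGlow (Mel-spectrogram -> waveform), using NVIDIA's stock checkpoints.
import torch

hub = "NVIDIA/DeepLearningExamples:torchhub"
tacotron2 = torch.hub.load(hub, "nvidia_tacotron2", model_math="fp16")
tacotron2 = tacotron2.to("cuda").eval()
waveglow = torch.hub.load(hub, "nvidia_waveglow", model_math="fp16")
waveglow = waveglow.remove_weightnorm(waveglow).to("cuda").eval()
utils = torch.hub.load(hub, "nvidia_tts_utils")

sequences, lengths = utils.prepare_input_sequence(["Your order has been delivered."])
with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # stage 1: spectrogram
    audio = waveglow.infer(mel)                      # stage 2: waveform
print(audio.shape)  # the stock checkpoints synthesize 22,050 Hz audio
```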

[0038] In another embodiment of the present invention, a monolingual code-mixed Text-to-Speech System implements the following modules:

I. Transliteration Module: A transliteration module is utilized to first convert the English data into Devanagari script. The combined Hindi data and transliterated English data in Devanagari script are used to train the model.
II. Training Module: The training module is first pre-trained on English data in Roman script and further trained on purely mixed language data in Devanagari script using the multi-speaker audio corpus, followed by a fine-tuning step. During the fine-tuning process, the encoder is frozen, and the decoder is fine-tuned for a target speaker of choice.

[0039] The transliteration module utilizes rule-based mappings, for example, dictionary lookup, that convert the script of a source language into the script of a target language. In one embodiment, the transliteration module transliterates the Roman script of the source language, English, into the Devanagari script of the target language, Hindi. In Fig. 1, reference numeral 120 illustrates one or more types of language data used during the training and testing scenarios. Reference numeral 130 illustrates the training scenario. During training, both English and Hindi data in their original scripts, for example, Roman script for English data and Devanagari script for Hindi data, together with the transliterated English data in Devanagari script, are utilized to train the model. Reference numeral 140 illustrates the testing scenario, where pure code-mixed data is utilized to test the model.
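
A toy sketch of such a dictionary lookup is given below (the word mappings are illustrative; a real module would rely on exhaustive mapping rules or a trained transliteration model):

```python
# Dictionary-lookup transliteration of Roman-script English words into
# Devanagari; unknown words pass through unchanged. Mappings are illustrative.
ROMAN_TO_DEVANAGARI = {
    "product":  "प्रोडक्ट",
    "order":    "ऑर्डर",
    "delivery": "डिलीवरी",
}

def transliterate(sentence: str) -> str:
    """Replace each known English word with its Devanagari rendering."""
    return " ".join(ROMAN_TO_DEVANAGARI.get(w, w) for w in sentence.lower().split())

print(transliterate("product delivery"))  # -> प्रोडक्ट डिलीवरी
```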

[0040] The transliteration module transliterates English data into Devanagari script. This can be easily understood with the help of the following examples, which illustrate pure English training sentences in Roman script transliterated into Devanagari script:
i. The English sentence “by a similar processed produced the block books, which were the immediate predecessors of the true printed book” is transliterated in Devanagari script as “बाय अ सिमिलर प्रोसेस्ड प्रोड्यूस्ड द ब्लॉक बुक्स, व्हिच वर द इमीडियट प्रेडेसेसर्स ऑफ़ द ट्रू प्रिंटेड बुक”.
ii. The English sentence “the invention of movable metal letters in the middle of the fifteenth century may justly be considered as the invention of the art of printing” is transliterated in Devanagari script as “द इन्वेंशन ऑफ़ मूवेबल मेटल लेटर्स इन द मिडल ऑफ़ द फिफ्टींथ सेंचुरी मे जस्टली बी कंसिडर्ड ऐज़ द इन्वेंशन ऑफ़ द आर्ट ऑफ़ प्रिंटिंग”.
iii. The English sentence “and it is worth mention in passing that, as an example of fine typography” is transliterated in Devanagari script as “एंड इट इज़ वर्थ मेंशन इन पासिंग दैट, ऐज़ एन एग्ज़ाम्पल ऑफ़ फाइन टाइपोग्राफी”.
iv. The English sentence “the earliest book printed with movable types, the gutenberg, or "forty-two line bible" of about fourteen fifty-five” is transliterated in Devanagari script as “द अर्लिएस्ट बुक प्रिंटेड विद मूवेबल टाइप्स, द गुटेनबर्ग, ऑर "फोर्टी-टू लाइन बाइबल" ऑफ़ अबाउट फोर्टीन फिफ्टी-फाइव”.

[0041] Some examples are provided that illustrate pure Hindi training sentences in Devanagari script:
i. ???? ??? ?? ????
ii. ?? ???? ?? ??? ???
iii. ????? ??? ?????? ?? ??????? ??? ?? ?? ???? ?? ????? ???? ??? ???
iv. ?? ???????? ?? ?????????, ?????? ???? ?? ?????? ?? ??????? ???? ?? ???? ???? ????? ??? ???? ???? ??? ????

[0042] Some more examples illustrate test/runtime code-mixed sentences in Devanagari script:
i. ??????! ??? ???? ???????? ?????? ??????????
ii. ??? ??????, ???? ????? ??? ?????? ?? ??? ??? ???? ??? ?? ????? ?? ????? ???? ???? ????? ???? ?????? ???? ???? ?????
iii. ???? ??? ?? ??? ?? ?? ?????? ????? ???? ???, ????? ???? ????? ?? ?? ???? ???-?? ????? ?? ???? ??? ??? ?? ??? ????
iv. ?? ?? ?? ?????? ?????? ?? ??? ??? ?? ?? ????? ?? ??? ???? ???? ???? ?????? ?? ?????????? ???? ??????? ??????? ?? ??? ??? ???

[0043] Figure 2 illustrates the training module. The training module comprises an encoder and a decoder and implements a three-step process. In Fig. 2, reference numeral 150 illustrates the first step, wherein the model is trained on single-speaker English language data; once the model is trained with said data, the embedding layer that learned the representations of the Roman characters is discarded. In the second step (160), the model is further trained on the multi-speaker data from a pool of English and Hindi speakers that may be stored in the audio corpus. In the third step (170), the text encoder weights are frozen such that the encoder is not trained during fine-tuning of the decoder. The decoder is fine-tuned by training it on a target speaker’s Hindi and English data, so that a target speaker can be selected and the output provided in the voice of a single speaker artist.

[0044] In another embodiment, the Text-to-Speech system provides a model for the conversion of monolingual Hindi or English data into output speech, wherein a text encoder first processes the input text and a decoder further utilizes the processed input to generate an output audio spectrogram. Since the model is already pre-trained on large amounts of English and Hindi data, the model is further configured to freeze the encoder weights. Freezing means discontinuing the training of the encoder while executing the fine-tuning process in the decoder. Fine-tuning means that the decoder is trained on a target speaker’s Hindi and English data and adapted to provide the target speaker’s voice. Since the decoder has been trained on multiple voices, it needs to be adapted to the final target voice. During the fine-tuning process, the encoder is not trained, and the decoder is configured to provide an output targeting a particular speaker’s voice. Freezing the encoder reduces the number of weights to be trained, as the encoder weights, which account for about three-fifths of the total weights, are skipped. This reduces overfitting to the target speaker.
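
A minimal PyTorch sketch of this freezing step follows, using a tiny stand-in for the encoder-decoder network (the module sizes are illustrative, not the actual Tacotron 2 dimensions):

```python
# Freeze the text encoder and fine-tune only the decoder, as described above.
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(512, 256, batch_first=True)  # text encoder stand-in
        self.decoder = nn.LSTM(256, 80, batch_first=True)   # Mel decoder stand-in

model = TinyTTS()

# Encoder weights are skipped (frozen) during fine-tuning.
for p in model.encoder.parameters():
    p.requires_grad = False

# Optimize only the decoder on the target speaker's Hindi and English data.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

n_total = sum(p.numel() for p in model.parameters())
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable fraction: {n_trainable / n_total:.2f}")
```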

[0045] The present invention provides a technical solution to the problem of how to build a code-mixed Text-to-Speech synthesis system that provides output speech using a single-script transliteration-based approach. The present invention provides a technical solution for converting bilingual text inputs into monolingual speech output. Thus, as described in the present invention, a TTS system is provided that performs bilingual text to monolingual speech conversion under extremely low-resource constraints.

[0046] In an embodiment, the present invention discloses a monolingual Text-to-Speech system comprising: a first database for receiving data in a primary language; a second database for receiving data in a secondary language; an audio corpus comprising a plurality of audio samples in the primary language and the secondary language of one or more speakers; a transliteration module adapted to receive the data from the second database; and a training module adapted to process the data received from the first database, the transliteration module, and the audio corpus.

[0047] In the embodiments, the transliteration module is configured to transliterate the data received from the second database into a target script format data.

[0048] In the embodiments, the training module comprises an encoder and a decoder, wherein the training module is configured to train with the secondary language and the audio sample corresponding to the secondary language, and the encoder is further configured to discard the embedding weights associated with the secondary language.

[0049] In the embodiments, the training module is further configured to train with the primary language and the target script format data, together with one or more audio samples of one or more speakers corresponding to the primary language and the script format data.

[0050] In the embodiments, the training module provides fine tuning of the decoder, wherein the fine tuning includes training the decoder to select the audio sample of a target speaker from the one or more speakers and discontinuing the training of the encoder.

[0051] Further, as per the embodiment, the primary language is the Hindi language, the secondary language is selected from the group of languages consisting of British English, Middle English, American English, Old English, Scottish English, International English, and Canadian English, and the target script format is Devanagari script.

[0052] According to an aspect of the invention, the converted output speech is emitted by a playback means operably coupled to the processor, wherein the playback means is selected from, but not limited to, speakers, mobile speakers, wireless speakers, Bluetooth speakers, sound bars, etc.

[0053] In an embodiment, a method of operation of the monolingual text-to-speech system may be implemented. The method includes at least the following steps, which are consolidated in the sketch after the list:
i. receiving data in a primary language through a first database;
ii. receiving data in a secondary language through a second database;
iii. transliterating the data received by the second database into a target script format data;
iv. training by an encoder in a training module with the data received from the second database and an audio sample corresponding to the secondary language from an audio corpus;
v. discarding one or more embedding weights associated with the secondary language by the encoder;
vi. training the training module with the data received from the first database, the script format data, and one or more audio samples corresponding to the primary language and the target script format data from the audio corpus; and
vii. providing, by the training module, fine tuning of a decoder, wherein the fine tuning includes training the decoder to select the audio sample of a target speaker from the one or more speakers and discontinuing the training of the encoder.
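
A hedged, consolidated sketch of steps (i) to (vii) follows; every helper is a stub standing in for the modules described above, and only the order of operations is meaningful:

```python
# Stub pipeline showing the order of steps (i)-(vii); none of these helpers
# is the actual implementation, and the data literals are placeholders.
def transliterate_to_devanagari(texts):            # transliteration module
    return [f"<devanagari:{t}>" for t in texts]

def train(model, text_data, audio_data, freeze_encoder=False):
    if freeze_encoder:
        pass  # encoder weights held fixed; only the decoder is updated
    return model

model = "tts-model"                                 # stand-in for the network
hindi_text = ["<Hindi sentence in Devanagari>"]     # (i) first database
english_text = ["your order has been delivered"]    # (ii) second database
english_dev = transliterate_to_devanagari(english_text)                 # (iii)

model = train(model, english_text, "english_audio_corpus")              # (iv)
# (v) the Roman-character embedding weights are discarded at this point
model = train(model, hindi_text + english_dev, "multi_speaker_corpus")  # (vi)
model = train(model, ["<target speaker data>"], "target_speaker_audio",
              freeze_encoder=True)                                      # (vii)
```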

[0054] In an embodiment, the present invention may include a computer program product that may comprise computer-readable instructions for causing a processor to carry out aspects of the present invention on a computer or a processing device.

[0055] While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
Claims:

WE CLAIM:

1. A monolingual text-to-speech system comprising:
i. a first database for receiving data in a primary language;
ii. a second database for receiving data in a secondary language;
iii. an audio corpus comprising a plurality of audio samples in the primary language and the secondary language of one or more speakers;
iv. a transliteration module adapted to receive the data from the second database; and
v. a training module adapted to process the data received from the first database, the transliteration module, and the audio corpus,
wherein
the transliteration module is configured to transliterate the data received from the second database into a target script format data; and
the training module comprises an encoder and a decoder, wherein
• the training module is configured to train with the secondary language and the audio sample corresponding to the secondary language, and the encoder is further configured to discard embedding weights associated with the secondary language,
• the training module is further configured to train with the primary language, the script format data, and one or more audio samples corresponding to the primary language and the script format data, and
• the training module provides fine tuning of the decoder, wherein the fine tuning includes training the decoder to select the audio sample of a target speaker from the one or more speakers and discontinuing the training of the encoder.

2. The system as claimed in claim 1, wherein the primary language is Hindi language.

3. The system as claimed in claim 1, wherein the secondary language is selected from the group of languages including British English, Middle English, American English, Old English, Scottish English, International English, and Canadian English.

4. The system as claimed in claim 1, wherein the target script format data is Devanagari script.

5. The system as claimed in claim 1, wherein the audio sample of the target speaker is the output audio corresponding to the code-mixed input.

6. A method of operation of the monolingual text-to-speech system as claimed in claim 1, wherein the method comprises the steps of:
i. receiving data in a primary language through a first database;
ii. receiving data in a secondary language through a second database;
iii. transliterating the data received by the second database into a target script format data;
iv. training by an encoder in a training module with the data received from the second database and an audio sample corresponding to the secondary language from an audio corpus;
v. discarding one or more embedding weights associated with the secondary language by the encoder;
vi. training the training module with the data received from the first database, the script format data, and one or more audio samples corresponding to the primary language and the target script format data from the audio corpus; and
vii. providing, by the training module, fine tuning of a decoder, wherein the fine tuning includes training the decoder to select the audio sample of a target speaker from the one or more speakers and discontinuing the training of the encoder.

7. The method as claimed in claim 6, wherein the first database receives Hindi language data.

8. The method as claimed in claim 6, wherein the second database receives data in a language selected from the group including British English, Middle English, American English, Old English, Scottish English, International English, and Canadian English.

9. The method as claimed in claim 6, wherein the target script format data is Devanagari script.

10. A computer program product comprising computer-readable instructions for implementing the method of claim 6 on a computer or a processing device.

Documents

Application Documents

# Name Date
1 202341078244-STATEMENT OF UNDERTAKING (FORM 3) [17-11-2023(online)].pdf 2023-11-17
2 202341078244-REQUEST FOR EXAMINATION (FORM-18) [17-11-2023(online)].pdf 2023-11-17
3 202341078244-REQUEST FOR EARLY PUBLICATION(FORM-9) [17-11-2023(online)].pdf 2023-11-17
4 202341078244-PROOF OF RIGHT [17-11-2023(online)].pdf 2023-11-17
5 202341078244-POWER OF AUTHORITY [17-11-2023(online)].pdf 2023-11-17
6 202341078244-FORM-9 [17-11-2023(online)].pdf 2023-11-17
7 202341078244-FORM 18 [17-11-2023(online)].pdf 2023-11-17
8 202341078244-FORM 1 [17-11-2023(online)].pdf 2023-11-17
9 202341078244-DRAWINGS [17-11-2023(online)].pdf 2023-11-17
10 202341078244-DECLARATION OF INVENTORSHIP (FORM 5) [17-11-2023(online)].pdf 2023-11-17
11 202341078244-COMPLETE SPECIFICATION [17-11-2023(online)].pdf 2023-11-17
12 202341078244-FER.pdf 2025-05-13
13 202341078244-FER_SER_REPLY [24-06-2025(online)].pdf 2025-06-24
14 202341078244-COMPLETE SPECIFICATION [24-06-2025(online)].pdf 2025-06-24
15 202341078244-CLAIMS [24-06-2025(online)].pdf 2025-06-24

Search Strategy

1 202341078244E_21-06-2024.pdf