Abstract: SYSTEM AND METHOD FOR CREATING AND USING PHONETIC GLOSSARY IN TEXT TO SPEECH GENERATION ABSTRACT Disclosed is a system and method for creating and using a phonetic glossary in text to speech generation. The system and method involve a client device and an application server connected via a communication network. The system is configured to replace the audio track of a multimedia file with speech in a desired target language with accurate pronunciation, using a phonetic glossary created during the preprocessing stage of the audio file. The phonetic glossary is created by the client device by receiving text input from the user. This includes a library containing the actual texts for words in the target language and combinations of texts indicating the pronunciation of the corresponding words. The text indicating the desired pronunciation of a word replaces all the similar words in the text data, which is further given for text to speech conversion by the audio generator in the application server. Ref. fig. 1
DESC:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2005
COMPLETE SPECIFICATION
(See section 10, rule 13)
1. TITLE OF THE INVENTION:
SYSTEM AND METHOD FOR CREATING AND USING PHONETIC GLOSSARY IN TEXT TO SPEECH GENERATION
2. APPLICANT
(a) Name: Rikaian Technology Pvt. Ltd.
(b) Nationality: An Indian Company
(c) Address:
Office No. 3, S. No. 846,
Near Marathwada College, Shivajinagar,
Pune 41100, Maharashtra, India
3. PREAMBLE TO THE DESCRIPTION
PROVISIONAL
The following specification describes the invention.
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present invention relates to information technology and more particularly, the present invention relates to a system and method for creating and using phonetic glossary in text to speech generation for audio editing in a media file that can be used for all the languages wherever digital voice is supported and for pronouncing the words as expected.
BACKGROUND OF THE INVENTION
Today, text to speech services from providers such as Google, Microsoft, or Amazon are very common. These services are exposed as APIs and can be integrated into various products for generating digital or machine voice. However, an issue arises when reproducing the utterance sound of the target language and using it for communication.
All these services are based on sophisticated AI/ML techniques, yet in some cases the digital voice, also known as machine-generated voice, does not pronounce the words as expected. Various factors contribute to this variation in pronunciation, such as the region where the language is spoken, the data used to train the machine learning algorithm, etc. The result is pronunciation that is not as expected.
An automatic dubbing method disclosed in US2023076258A1 comprises, responsive to receiving a selection of media content for playback on a user device by a user, processing the extracted speeches of a first voice from the media content to generate replacement speeches using a set of phonemes of a second voice of the user of the user device, and replacing the extracted speeches of the first voice with the generated replacement speeches in the audio portion of the media content for playback on the user device. Here, a first voice is extracted from an audio portion of media content, and a voice print model of a second voice of a user is generated. Responsive to receiving a selection of the media content for playback on the user device, the device processes the extracted speeches by utilizing the voice print model to generate replacement speeches, and replaces the extracted speeches of the first voice with the generated speeches in the audio portion of the media content for playback on the user device. In one of the embodiments, a text processing module and a voice position tracking module are used by the audio processing module to enhance the dubbing process. However, in this process, no preprocessing of the dubbed voice is performed prior to speech conversion. This may result in a speech output that does not accurately reflect the intended pronunciation.
Accordingly, there exists a need to provide a system and method for creating and using phonetic glossary in text to speech generation for audio editing in a media file that would eliminate the issues associated with pronunciation and accent of the audio.
OBJECTS OF THE INVENTION
An object of the present invention is to generate audio data in the selected language in accordance with the pronunciation of the translated subtitles in a media file.
Yet another object of the present invention is to produce voice information in accordance with the phonetic notation of input scripts in the selected language, so that a user can experience the selected language in its actual pronunciation.
Yet another object of the present invention is to enable the creation of a phonetic glossary and a repository of frequently repeated words in a media file by receiving inputs from the user.
Yet another object of the present invention is to provide a technique that can automatically generate audio corresponding to the subtitles of a media file in any language.
Yet another object of the present invention is to provide a system and method for creating and using phonetic glossary in text to speech generation.
SUMMARY OF THE INVENTION
A system for creating and using phonetic glossary in text to speech generation comprises at least one client device operably connected to a communication network via a communication module, and an application server configured within at least one server device. The client device has at least one processor functionally coupled to the communication module, at least one storage unit, and a plurality of input/output devices, wherein the storage unit is configured with an application module that is communicatively coupled to the processor. The application server is configured within at least one server device having at least one processing unit operably coupled to at least one communication interface and at least one storage medium, and the application server is configured to be in communication with the application module in the client device via the communication network.
The application server is configured with a media file receiver, a subtitle generator, a subtitle editor containing a preprocessing and translator module, an audio generator, and a post processing and mixing module. The application server is configured to receive a media file from the client device via the media file receiver, generate subtitles from the audio data in a target language by a speech to text engine in the subtitle generator, edit the generated subtitles by the subtitle editor in communication with the client device, translate the generated subtitles by the translator module, produce an audio file with the required pronunciation in the target language as per the phonetic glossary generated during preprocessing of the subtitle file, generate the final audio file of the translated subtitles by the audio generator, create the final video file by mixing the video and the generated audio file by the post processing and mixing module, and provide the multimedia file to the client device.
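The sequence of server-side stages summarized above can be sketched as follows. This is a minimal illustration only; every function name here (transcribe, translate, apply_glossary, synthesize, mix) is a hypothetical stand-in for the corresponding module of the specification, and each stage is stubbed out rather than implemented.

```python
# Hypothetical sketch of the application-server pipeline: speech-to-text,
# translation, glossary preprocessing, text-to-speech, then mixing.

def transcribe(audio):
    # Speech-to-text stage (stubbed): returns subtitle text.
    return "hello world"

def translate(text, target_lang):
    # Translation stage (stubbed): identity for illustration.
    return text

def apply_glossary(text, glossary):
    # Preprocessing stage: swap words for their phonetic spellings.
    for word, phonetic in glossary.items():
        text = text.replace(word, phonetic)
    return text

def synthesize(text):
    # Text-to-speech stage (stubbed): returns a placeholder audio token.
    return f"<audio:{text}>"

def mix(video, audio, subtitles):
    # Post-processing stage (stubbed): bundle the three streams.
    return {"video": video, "audio": audio, "subtitles": subtitles}

def process_media(video, audio, target_lang, glossary):
    subtitles = transcribe(audio)
    translated = translate(subtitles, target_lang)
    phonetic_text = apply_glossary(translated, glossary)
    dubbed_audio = synthesize(phonetic_text)
    # Displayed subtitles keep the actual spelling; only the TTS input
    # carries the phonetic spelling.
    return mix(video, dubbed_audio, translated)

result = process_media("video.mp4", "audio.wav", "en", {"hello": "heh-LOH"})
```

Note that the phonetic substitution feeds only the synthesis stage; the on-screen subtitle text keeps its original spelling, which is the separation Figure 2 describes.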
BRIEF DESCRIPTION OF THE DRAWINGS
The objects and advantages of the present invention will become apparent when the disclosure is read in conjunction with the following figures, wherein
Figure 1 shows a functional block diagram of a system for creating and using phonetic glossary in text to speech generation in accordance with an embodiment of the present invention,
Figure 2 shows a functional block representation of preprocessing of text in a media file in a system for creating and using phonetic glossary in text to speech generation in accordance with an embodiment of the present invention,
Figure 3 shows a block representation of mixing of a plurality of files for a target media file in accordance with an embodiment of the present invention,
Figure 4 shows a flow diagram of method for creating and using phonetic glossary in text to speech generation in accordance with an embodiment of the present invention, and
Figure 5 shows a pictorial view of the phonetic text column of a system for creating and using phonetic glossary in text to speech generation in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The foregoing objects of the present invention are accomplished and the problems and shortcomings associated with the prior art, techniques and approaches are overcome by the present invention as described below in the preferred embodiments.
The present invention provides a system and method for creating and using phonetic glossary in text to speech generation. The system is configured to produce digital voice in a multimedia file by converting text data in the subtitles into speech using a phonetic glossary. Hence, by adjusting the text in the subtitles, one can influence the pronunciation, although the actual spelling may be different. The system supports all languages wherever digital voice is supported.
The present invention is illustrated with reference to the accompanying drawings, throughout which reference numbers indicate corresponding parts in the various figures. These reference numbers are shown in brackets in the following description.
In the following description, for the purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
Throughout this application, with respect to all reasonable derivatives of such terms, and unless otherwise specified (and/or unless the particular context clearly dictates otherwise), each usage of:
“a” or “an” is meant to read as “at least one.”
“the” is meant to be read as “the at least one.”
References in the present invention to “one embodiment” or “an embodiment” mean that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one of the embodiments” in various places in the specification are not necessarily all referring to the same embodiment.
Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components and may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, firmware and/or by human operators.
Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. A system and a method for practicing various embodiments of the present invention may involve one or more processors and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks.
If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
As used in the description herein and throughout the claims that follow, the meaning of "a", "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this invention will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
Parts of the description may be presented in terms of operations performed by at least one electrical/electronic circuit or a computer system, using terms such as data, state, link, fault, packet, and the like, consistent with the manner commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. As is well understood by those skilled in the art, these quantities take the form of data stored/transferred in the form of non-transitory, computer-readable electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through mechanical and electrical components of the computer system; and the term computer system includes general purpose as well as special purpose data processing machines, switches, and the like, that are standalone, adjunct or embedded. For instance, some embodiments may be implemented by a processing unit that executes program instructions so as to cause the processing unit to perform operations involved in one or more of the methods described herein.
The program instructions may be computer-readable code, such as compiled or non-compiled program logic and/or machine code, stored in a data storage unit that takes the form of a non-transitory computer-readable medium, such as a magnetic, optical, and/or flash data storage unit. Moreover, such processing unit and/or data storage may be implemented using a single computer system or may be distributed across multiple computer systems (e.g., servers) that are communicatively linked through a network to allow the computer systems to operate in a coordinated manner.
While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention, as described in the claims.
Referring to figures 1 to 5, a system for creating and using phonetic glossary in text to speech generation (100) for enhancing multimedia audio quality is provided in accordance with the present invention. The system (100) comprises at least one client device in communication with an application server via a communication network. The application server is configured within a server device and is coupled to the communication network via a network interface.
In an embodiment, the application server is a cloud-based application, and the connectivity between the client device and the server device is established in a client-server architecture.
In an embodiment, the server device of the present invention is particularly a cloud-based server. The cloud-based server may be communicatively coupled to the client device through API technology.
In an embodiment of the present invention, the client device comprises at least one processor operably coupled to at least one communication module, at least one storage unit, and a plurality of input/output devices. The storage unit is configured with an application module that is communicatively coupled to the processor. The application module, upon execution by the processor, is capable of communicating with the application server.
The server device has at least one processing unit operably coupled to at least one communication interface and at least one storage medium. The application server is configured to be in communication with the application module upon its execution by the processor in the client device. The communication interface operably connects the server device with the communication network and enables communication with a plurality of such client devices.
In an embodiment of the present invention, the application module in the client device is configured for receiving and translating the audio data in the multimedia file to a language selected by a user of the client device. The application module, in communication with the application server, is capable when executed of performing speech synthesis on a received multimedia file containing video, audio and subtitle information and replacing the audio information in accordance with the selection made by the user of the client device.
The application server is configured with a media file receiver for receiving multimedia file from a user of the client device, a subtitle generator for generating subtitle file from audio file, a subtitle editor containing a preprocessing and translator module for translating the text data corresponding to subtitle file, an audio generator for creating audio file from the processed text data, a post processing and mixing module for combining the video, generated audio file, and subtitle file. The application server is configured to receive a media file from the client device via the application module, receive selection of audio language and reproduce a multimedia file containing translated audio file of the selected language. This includes speech synthesis based on the plurality of texts provided in the form of recognizable words. These words are further converted into meaningful pronunciation in a preset language. In an exemplary embodiment, the pronunciation is generated by means of preset rules or through a dictionary of pronunciation. In addition, the application server also comprises a translator module that translates the words in accordance with the texts entered in a phonetic text column, when executed by the processing unit.
Referring to figure 1, the functional block diagram is shown in accordance with an embodiment of the system for creating and using phonetic glossary in text to speech generation in a multimedia file. The application module in the client device is configured with a user interface for communicating with a user. The client device allows uploading of a multimedia file that needs to be audio-edited. The multimedia file in the embodiment includes subtitle, audio speech, background music and video information. The user interface further allows the user to input a selection of language for the audio file. The subtitle text and the audio information are extracted for translation into the audio language (target language) as per the user selection. The application server receives this audio speech information and translates it into the target language by taking the subtitle information as a reference.
Referring to figure 2, the processing of the subtitle text data is shown in accordance with an embodiment of the present invention. The text data extracted from the subtitle file is preprocessed and translated to a target language by the translator module. The target language is selected by the user of the client device by means of a user interface provided by the application module. The preprocessing of the text data includes receiving inputs from a phonetic glossary created for the target language. In an embodiment, the phonetic glossary is created by the client device by receiving text input from the user. This includes a library containing the actual texts for words in the target language and combinations of texts indicating the pronunciation of the corresponding words. The text indicating the desired pronunciation of a word replaces all the similar words in the text data, which is further given for text to speech conversion by the audio generator in the application server. The preprocessing of the text data before conversion into audio format yields a reproduced multimedia file with enhanced accuracy in pronunciation. In one of the exemplary embodiments of the present invention, the application server generates voice information in accordance with the texts in a subsidiary subtitle created during the preprocessing of the text data.
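The glossary-driven replacement step described above can be illustrated with a short sketch. The function below is a hypothetical implementation, assuming the glossary is a simple mapping from actual spellings to phonetic spellings; it respects word boundaries so that glossary entries do not match inside longer words.

```python
import re

def apply_phonetic_glossary(text, glossary):
    """Replace each glossary word in `text` with its phonetic spelling.

    `glossary` maps actual spellings to texts that make a TTS engine
    produce the desired pronunciation. This is an illustrative sketch
    of the preprocessing step, not the invention's actual module.
    """
    if not glossary:
        return text
    # Longest entries first so multi-word phrases win over their parts;
    # \b keeps 'Pune' from matching inside e.g. 'Punekar'.
    pattern = re.compile(
        r"\b(" + "|".join(
            re.escape(word)
            for word in sorted(glossary, key=len, reverse=True)
        ) + r")\b"
    )
    return pattern.sub(lambda m: glossary[m.group(1)], text)

glossary = {"Rikaian": "ree-KAI-an", "Pune": "POO-nay"}
print(apply_phonetic_glossary("Rikaian Technology is based in Pune.", glossary))
# prints: ree-KAI-an Technology is based in POO-nay.
```

The output of this substitution is what would be handed to the audio generator, while the untouched text remains available for display as the on-screen subtitle.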
In one of the exemplary embodiments of the present invention, the phonetic glossary is created automatically in accordance with the language selected by the user of the client device.
Referring to figure 3, a representation of the mixing of a plurality of files to form a target media file in accordance with an embodiment of the present invention is shown. The generated subtitle file from the previous process is translated into a digital voice or human voice by means of a translator editor module. Further, the mixing module adds the background music to the translated audio information and then combines the translated subtitle file, generated audio file and source video file together to form a media file.
Referring to figure 4, a flow diagram of the method for creating and using phonetic glossary in text to speech generation is shown in accordance with an embodiment of the present invention. In an implementation scenario, the client device is configured with the application module for connecting with the application server, which provides a customized user interface for receiving an input multimedia file that the user may intend to play back in a desired language. The user interface allows the user of the client device to select a language from a list of languages. The application module further receives the multimedia file and provides it to the application server for further processing. The application server receives the multimedia file through the media file receiver. The user interface in the application module further receives the selection of language for the audio file to be played by the multimedia file. The subtitle file from the multimedia file is extracted by the subtitle generator in the application server. The generated subtitle file comprises text data that is translated into the user-selected language. The translated text data is further set for preprocessing. The user interface further allows the user of the client device to create a phonetic text library containing words of the selected language and combinations of texts indicating the pronunciation of the corresponding words. The text indicating the desired pronunciation of a word replaces all the similar words in the text data by the translator editor in the application server. The edited version of the subtitle text data is further given for text to speech conversion by the audio generator in the application server. This creates voice data with the required pronunciation for each of the user entries for the selected language. Finally, the application server combines the voice data of each audio segment to generate a complete audio file of the subtitles.
Further, a mixing module in the application server creates a multimedia file by mixing the video, subtitle file and generated audio file for playback. The multimedia file thus generated is delivered to the client device in a format selected by the user of the client device.
In one of the embodiments of the present invention, a method for creating and using phonetic glossary in text to speech generation is explained. In an implementation scenario, the client device is configured with the application module for connecting with the application server, which provides a customized user interface for receiving an input multimedia file that the user may intend to play back in a desired language. The user interface allows the user of the client device to select a language from a list of languages. The application module further receives the multimedia file via a media file receiver and provides it to the application server for further processing. Additionally, the user interface within the application module allows users to select the language for the audio file associated with the multimedia content. Subsequently, the application server extracts the subtitle file from the multimedia file using a subtitle generator therein. The generated subtitle file comprises a first subtitle text file of originally translated subtitle text data corresponding to the user-selected language and a second subtitle text file, which is a copy of the first subtitle text file kept for further preprocessing and audio generation. The user interface further allows the user of the client device to create a phonetic text library containing words of the selected language and combinations of texts indicating the pronunciation of the corresponding words via a recording text column in the user interface. The text indicating the desired pronunciation of a word in the recording text column replaces all the similar words in the text data of the second subtitle text file by the translator editor in the application server. The edited version of the second subtitle text file is further given for text to speech conversion by the audio generator in the application server.
This creates an audio file containing voice data with the desired pronunciation for each of the entries in the recording text column of the selected language. Further, the application server is configured to combine the voice data from each audio segment to produce a comprehensive audio file that encompasses the subtitles. Additionally, within the application server, a mixing module is employed to create a multimedia file by blending the video, subtitle file, and the previously generated audio file. This multimedia file is then ready for playback. Finally, the application server delivers the resulting multimedia file to the client device, according to the user's preference.
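The final blending of video, generated audio, and subtitles can be sketched as a command assembled for a standard tool such as ffmpeg. This is an illustrative assumption, not the invention's actual mixing module; the helper name and file paths below are hypothetical.

```python
import subprocess  # used only in the commented-out invocation below

def build_mix_command(video_path, dubbed_audio_path, subtitle_path, out_path):
    # Keep the original video stream, replace the audio track with the
    # generated dub, and attach the subtitles as a soft-subtitle stream.
    return [
        "ffmpeg", "-y",
        "-i", video_path,
        "-i", dubbed_audio_path,
        "-i", subtitle_path,
        "-map", "0:v", "-map", "1:a", "-map", "2:s",
        "-c:v", "copy",       # do not re-encode the video
        "-c:s", "mov_text",   # subtitle codec suitable for MP4 containers
        "-shortest",
        out_path,
    ]

cmd = build_mix_command("source.mp4", "dub.wav", "subs.srt", "final.mp4")
# subprocess.run(cmd, check=True)  # uncomment where ffmpeg is available
```

Copying the video stream (`-c:v copy`) keeps the mixing step fast, since only the audio and subtitle tracks change.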
In an implementation of the present invention, a speech to text engine in the subtitle generator in the application server extracts the audio from the multimedia file and converts it into a subtitle file.
In one of the embodiments of the present invention, the translation and preprocessing of the audio file are performed simultaneously, so that a user of the client device can increase the volume of the audio and adjust the duration of the audio segment to match the video file.
In one of the embodiments of the present invention, the translator module in the application server is also capable of receiving audio files from external sources.
In one of the embodiments of the present invention, the process of generating a phonetic text library is a manual process by receiving texts from a user via the client device.
In one of the exemplary embodiments of the present invention, the preprocessing and translator module in the application server device creates a repository of repeated words of the phonetic glossary for repeated use.
In one of the exemplary embodiments of the present invention, the phonetic glossary is built as an independent platform that is linked to the translator editor of the application server. Thus, the phonetic glossary may be handled by a registered user of application server that receives the words and the combination of texts corresponding to the pronunciation of the word in a plurality of languages.
Referring to figure 5, a pictorial view of the phonetic text column of a system for creating and using phonetic glossary in text to speech generation for audio editing in a multimedia file is shown in accordance with an embodiment of the present invention. The phonetic text column is configured for generating a phonetic glossary, which is made accessible to a user by means of a text box in a graphical user interface for the purpose of the translation editor in the translation/voice-over phase. This contains at least two text boxes side by side in the translation editor, in which a first box is intended for receiving the translation of the text, which may be used to display the subtitle of the video information, and a second box may receive a phonetic text for generating digital voice in synchronism with the video information. The second column, specifically the recording column, allows the user to enter the texts in the way the user intends the digital voice to sound. The application module further produces digital voice in accordance with the phonetic text received in the second text box. The phonetic glossary thus produced contains a first column for the actual text and a second column for the phonetic text, such that similar words need not be entered repeatedly throughout the duration of the video information.
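The two-column arrangement described above can be modelled as a small data structure. The class below is a hypothetical sketch: the first column holds the actual word, the second its phonetic (recording) text, and once an entry exists the recording column can be auto-filled wherever the word recurs.

```python
class PhoneticGlossary:
    """Two-column glossary sketch: actual text vs. phonetic (recording) text.

    A hypothetical model of the repository shown in Figure 5, not the
    invention's actual implementation.
    """

    def __init__(self):
        self._entries = {}  # actual word -> phonetic text

    def add(self, actual, phonetic):
        self._entries[actual] = phonetic

    def recording_text(self, translated_text):
        # Auto-generate the second (recording) column from the first
        # column's translated text. Whitespace tokenisation only; a
        # real editor would also need to handle punctuation.
        return " ".join(
            self._entries.get(word, word)
            for word in translated_text.split()
        )

glossary = PhoneticGlossary()
glossary.add("Shivajinagar", "shi-VAH-jee-nuh-gur")
print(glossary.recording_text("Office near Shivajinagar"))
# prints: Office near shi-VAH-jee-nuh-gur
```

Words without a glossary entry pass through unchanged, mirroring the behaviour where only user-entered pronunciations override the default TTS output.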
ADVANTAGES OF THE INVENTION
1. In the system (100), once the phonetic glossary is introduced, the phonetic text column can be automatically generated from the translated text in the editor and words in the phonetic glossary. This reduces the considerable effort from the user side in generating text for digital voice.
2. The system (100) supports the automatic generation of phonetic glossary from the source texts entered through GUI.
3. The pre- and post-processing of the text data before conversion into audio format yields a reproduced multimedia file with enhanced accuracy in pronunciation.
4. The invention involves modifying the text before sending it for text to speech generation. It can be used in any application where text to speech is used. Using the technique described in this invention, the text can be made more appropriate before being sent for text to speech generation.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present invention and its practical application, and to thereby enable others skilled in the art to best utilize the present invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the claims of the present invention.
CLAIMS:
We claim:
1. A system for creating and using phonetic glossary in text to speech generation (100), the system (100) comprising:
at least one client device operably connected to a communication network via a communication module, the client device having at least one processor functionally coupled to the communication module, at least one storage unit, and a plurality of input/output devices, wherein the storage unit is configured with an application module that is communicatively coupled to the processor,
an application server configured within at least one server device having at least one processing unit operably coupled to at least one communication interface and at least one storage medium, the application server being configured to be in communication with the application module in the client device via the communication network;
wherein the application server is configured with a media file receiver, a subtitle generator, a subtitle editor containing a preprocessing and translator module, an audio generator, and a post processing and mixing module, and the application server is configured to receive a media file from the client device via the application module, receive a selection of an audio language via a user interface, and produce an audio file in the selected language;
wherein the application server is configured to receive media files from the client device by the media file receiver, generate subtitles from the audio data as per a target language by a speech to text engine in the subtitle generator, edit the generated subtitles by the subtitle editor in communication with the client device, translate the generated subtitles by the translator module, produce an audio file with the required pronunciation in the target language as per the phonetic glossary generated during preprocessing of the subtitle file, generate a final audio file of the translated subtitles by the audio generator, create a final video file by mixing the video and the generated audio file by the post processing and mixing module, and provide the multimedia file to the client device.
2. The system as claimed in claim 1, wherein the application server device is a cloud-based application.
3. The system as claimed in claim 1, wherein the server device is communicatively coupled to the client device through API technology.
4. A method for creating and using phonetic glossary in text to speech generation (150) having at least one client device coupled to an application server via a communication network, the client device composed of at least one processor operably coupled to at least one storage unit, a plurality of input/output devices and at least one communication module, wherein the storage unit is configured with an application module that is in communication with the application server, the method comprising steps of:
receiving media files by the media file receiver in the application server via the client device;
receiving selection of audio language from a user by the application server via the client device;
generating a subtitle file from the audio data in the multimedia file by a speech to text engine in the subtitle generator in the application server;
editing the generated subtitles by the subtitle editor in the application server;
translating the text data as per the selected language by a translator module in the application server;
generating a phonetic text library for a plurality of translated texts in the translated subtitles by the translator module in the application server;
creating a phonetic glossary and a repository of repeated words by the preprocessing and translator module in the application server;
creating voice data with the required pronunciation for each entry in the phonetic glossary of the selected language by the audio generator in the application server;
combining the voice data of each audio segment and generating an audio file of the subtitles by the audio generator in the application server;
creating a video file by mixing the video, subtitle file and generated audio file by the post processing and mixing module in the application server; and
delivering the multimedia file to the client device in a format selected thereby.
5. The method as claimed in claim 4, wherein the process of generating a phonetic text library is an automated process.
6. The method as claimed in claim 4, wherein the process of generating a phonetic text library is a manual process by receiving texts from a user via the client device.
7. The method as claimed in claim 4, wherein the subtitle file is generated from the audio data in the multimedia file as per the selected language by a speech to text engine in the subtitle generator in the application server.
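The sequence of steps recited in claim 4 can be sketched as a simple pipeline. The function below is an illustrative sketch only, not the claimed implementation: the translator and synthesizer are passed in as hypothetical stand-ins for the translator module and the audio generator, and the glossary substitution stands in for the preprocessing step that applies the phonetic glossary before text to speech conversion.

```python
def dub_segments(subtitle_segments: list[str], glossary: dict[str, str],
                 translate, synthesize) -> list[bytes]:
    """Sketch of the middle steps of claim 4: translate each subtitle
    segment, apply the phonetic glossary, then synthesize audio per
    segment for later combination into a single audio file."""
    audio_segments = []
    for segment in subtitle_segments:
        translated = translate(segment)
        # Replace each glossary word with its pronunciation text.
        for word, phonetic in glossary.items():
            translated = translated.replace(word, phonetic)
        audio_segments.append(synthesize(translated))
    return audio_segments

# Hypothetical stand-ins: an identity translator and a fake TTS engine
# that just encodes the text (a real system would call a TTS service).
identity_translate = lambda s: s
fake_tts = lambda s: s.encode("utf-8")
clips = dub_segments(["Hello Pune"], {"Pune": "Poo-nay"},
                     identity_translate, fake_tts)
# clips holds one synthesized segment per subtitle segment.
```

Synthesizing per segment, as the claim recites, preserves the original subtitle timing so that the post processing and mixing module can align each generated clip with the video.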
Dated this 18th day of July, 2023
Ragitha. K
(Agent for Applicant)
IN-PA/2832
| # | Name | Date |
|---|---|---|
| 1 | 202221024154-STATEMENT OF UNDERTAKING (FORM 3) [25-04-2022(online)].pdf | 2022-04-25 |
| 2 | 202221024154-PROVISIONAL SPECIFICATION [25-04-2022(online)].pdf | 2022-04-25 |
| 3 | 202221024154-POWER OF AUTHORITY [25-04-2022(online)].pdf | 2022-04-25 |
| 4 | 202221024154-FORM FOR STARTUP [25-04-2022(online)].pdf | 2022-04-25 |
| 5 | 202221024154-FORM FOR SMALL ENTITY(FORM-28) [25-04-2022(online)].pdf | 2022-04-25 |
| 6 | 202221024154-FORM 1 [25-04-2022(online)].pdf | 2022-04-25 |
| 7 | 202221024154-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-04-2022(online)].pdf | 2022-04-25 |
| 8 | 202221024154-EVIDENCE FOR REGISTRATION UNDER SSI [25-04-2022(online)].pdf | 2022-04-25 |
| 9 | 202221024154-DRAWINGS [25-04-2022(online)].pdf | 2022-04-25 |
| 10 | 202221024154-DECLARATION OF INVENTORSHIP (FORM 5) [25-04-2022(online)].pdf | 2022-04-25 |
| 11 | 202221024154-FORM 3 [15-07-2022(online)].pdf | 2022-07-15 |
| 12 | 202221024154-ENDORSEMENT BY INVENTORS [15-07-2022(online)].pdf | 2022-07-15 |
| 13 | 202221024154-PostDating-(25-04-2023)-(E-6-76-2023-MUM).pdf | 2023-04-25 |
| 14 | 202221024154-APPLICATIONFORPOSTDATING [25-04-2023(online)].pdf | 2023-04-25 |
| 15 | 202221024154-FORM 3 [19-07-2023(online)].pdf | 2023-07-19 |
| 16 | 202221024154-ENDORSEMENT BY INVENTORS [19-07-2023(online)].pdf | 2023-07-19 |
| 17 | 202221024154-DRAWING [19-07-2023(online)].pdf | 2023-07-19 |
| 18 | 202221024154-COMPLETE SPECIFICATION [19-07-2023(online)].pdf | 2023-07-19 |
| 19 | 202221024154-FORM-9 [25-08-2023(online)].pdf | 2023-08-25 |
| 20 | 202221024154-STARTUP [29-08-2023(online)].pdf | 2023-08-29 |
| 21 | 202221024154-FORM28 [29-08-2023(online)].pdf | 2023-08-29 |
| 22 | 202221024154-FORM 18A [29-08-2023(online)].pdf | 2023-08-29 |
| 23 | Abstract1.jpg | 2023-10-05 |
| 24 | 202221024154-FER.pdf | 2023-12-20 |
| 25 | 202221024154-RELEVANT DOCUMENTS [19-06-2024(online)].pdf | 2024-06-19 |
| 26 | 202221024154-PETITION UNDER RULE 137 [19-06-2024(online)].pdf | 2024-06-19 |
| 27 | 202221024154-OTHERS [19-06-2024(online)].pdf | 2024-06-19 |
| 28 | 202221024154-FER_SER_REPLY [19-06-2024(online)].pdf | 2024-06-19 |
| 29 | 202221024154-COMPLETE SPECIFICATION [19-06-2024(online)].pdf | 2024-06-19 |
| 30 | 202221024154-US(14)-HearingNotice-(HearingDate-21-10-2024).pdf | 2024-10-03 |
| 31 | 202221024154-FORM-26 [18-10-2024(online)].pdf | 2024-10-18 |
| 32 | 202221024154-Correspondence to notify the Controller [18-10-2024(online)].pdf | 2024-10-18 |
| 33 | 202221024154-Written submissions and relevant documents [29-10-2024(online)].pdf | 2024-10-29 |
| 34 | 202221024154-Form-4 u-r 138 [16-12-2024(online)].pdf | 2024-12-16 |
| 35 | 202221024154-Written submissions and relevant documents [17-12-2024(online)].pdf | 2024-12-17 |
| 36 | 202221024154-Annexure [17-12-2024(online)].pdf | 2024-12-17 |
| 37 | 202221024154-PatentCertificate26-12-2024.pdf | 2024-12-26 |
| 38 | 202221024154-IntimationOfGrant26-12-2024.pdf | 2024-12-26 |
| 1 | 202221024154E_11-12-2023.pdf | |