Abstract: Method and system for media content management. The system generates a summary of a video using super frames extracted from the video. The generated video summary data is further used for video navigation, content retrieval, and video search.
The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:-
TECHNICAL FIELD
[001] The embodiments herein relate to electronic devices and, more particularly, to media content management in electronic devices.
BACKGROUND
[002] Modern-day electronic devices such as mobile phones, cameras (video/still), tablets, and computers are equipped with media capture, storage, and management features and associated technologies.
[003] Devices such as smart phones, tablet computers, and so on, which are popular among the public, are also provided with local as well as cloud based storage. A single user might be using different devices, and data might be stored in different locations. As a result, data management becomes a matter of concern. In addition, data storage constraints also arise. For example, most mobile phones in this era are equipped with high quality cameras; as a result, photographs occupy more storage space. Further, when multiple devices are used to capture images/audio/video of the same event, data redundancy problems also arise.
[004] Further, from the user's perspective, it is quite difficult to manage the data. For example, when the user needs to search for a particular content (be it a video, audio, or image), the system may take more time to retrieve the results if the amount of data stored in the system is large. Further, arranging and rearranging data can be a tedious process as well. Consider an example where the user has a set of videos taken on a holiday; currently, the user has to watch the videos individually and has to arrange them manually.
OBJECT OF INVENTION
[005] An object of the embodiments herein is to generate a video summary using super frames extracted from a video.
[006] Another object of the embodiments herein is to index video files using video summarization.
[007] Another object of the embodiments herein is to retrieve action sequences using video summarization.
[008] Another object of the embodiments herein is to retrieve video files matching an image query, as moment recall.
[009] Another object of the embodiments herein is to generate a master summary of two or more selected videos, wherein the master summary comprises of selected frames from the selected videos.
[0010] Another object of the embodiments herein is storage space optimization using video summarization.
SUMMARY
[0011] In view of the foregoing, an embodiment herein provides a method for video summarization. Initially, at least one sequence of frames in a video is identified by a User Equipment (UE). Further, the identified sequence of frames is extracted as at least one super frame of the video, by the UE. Further, a video summary is generated using the extracted super frame, by the UE, based on a pre-determined criterion.
[0012] Embodiments further disclose a system for video summarization. The system is configured for identifying at least one sequence of frames in a video, by a User Equipment (UE). The UE further extracts the sequence of frames as at least one super frame of the video. The UE further generates a video summary using the extracted super frame, based on a pre-determined criterion.
[0013] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
[0014] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0015] FIG. 1 illustrates a block diagram of User Equipment (UE) used for video summarization, as disclosed in the embodiments herein;
[0016] FIG. 2 illustrates a block diagram that shows components of the UE, as disclosed in the embodiments herein;
[0017] FIG. 3 is a flow diagram that depicts steps involved in the process of generating a video summary using super frames, as disclosed in the embodiments herein;
[0018] FIG. 4 is a flow diagram that depicts steps involved in the process of video summary based video navigation, using the UE, as disclosed in the embodiments herein;
[0019] FIG. 5 is a flow diagram that depicts steps involved in the process of video summary based action summary retrieval, using the UE, as disclosed in the embodiments herein;
[0020] FIG. 6 is a flow diagram that depicts steps involved in the process of using video summary for moment recall, as disclosed in the embodiments herein; and
[0021] FIG. 7 is a flow diagram that depicts steps involved in the process of using video summary for storage space optimization, as disclosed in the embodiments herein.
DETAILED DESCRIPTION OF EMBODIMENTS
[0022] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0023] The embodiments herein disclose a mechanism for media management. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown embodiments.
[0024] FIG. 1 illustrates a block diagram of User Equipment (UE) used for video summarization, as disclosed in the embodiments herein. The UE 101 can be any electronic device that can store data in at least one format. The UE 101 can further be configured to have at least one means for capturing and storing media in at least one such format. The UE 101 can be further configured to save data in local memory, cloud based storage space, or both. The UE 101 can be further configured to have at least one means to display media content to users. The UE 101 can be further configured to support at least one option that allows a user to interact with the UE 101 so as to manage the data. The UE 101 can be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or any such device.
[0025] FIG. 2 illustrates a block diagram that shows components of the UE, as disclosed in the embodiments herein. The UE 101 comprises of an Input/Output (I/O) interface 201, a video summarization engine 202, a memory module 203, a navigation module 204, a content retrieval module 205, and a master summary generator 206.
[0026] The I/O interface 201 can be configured to allow users to interact with the UE 101, so as to perform at least one function related to data management, data capture, and any such related activity. The I/O interface 201 can be in any suitable form such as, but not limited to, a keypad and a touch screen display. Further, the I/O interface 201 provides options for the user to initiate and control any function associated with data capture and management. The I/O interface 201 may further be associated with at least one means for capturing media content, and/or may receive/collect contents from an external source. The external source referred to herein can be the Internet, an external hard disk, and so on.
[0027] The video summarization engine 202 can be configured to identify action sequences in a collected video, extract corresponding super frames, and generate a video summary corresponding to the video, using the extracted super frames. The term 'Super frame' may refer to frames that represent unique action scenes from the video being processed. In an embodiment, the video summarization engine 202 automatically initiates the video summarization as and when a new video is collected and saved in the memory module 203. In another embodiment, the video summarization engine 202 performs video summarization upon receiving an instruction from the user.
[0028] The memory module 203 can be configured to store media contents of different types and different formats, in corresponding media databases, and provide them to other components of the UE 101, for further processing, upon receiving a data request. In various embodiments, the memory module 203 can be internal or external to the UE 101. Further, the memory module 203 can be of fixed size or expandable. The memory module 203 can be further configured to store the video summary generated for each video stored in the media databases, in the same or different databases. The memory module 203 can also be configured to support media content indexing so as to support quick content search and retrieval.
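By way of a non-limiting illustration, the following Python sketch shows one way the media content indexing mentioned above might be organised; the class name, identifiers, and feature representation are hypothetical and are not prescribed by the embodiments herein.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical index: maps a video identifier to the feature vectors of the
# super frames kept in that video's stored summary.
@dataclass
class VideoLibraryIndex:
    entries: Dict[str, List[List[float]]] = field(default_factory=dict)

    def add_summary(self, video_id: str, super_frame_features: List[List[float]]) -> None:
        # Store (or replace) the summary features for one video.
        self.entries[video_id] = super_frame_features

    def videos(self) -> List[str]:
        # List the videos currently represented in the index.
        return list(self.entries.keys())

index = VideoLibraryIndex()
index.add_summary("holiday_clip_01", [[0.12, 0.80, 0.33], [0.45, 0.10, 0.92]])
print(index.videos())  # ['holiday_clip_01']
```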
[0029] The navigation module 204 can be configured to perform video navigation. The video navigation process is intended to allow the user quick access to different action sequences in the video. While a video is being played, the navigation module 204 identifies, based on the video summary generated and stored for that video in the memory module 203, the super frames associated with the video, and presents them to the user as a reel of super frames. Further, the navigation module 204 collects an input from the user, wherein the input pertains to selection of a particular super frame from the super frames that are part of the reel displayed to the user. Further, the navigation module 204 redirects the user to the part of the video where the selected super frame is displayed.
[0030] The content retrieval module 205 can be configured to collect a search query from the user, wherein the search query may comprise of at least a portion of at least one type of media file. In an embodiment, the search query may be instantly created by the user, based on a media content being viewed. For example, while watching a video file, the user may, using suitable options, select a particular portion of the video and provide the selected portion as the search query. The content retrieval module 205, upon receiving the search query, searches among the contents stored in the memory module 203, preferably among the summary videos that are represented by a video library index, and identifies all matching contents. Further, the content retrieval module 205 presents the identified contents to the user, using the I/O interface 201.
[0031] The master summary generator 206 can be configured to generate, for two or more selected videos, a master summary that comprises of selected frames from the selected videos. The master summary generator 206 may identify super frames from the video summaries generated for the selected videos, and combine them to generate the master summary. In an embodiment, the master summary generator 206 receives a user selection pertaining to the videos to be used to generate the master summary. In another embodiment, the master summary generator 206 automatically identifies, from the memory module 203, contents that are related to each other, selects them, and then generates the master summary for the selected videos. The master summary generator 206 can identify related contents based on at least one parameter such as, but not limited to, the date on which the content has been generated and stored, and tags.
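A minimal, hypothetical sketch of how the master summary generator 206 might combine super frames from related videos, assuming each summary is a list of super-frame identifiers and that relatedness is judged by the capture date or tags mentioned above; the names and data shapes are illustrative only.

```python
from typing import Dict, List

# Hypothetical metadata: capture date and tags for each stored video.
def related_by_date_or_tags(meta_a: Dict, meta_b: Dict) -> bool:
    # Treat two videos as related if captured on the same date or sharing a tag.
    return (meta_a["date"] == meta_b["date"]
            or bool(set(meta_a["tags"]) & set(meta_b["tags"])))

def generate_master_summary(summaries: Dict[str, List[str]],
                            selected: List[str]) -> List[str]:
    # Concatenate the super frames of the selected videos into one master summary.
    master = []
    for video_id in selected:
        master.extend(summaries[video_id])
    return master

summaries = {"v1": ["sf1", "sf2"], "v2": ["sf3"], "v3": ["sf4", "sf5"]}
meta = {"v1": {"date": "2015-03-28", "tags": ["beach"]},
        "v2": {"date": "2015-03-28", "tags": ["dinner"]},
        "v3": {"date": "2015-04-02", "tags": ["beach"]}}

# Automatic selection: pick videos related to "v1" by date or tags.
selected = ["v1"] + [v for v in ("v2", "v3") if related_by_date_or_tags(meta["v1"], meta[v])]
print(generate_master_summary(summaries, selected))
```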
[0032] FIG. 3 is a flow diagram that depicts steps involved in the process of generating a video summary using super frames, as disclosed in the embodiments herein. Once a video is selected, automatically or based on a user instruction, the video summarization engine 202 identifies (302) frames that represent different actions in the selected video. Further, the video summarization engine 202 extracts (304) the identified frames as the super frames corresponding to that particular video.
[0033] After identifying the super frames, the video summarization engine 202, based on at least one pre-determined criterion, generates a video summary from the identified super frame(s). In an embodiment, the pre-determined criterion is an interestingness score. The video summarization engine 202 determines (306) the interestingness of each extracted super frame, as an interestingness score. In an embodiment, the interestingness score is determined based on at least one criterion pre-configured by the user.
[0034] In an embodiment, the 'interestingness' is decided based on the amount of 'new information' present in the super frame being considered. Assume that at time T, the Mth super-frame is being processed, and that a dictionary consisting of N super-frames (represented as spatio-temporal features) is available. The Mth super-frame is compared against all contents of the dictionary using pre-configured matching criteria, and the number of matches, K, is identified. If the value of K exceeds a pre-defined threshold, the interestingness score of the Mth super-frame is set as 'high'. Further, the Mth super-frame is added to the dictionary by removing an already existing super-frame from the dictionary, thereby updating the dictionary. In an embodiment, the super-frame that matches most with the rest of the super-frames in the dictionary is chosen to be removed. In another embodiment, the dictionary is updated based on the interestingness score of the super-frame as well. For example, the interestingness score of the new super-frame being considered is compared with the interestingness score of the super-frame that has the lowest interestingness score among all existing super-frames in the dictionary. If the interestingness score of the new super-frame is found to be higher, the dictionary is updated by replacing that existing super-frame with the new super-frame. If the value of K is lower than the threshold, the interestingness score of the Mth super-frame is set as 'low', and the Mth super-frame is not added to the dictionary.
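The dictionary-based scoring described above can be sketched as follows; the feature representation, the distance test standing in for the pre-configured matching criteria, and the numeric threshold are all assumptions made only for illustration.

```python
from typing import List, Tuple

MATCH_DISTANCE = 0.5   # illustrative matching criterion (feature distance)
MATCH_THRESHOLD = 1    # illustrative pre-defined threshold on the number of matches

def count_matches(frame: List[float], dictionary: List[List[float]]) -> int:
    # A super frame "matches" a dictionary entry if their feature vectors are close.
    def dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(1 for entry in dictionary if dist(frame, entry) < MATCH_DISTANCE)

def score_super_frame(frame: List[float],
                      dictionary: List[List[float]]) -> Tuple[str, List[List[float]]]:
    # Returns the interestingness label and the (possibly updated) dictionary.
    matches = count_matches(frame, dictionary)
    if matches > MATCH_THRESHOLD:
        # High interestingness: add the new frame, evicting the entry that matches
        # the rest of the dictionary the most (one of the update rules described above).
        evict = max(range(len(dictionary)),
                    key=lambda i: count_matches(dictionary[i],
                                                dictionary[:i] + dictionary[i + 1:]))
        updated = dictionary[:evict] + dictionary[evict + 1:] + [frame]
        return "high", updated
    # Low interestingness: the frame is not added and the dictionary is unchanged.
    return "low", dictionary

dictionary = [[0.10, 0.10], [0.12, 0.10], [0.90, 0.90]]
label, dictionary = score_super_frame([0.11, 0.10], dictionary)
print(label)  # 'high' here, since the frame matches more than MATCH_THRESHOLD entries
```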
[0035] Further, the determined interestingness score is compared with a threshold value of interestingness, wherein the threshold value of interestingness is pre-determined and pre-configured. If the determined interestingness score is equal to or exceeds the threshold value, the corresponding super frame is selected for generating the video summary. Further, by using the selected super frames, the video summary is generated (310). The various actions in method 300 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
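A short, illustrative sketch of the selection step, assuming a numeric interestingness score is already available for each super frame; the identifiers and the threshold value are hypothetical.

```python
from typing import List, Tuple

INTERESTINGNESS_THRESHOLD = 0.6  # illustrative pre-configured threshold value

def generate_video_summary(scored_super_frames: List[Tuple[str, float]]) -> List[str]:
    # Keep every super frame whose interestingness score is equal to or exceeds
    # the threshold, preserving the original temporal order of the frames.
    return [frame_id for frame_id, score in scored_super_frames
            if score >= INTERESTINGNESS_THRESHOLD]

scored = [("sf_001", 0.90), ("sf_002", 0.30), ("sf_003", 0.75)]
print(generate_video_summary(scored))  # ['sf_001', 'sf_003']
```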
[0036] FIG. 4 is a flow diagram that depicts steps involved in the process of video summary based video navigation, using the UE, as disclosed in the embodiments herein. While a selected video is being played, the navigation module 204 identifies, based on the video summary generated and stored for that video in the memory module 203, the super frames associated with the video. In an embodiment, only those super frames having a high interestingness value are selected, and the selected super frames are then displayed (402) to the user as a reel of super frames. The user can, using a suitable user interface, select at least one super frame from the reel being displayed.
[0037] The navigation module 204 receives (404) an input pertaining to the user selection of a particular super frame, and identifies (406) the specific portion of the video being played from which the selected super frame was extracted. Further, the navigation module 204 navigates/redirects (408) the user to that part of the video. The various actions in method 400 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
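A hypothetical sketch of the navigation flow of FIG. 4, assuming each super frame in the reel carries the playback offset of the portion it was extracted from; the player seek itself is represented only by the returned offset.

```python
from typing import Dict, List

# Hypothetical reel: super-frame identifier -> playback offset (seconds) of the
# portion of the video the super frame was extracted from.
reel: Dict[str, float] = {"sf_kickoff": 12.0, "sf_goal": 431.5, "sf_celebration": 440.0}

def display_reel(reel: Dict[str, float]) -> List[str]:
    # Only high-interestingness super frames would be placed in the reel.
    return list(reel.keys())

def navigate_to(selected_frame: str, reel: Dict[str, float]) -> float:
    # Identify the portion of the video represented by the selected super frame
    # and return the position the player should seek to.
    return reel[selected_frame]

print(display_reel(reel))
print(navigate_to("sf_goal", reel))  # 431.5 -> the player seeks to this offset
```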
[0038] FIG. 5 is a flow diagram that depicts steps involved in the process of video summary based action summary retrieval, using the UE, as disclosed in the embodiments herein. The content retrieval module 205 in the UE 101 collects (502) a search query from the user, wherein the search query may comprise of at least a portion of at least one type of media file. For example, if the user intends to search across all videos represented in the media library index, the search query can be a portion of any video. For example, while watching a video file, the user may, using suitable options provided by the content retrieval module 205 and the I/O interface 201, select a particular portion of the video and provide the selected portion as the search query.
[0039] The content retrieval module 205, upon receiving the search query, extracts (504) all super frames from the query video, and compares (506) the extracted super frames with the video library index. By comparing the super frames, the content retrieval module 205 identifies (508) and retrieves (510) all matching contents from the video library. Further, the identified matches are displayed to the user. For example, if the search query video is of a penalty kick being taken in a football match, the content retrieval module 205, by searching, identifies all videos in the library that have at least one similar super-frame (that displays the penalty kick), and displays the search results to the user.
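The comparison of query super frames against the video library index can be sketched, for illustration only, as follows; the feature vectors and the distance-based match test are assumptions, not part of the specification.

```python
from typing import Dict, List

MATCH_DISTANCE = 0.5  # illustrative matching criterion, as in the scoring sketch

def is_similar(a: List[float], b: List[float]) -> bool:
    # Two super frames are treated as similar if their features are close.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < MATCH_DISTANCE

def retrieve_matching_videos(query_super_frames: List[List[float]],
                             library_index: Dict[str, List[List[float]]]) -> List[str]:
    # A library video matches the query if at least one of its indexed super
    # frames is similar to at least one super frame extracted from the query.
    return [video_id for video_id, indexed in library_index.items()
            if any(is_similar(q, f) for q in query_super_frames for f in indexed)]

library_index = {"match_highlights": [[0.20, 0.70]], "training_session": [[0.90, 0.10]]}
print(retrieve_matching_videos([[0.25, 0.68]], library_index))  # ['match_highlights']
```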
[0040] The various actions in method 500 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 5 may be omitted.
[0041] FIG. 6 is a flow diagram that depicts steps involved in the process of using video summary for moment recall, as disclosed in the embodiments herein. The term 'moment recall' refers to a feature that allows video summaries matching a query input to be collected, wherein the query input is an image. The UE 101 initiates the moment recollection by collecting (602) an image as a query input. Further, the collected query input is compared (604) with a database in an associated storage space, in which the video summary pertaining to at least one video is stored.
[0042] By comparing the query input with the video summaries in the database, at least one video summary that matches the query input is identified. Any suitable image and/or video processing and comparison algorithm can be used to compare the query input with the video summaries. In various embodiments, parameters such as, but not limited to, a time stamp and a geo tag associated with the query input as well as with the video summaries are considered to identify a match.
[0043] Upon identifying at least one match, the detected match is provided (608) as output, in response to the query input, in a suitable format, using at least one suitable interface. If no match is found, then a pre-configured message indicating that no result is found is displayed (610) to the user, using a suitable interface.
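A minimal, illustrative sketch of moment recall, in which time stamps and geo tags stand in for a full image/video comparison algorithm; the field names and the matching rule are hypothetical.

```python
from typing import Dict, List

def recall_moments(query: Dict, summaries: List[Dict]) -> List[str]:
    # Compare the image query against each stored video summary. Here the date
    # of the time stamp must agree and, when both sides carry a geo tag, the
    # geo tags must agree as well (stand-ins for a real comparison algorithm).
    results = []
    for summary in summaries:
        same_day = summary["date"] == query["date"]
        geo_ok = (query.get("geo") is None or summary.get("geo") is None
                  or summary["geo"] == query["geo"])
        if same_day and geo_ok:
            results.append(summary["video_id"])
    return results

summaries = [{"video_id": "v1", "date": "2015-03-28", "geo": "Chennai"},
             {"video_id": "v2", "date": "2015-04-02", "geo": "Chennai"}]
query = {"date": "2015-03-28", "geo": "Chennai"}

found = recall_moments(query, summaries)
print(found if found else "No result found")  # ['v1']
```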
[0044] The various actions in method 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
[0045] FIG. 7 is a flow diagram that depicts steps involved in the process of using video summary for storage space optimization, as disclosed in the embodiments herein. The user can initiate (702) video recording using the UE 101.
[0046] The UE 101 can be configured to monitor recording of the video, and detect at least one trigger of a pre-defined type, for storage space optimization. For example, the available storage space can go below a set value, i.e., a threshold limit of storage space which has been pre-configured in the UE 101. Further, the trigger can be at least one of, or a combination of, a manual input provided by the user, the available storage space being less than a threshold value, and/or any such event as pre-defined by a user.
[0047] Upon receiving at least one trigger for the storage space optimization, the UE 101 dynamically generates (706) a summary of the video being recorded, and stores the video summary in the corresponding storage space, instead of the actual video.
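A hypothetical sketch of the storage space optimization flow of FIG. 7, assuming the free-space trigger is checked with Python's standard shutil.disk_usage and the summarization itself is represented by a placeholder function.

```python
import shutil

STORAGE_THRESHOLD_BYTES = 500 * 1024 * 1024  # illustrative pre-configured limit

def storage_trigger(path: str = "/", manual_request: bool = False) -> bool:
    # A trigger fires on a manual user request, or when the available storage
    # space falls below the pre-configured threshold.
    free_bytes = shutil.disk_usage(path).free
    return manual_request or free_bytes < STORAGE_THRESHOLD_BYTES

def store_recording(frames, summarize):
    # While recording, keep the full video unless a trigger is detected, in
    # which case only the dynamically generated summary is stored.
    if storage_trigger():
        return {"type": "summary", "data": summarize(frames)}
    return {"type": "full_video", "data": frames}

# 'summarize' stands in for the super-frame based summarization described earlier.
result = store_recording(["f1", "f2", "f3"], summarize=lambda frames: frames[::2])
print(result["type"])
```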
[0048] The various actions in method 700 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 7 may be omitted.
[0049] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in FIG. 2 include blocks which can be at least one of a hardware device, or a combination of a hardware device and a software module.
[0050] The embodiments disclosed herein specify a mechanism for media content management in an electronic device. The mechanism allows super frame based video summary generation for various applications, and provides a system thereof. Therefore, it is understood that the scope of protection is extended to such a system and, by extension, to a computer readable means having a message therein, said computer readable means containing program code for implementation of one or more steps of the method, when the program runs on a server, a mobile device, or any suitable programmable device. The method is implemented in a preferred embodiment using the system together with a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed, including, for example, any kind of computer such as a server or a personal computer, or the like, or any combination thereof, for example, one processor and two FPGAs. The device may also include means which could be, for example, hardware means such as an ASIC, or a combination of hardware and software means, such as an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means or at least one hardware-cum-software means. The method embodiments described herein could be implemented purely in hardware or partly in hardware and partly in software. Alternatively, the embodiments may be implemented on different hardware devices, for example, using a plurality of CPUs.
[0051] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.
CLAIMS
What is claimed is:
1) A method for video summarization, the method comprising:
identifying at least one sequence of frames in a video, by a User Equipment (UE);
extracting the at least one sequence of frames as at least one super frame of the video, by the UE; and
generating a video summary using the at least one extracted super frame, based on at least one predetermined criterion, by the UE.
2) The method as claimed in claim 1, wherein the at least one pre-determined criterion is an interestingness score of the super frame.
3) The method as claimed in claim 2, wherein determining the interestingness score of the at least one super frame further comprises of:
comparing the super frame with a dictionary of super frames, by the UE;
identifying extent to which the super frame matches with contents of the dictionary, by the UE;
determining interestingness of the super frame as high, by the UE, if amount of match between the super frame and contents of the dictionary exceeds value of a pre-configured threshold limit; and
determining interestingness of the super frame as low, by the UE, if amount of match between the super frame and contents of the dictionary is lower than the threshold limit.
4) The method as claimed in claim 3, wherein the dictionary is updated by adding at least one new super-frame, wherein adding the at least one super-frame comprises of:
identifying an existing super frame with the lowest value of interestingness score among all existing super frames in the dictionary;
comparing interestingness score of the at least one super-frame with interestingness score of the identified existing super frame; and
replacing the existing super-frame with the at least one super frame, if interestingness score of the at least one super frame is more than that of the existing super-frame.
5) The method as claimed in claim 1, wherein the video summary is used for video navigation, wherein using the video summary for video navigation further comprises of:
displaying a reel of super frames from the video summary, by the UE, wherein the reel of super frames comprises of super frames with interestingness score as high;
receiving a user selection for at least one super frame from the reel of super frames, by the UE;
identifying a specific portion of the video being represented by the super frame being received as the user selection, by the UE; and
navigating to the specific portion of the video, by the UE.
6) The method as claimed in claim 1, wherein the video summary is used for indexing a video library, by the UE, wherein indexing the video library using the video summarization further comprises of generating and storing video summary corresponding to all videos in the video library.
7) The method as claimed in claim 6, wherein the video library index is used for an action summary retrieval, wherein using the video library index for the action summary retrieval further comprises of:
receiving a query video from a user, by the UE, wherein the query video comprises of at least one frame corresponding to at least one action;
extracting super frames from the query video, by the UE;
comparing the extracted super frames with the video library index, by the UE;
identifying at least one match for the extracted super frames, in the video library index, by the UE; and
retrieving at least one video summary corresponding to the identified match in the video library index, by the UE.
8) The method as claimed in claim 1, wherein the video summarization is used for recalling a memory, wherein the recalling of memory based on the video summarization further comprises of:
collecting a query input from a user, by the UE, wherein the query input is an image;
identifying at least one match for the query input among a plurality of video summaries saved in a storage space associated with the UE, by the UE; and
providing the at least one match as response to the query input, by the UE.
9) The method as claimed in claim 1, wherein the video summarization is used for storage space optimization in the UE, wherein the storage space optimization using the video summarization further comprises of:
initiating video recording, by the UE;
detecting at least one trigger for the storage space optimization, while the video is being recorded, by the UE;
generating a video summary for the video being recorded, dynamically upon detecting the at least one trigger, by the UE; and
storing the generated video summary in a storage space associated with the UE, in place of the video being recorded, by the UE.
10) A system for video summarization, the system configured for:
identifying at least one sequence of frames in a video, by a User Equipment (UE);
extracting the at least one sequence of frames as at least one super frame of the video, by the UE; and
generating a video summary using the at least one extracted super frame, based on at least one predetermined criterion, by the UE.
11) The system as claimed in claim 10, wherein the UE is configured to use an interestingness score corresponding to the at least one super frame as the pre-determined criterion for generating the video summary.
12) The system as claimed in claim 11, wherein the system is configured to determine the interestingness score of the at least one super frame by:
comparing the super frame with a dictionary of super frames, by the UE;
identifying extent to which the super frame matches with contents of the dictionary, by the UE;
determining interestingness of the super frame as high, by the UE, if amount of match between the super frame and contents of the dictionary exceeds value of a pre-configured threshold limit; and
determining interestingness of the super frame as low, by the UE, if amount of match between the super frame and contents of the dictionary is lower than the threshold limit.
13) The system as claimed in claim 12, wherein said system is configured to update the dictionary by adding at least one new super-frame, by:
identifying an existing super frame with the lowest value of interestingness score among all existing super frames in the dictionary, by said UE;
comparing interestingness score of the at least one super-frame with interestingness score of the identified existing super frame, by said UE; and
replacing the existing super-frame with the at least one super frame, if interestingness score of the at least one super frame is more than that of the existing super-frame, by said UE.
14) The system as claimed in claim 10, wherein the system is configured to use the video summary for video navigation by:
displaying a reel of super frames from the video summary, by the UE, wherein the reel of super frames comprises of super frames with interestingness score as high;
receiving a user selection for at least one super frame from the reel of super frames, by the UE;
identifying a specific portion of the video being represented by the super frame being received as the user selection, by the UE; and
navigating to the specific portion of the video, by the UE.
15) The system as claimed in claim 10, wherein the system is configured to use the video summary for indexing a video library by generating and storing video summary corresponding to all videos in the video library, by the UE.
16) The system as claimed in claim 15, wherein the system is configured to use the video library index for an action summary retrieval by:
receiving a query video from a user, by the UE, wherein the query video comprises of at least one frame corresponding to at least one action;
extracting super frames from the query video, by the UE;
comparing the extracted super frames with the video library index, by the UE;
identifying at least one match for the extracted super frames, in the video library index, by the UE; and
retrieving at least one video summary corresponding to the identified match in the video library index, by the UE.
17) The system as claimed in claim 10, wherein the system is configured to use the video summarization for recalling a memory by:
collecting query input from a user, by the UE, wherein the query input is an image;
identifying at least one match for the query input among a plurality of video summaries saved in a storage space associated with the UE, by the UE; and
providing the at least one match as response to the query input, by the UE.
18) The system as claimed in claim 10, wherein the video summarization is used for storage space optimization in the UE, wherein the storage space optimization using the video summarization further comprises of:
initiating video recording, by the UE;
detecting at least one trigger for the storage space optimization, while the video is being recorded, by the UE;
generating a video summary for the video being recorded, dynamically upon detecting the at least one trigger, by the UE; and
storing the generated video summary in a storage space associated with the UE, in place of the video being recorded, by the UE.
Dated this 19th February, 2016
Signature:
Name of the Signatory: Dr. Kalyan Chakravarthy