
A Method And System For Presenting Information For Entities Displayed On A Video Content

Abstract: A method and system for presenting information for entities displayed on a video content is provided. The system includes a detection module for identifying entities, a tag manager for providing a tag for each of the entities, an aggregator module for aggregating information associated with an entity selected by a user, and a presentation module for presenting the information. The method includes receiving an input from a user, obtaining information associated with an entity selected by the user, aggregating the information associated with the entity, providing multiple presentation modes to the user and presenting the information associated with the entity based on one of the multiple presentation modes.


Patent Information

Application #: 1755/CHE/2012
Filing Date: 07 May 2012
Publication Number: 07/2014
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

SAMSUNG ELECTRONICS COMPANY
416 MAETAN-DONG, YEONGTONG-GU, SUWON-SI, GYEONGGI-DO 442-724

Inventors

1. SUBRAMANIAN MUTHUKUMAR
3/14, KAVALKARAN STREET, PERIYANDANKOVIL-WEST, KARUT-639 002
2. NEERAJ SOLANKI
S/O SHRI O.P. SOLANKI, S.D. - 108, SHASTRI NAGAR, GHAZIABAD,UP
3. RAVI KANT BANSAL
S/O SHRI DINESH KUMAR GUPTA A-26, KUMAWAT COLONY, KHHATIPURA ROAD, JHOTWARA JAIPUR, RAJASTHAN - 302 012
4. GINJAN AGRAWAL
S/O SHRI ANAND PRAKASH AGRAWAL C-11, SHIVALIK NAGAR, BHEL, HARIDWAR
5. PANKAJ THAKUR
S/O SHRI CHAGRAM THAKUR D-26, EVEREST HEIGHT, BIMAN NAGAR, PUNE, 411 014
6. DEVENDRA KHANDELWAL
S/O SHRI SURESH KHANDELWAL 31, SITARAM COLONY OPP. GOSHALA TONK ROAD SANGANER JAIPU, RAJASTHAN
7. AJIT JAIN
S/O DR. VISHWA NATH B-8/137, SANGANER ROHINI, DELHI-110 085

Specification

A METHOD AND SYSTEM FOR PRESENTING INFORMATION FOR ENTITIES DISPLAYED ON A VIDEO CONTENT

FIELD OF THE INVENTION

[0001] The present invention relates to the field of multimedia and, more particularly, to the detection and presentation of one or more entities present in a video content and displayed on a display device.

BACKGROUND

[0002] In recent times, digital televisions have become increasingly popular. In one example, digital televisions are used for, but not limited to, watching a video content by a user. Examples of the video content include, but are not limited to, a live video, a streaming video and a stored video. Presenting information about one or more entities while watching the video content enables a user to gain knowledge of the entities. The information, in one example, can include a name of the entity, a location associated with the entity, a brief description of the entity or a detailed description of the entity. Examples of the entities include, but are not limited to, individuals, monuments, objects, products and living beings, for example, animals, birds and trees. However, no system is available for presenting such information so that the user can gain knowledge of the entities while watching. Consequently, the user obtains the information associated with the entities at a later time using one or more information sources, for example, books, the internet and the like.

[0003] A conventional technique aims at detecting the faces of one or more individuals. Each individual is associated with a profile. The profile includes information content of the individual in the form of text. When the face of an individual is detected, the profile associated with that individual is retrieved. Upon retrieving the profile, the information content included in the profile is presented, to a user, on a display device. The information content is presented in a video form, an audio form, a textual form, graphic information or a combination of these forms. However, the technique provides merely the information that is stored prior to presenting on the display device. Consequently, information associated with a live event that is broadcast is not available. Further, the technique is restricted to detecting and presenting information associated with merely individuals and not other entities, for example, monuments, locations, products, objects, birds, trees and the like.

[0004] In another example, a method and a system perform facial detection of multiple individuals included in a live video conference. The method includes detecting faces of individuals included in the live video conference. Further, the detected faces are compared with multiple faces stored in a storage device to determine a match. A person's voice can also be detected along with the face for identifying an individual. Upon determining the individual, annotation is performed. The annotation includes personal information of the individual determined. The personal information of the individual is displayed in the form of a streaming video. A storage device is used for storing annotated information associated with each individual detected. However, the system is limited to capturing only faces of individuals included in the live video stream. Further, the system is unable to detect and annotate other entities, for example, monuments, locations, products, objects, birds, trees and the like.

[0005] In the light of the foregoing discussion, there is a need for a system and a method for detecting one or more entities present in a video content, for example, a live video, a streaming video, a stored video or an image, that are displayed on a display device, and subsequently presenting information associated with the entities to the user.

SUMMARY

[0006] Embodiments of the present disclosure described herein provide a system and a method for presenting information for entities displayed on a video content.

[0007] An example of a system for presenting information for entities displayed on a video content includes a detection module for identifying multiple entities, the multiple entities being displayed on the video content. The system also includes a tag manager for providing a tag associated with each of the multiple entities. The system further includes an aggregator module for aggregating information associated with an entity selected by a user, the aggregation being performed upon obtaining the information from various information sources. Further, the system includes a presentation module for presenting the information associated with each of the entities in multiple presentation modes.

[0008] An example of a method of presenting information for entities displayed on a video content includes receiving an input from a user, the input including activation of an information fetching mode by the user. The method also includes obtaining information associated with an entity selected by the user, the information being obtained from multiple information sources. The method further includes aggregating the information associated with the entity. Further, the method includes providing multiple presentation modes to the user, the presentation modes being used to present the information associated with the entity in multiple data formats. Moreover, the method includes presenting the information associated with the entity based on one of the presentation modes selected by the user.

BRIEF DESCRIPTION OF FIGURES

[0009] In the accompanying figures, similar reference numerals may refer to identical or functionally similar elements. These reference numerals are used in the detailed description to illustrate various embodiments and to explain various aspects and advantages of the present disclosure.

[0010] FIG. 1 is a block diagram of an environment in accordance with which various embodiments can be implemented;

[0011] FIG. 2 is a block diagram of a system for presenting information for entities displayed on a video content, in accordance with one embodiment;

[0012] FIG. 3 is a flowchart illustrating a method of presenting information for entities displayed on a video content, in accordance with one embodiment; and

[0013] FIG. 4 is an exemplary illustration of presenting information for entities displayed on a video content, in accordance with one embodiment.

DETAILED DESCRIPTION

[0014] It should be observed that the method steps and system components have been represented by conventional symbols in the figures, showing only those specific details that are relevant for an understanding of the present disclosure. Further, details that may be readily apparent to a person ordinarily skilled in the art may not have been disclosed. In the present disclosure, relational terms such as first and second, and the like, may be used to distinguish one entity from another entity, without necessarily implying any actual relationship or order between such entities.

[0015] Embodiments of the present disclosure described herein provide a system and a method of presenting information on entities included in a video content, for example, a live video, a streaming video or a stored video, in accordance with one embodiment.

[0016] FIG. 1 is a block diagram of an environment 100 in accordance with which various embodiments can be implemented. The environment 100 includes electronic devices, for example, a digital television (DTV) 105a, a computer 105b and a mobile device 105c. The environment 100 also includes a user 110. The electronic devices are capable of constantly receiving and presenting a video content to the user 110. Examples of the video content include, but are not limited to, a live video, a streaming video and a stored video.

[0017] The video content includes multiple entities. Examples of the entities include, but are not limited to, individuals, monuments, objects, products and living beings, for example, animals, birds and trees. The multiple entities are displayed while the video content is being played on the electronic devices. The user 110, while watching the video content, may wish to acquire information on one or more entities. Hence, the user 110 activates an information fetching mode on the electronic device. The information fetching mode can be activated using one or more input devices, for example, but not limited to, a remote control, a keyboard, a television menu and a pointing device.

[0018] Upon the user 110 activating the information fetching mode, the one or more entities are tagged automatically. Each entity of the one or more entities is associated with a tag. The tag associated with each entity includes tag information. The tag information includes, but is not limited to, a name of the entity, location information associated with the entity and the like. Further, other details can also be included in the tag. The tag provided for each entity is thus displayed adjacent to the entity for viewing by the user 110. The tag information associated with each entity is stored in a tag information database.

[0019] Upon displaying the tag for each entity, the user 110 selects one or more entities. Selection is performed for obtaining information associated with each entity that is selected by the user 110. Selection can be performed using one or more input devices, for example, but not limited to, the remote control, the television menu, the keyboard and the pointing device. Further, upon selecting the entities, information associated with each of the entities is collected. The information can be collected from various sources. Examples of the sources include, but are not limited to, a cloud and a local storage device. Multiple servers, located locally or remotely, present in the cloud can be used for collecting information associated with each of the entities selected. Further, the information collected from the sources is aggregated using pre-defined rules. Furthermore, the aggregated information associated with each of the entities is thus presented to the user.

[0020] Multiple presentation modes can be used for presenting the aggregated information. The presentation modes are selected by the user 110. Examples of the presentation modes include, but are not limited to, an audio mode, a video mode and a textual mode. The aggregated information is presented, to the user 110, based on the presentation mode selected by the user 110.

[0021] In one example, the user 110 can be watching a cricket match broadcast in real time on the DTV 105a. The user 110 may wish to obtain information on, but not limited to, a player's details, playground information and the individuals umpiring the cricket match. Hence, the user 110 activates the information fetching mode on the DTV 105a. As a result of activating the information fetching mode, the player, the playground and the individuals umpiring the cricket match can be automatically tagged. In such a case, the user 110 selects the player, the playground and the individuals umpiring the cricket match for obtaining the information. Further, the information associated with the player, the playground and the individuals umpiring the cricket match is collected from multiple sources. The information collected from the multiple sources is further aggregated. The aggregated information is subsequently presented, to the user 110, in various presentation modes.

[0022] A system for presenting information associated with the one or more entities included in a video content or an image is explained in detail in conjunction with FIG. 2.

[0023] FIG. 2 is a block diagram of a system 200 for presenting information for entities displayed on a video content, in accordance with one embodiment. The system 200 includes a detection module 205, a tag managing module 210, an aggregator module 215 and a presentation module 220. The system components are communicably coupled to each other using a communication interface.

[0024] The detection module 205 is used to detect one or more entities present in, for example, a video content. Examples of the video content include, but are not limited to, a live video, a streaming video and a stored video. In another example, the one or more entities can also be present in an image or a digital graphic content. The video content is played on electronic devices. Examples of the electronic devices include, but are not limited to, a DTV, a mobile device, computers, laptops, personal digital assistants and hand-held devices. Examples of the entities include, but are not limited to, individuals, monuments, objects, products and living beings, for example, animals, birds and trees. The entities, in one example, can be globally known. The detection module 205 utilizes various detection algorithms for detecting the entities present in the video content or the recorded video. In one example, a face detection algorithm is used to detect the face of an individual included in the video content. In another example, various image processing algorithms are used to determine various entities included in the video content. The detection module 205 detects the entities in real time.
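
By way of illustration only, the following is a minimal sketch of the kind of real-time face detection the detection module 205 could perform on individual video frames. The use of Python, OpenCV and its bundled Haar cascade is an assumption of this sketch and is not prescribed by the specification.

```python
# Illustrative sketch only: real-time face detection on video frames,
# standing in for the detection module (205). OpenCV and its Haar cascade
# are assumptions; any face or object detection algorithm could be used.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_entities(frame):
    """Return bounding boxes of candidate entities (here, faces) in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(box) for box in boxes]

capture = cv2.VideoCapture("broadcast.mp4")   # hypothetical video source
ok, frame = capture.read()
if ok:
    detected_boxes = detect_entities(frame)   # boxes handed on for tagging
capture.release()
```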

[0025] The detection module 205 further includes a database, for example a detection database 240 for storing the entities detected.

[0026] The tag managing module 210 automatically provides a tag for each entity detected. The tag managing module 210 provides the tag for each entity automatically since the entity is globally known. The tag can include, for example, but not limited to, a name of the entity detected and a location associated with the entity detected. In one example, the tag managing module 210 provides the tag in real time upon detection of the entity. The tag managing module 210 obtains the tag for the entity from one or more information sources. Examples of the information sources include, but are not limited to, a local database, a remote storage device and the internet. The tag managing module 210 further includes a tag information database 245 for storing and maintaining the tag for each entity.
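
As a minimal sketch, and under assumed names such as Tag, local_db and remote_lookup that do not appear in the specification, the tag managing module 210 could build a tag from a local database first and fall back to a remote information source, storing the result in the tag information database 245:

```python
# Illustrative sketch only: building and storing a tag for a detected,
# globally known entity. The Tag structure and the lookup helpers are
# assumptions, not part of the original specification.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tag:
    entity_name: str          # e.g. "Taj Mahal"
    location: Optional[str]   # e.g. "Agra, India"

tag_information_db: dict[str, Tag] = {}   # stand-in for tag information database 245

def provide_tag(entity_id: str, local_db: dict, remote_lookup) -> Tag:
    """Obtain tag details from the local database, else from a remote source."""
    record = local_db.get(entity_id) or remote_lookup(entity_id)
    tag = Tag(entity_name=record["name"], location=record.get("location"))
    tag_information_db[entity_id] = tag    # persisted for display next to the entity
    return tag
```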

[0027] The aggregator module 215 is used for aggregating information associated with each entity detected by the detection module 205. Prior to performing aggregation, the aggregator module 215 obtains the information associated with each entity from the information sources. Further, the aggregator module 215 includes a database, for example, the aggregator database 250 for storing aggregated information associated with each entity.
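
A minimal sketch of such aggregation is given below; the per-field "keep the first value found" rule is only one possible pre-defined rule and is assumed here, as is the fetch() interface of each information source.

```python
# Illustrative sketch only: aggregating information about a selected entity
# from several sources (e.g. local storage and cloud servers). The rule used
# here (first value found per field wins) is an assumed pre-defined rule.
def aggregate(entity_id: str, sources: list) -> dict:
    aggregated: dict[str, str] = {}
    for source in sources:                        # each source exposes fetch(entity_id)
        info = source.fetch(entity_id)            # returns a field -> value mapping
        for field, value in info.items():
            aggregated.setdefault(field, value)   # keep the first value seen per field
    return aggregated                             # stored in the aggregator database 250
```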

[0028] In one example, Dr Abdul Kalam present in a video content gets automatically tagged since Dr Abdul Kalam is a globally known individual. Further, a user, for example, the user 110 wishes to obtain information associated with Dr Abdul Kalam while watching the video content. Hence the user can select Dr Abdul Kalam. Upon selecting, information associated with Dr Abdul Kalam is collected from the information sources. In one example, information associated with Dr Abdul Kalam can include, but is not limited to, full name of Dr Abdul Kalam, date of birth of Dr Abdul Kalam, designation of Dr Abdul Kalam and other information associated with Dr Abdul Kalam.

[0029] The presentation module 220 is configured to present the aggregated information associated with each entity. The presentation module 220 is further configured to present the aggregated information in various presentation modes. Examples of the presentation modes include, but are not limited to, an audio mode, a video mode and a textual mode.

[0030] The presentation module 220 further includes an audio managing module 225 configured to present the information in an audio format, a text managing module 230 configured to present the information in a textual format and a video managing module 235 configured to present the information in a video format. The user can select the audio format, the textual format, the video format or a combination of three formats for obtaining the information presented by the presentation module 220.

[0031] The audio managing module 225 obtains, from the aggregator database 250, the information associated with each entity. One or more communication interfaces can be used for obtaining the information.

[0032] Further, a karaoke option can also be enabled for presenting the information that is included in the form of text. The text managing module 230 supports the karaoke option.

[0033] Further, the audio managing module 225 processes the information associated with the entity. In one example, the information associated with the entity may be in the form of text. The audio managing module 225 processes the information present in the form of text. Processing includes translating the information associated with the entity into an audio format.

[0034] In some embodiments, upon selecting the audio mode, the user can select a language. Upon selecting the language, the audio managing module 225 plays the information associated with the entity in the audio format. The audio format includes the information. Further, the information present in the audio format can be played in one or more languages as desired by the user. In one example, a multi-language mode can be selected for playing the information present in the audio format in the one or more languages as desired by the user. In some embodiments, one or more web links of various audio clips associated with the entity can also be provided to the user. One or more natural language processing (NLP) algorithms can be used for providing the information in any language selected by the user.
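
The following minimal sketch illustrates one way the audio mode could render the aggregated text in the user's chosen language; translate_text() and synthesize_speech() are hypothetical placeholders for whatever NLP and text-to-speech services a real implementation would use.

```python
# Illustrative sketch only: converting aggregated textual information into
# speech in the language selected by the user. translate_text() and
# synthesize_speech() are hypothetical stand-ins for real NLP/TTS services.
def present_audio(aggregated_info: dict, language: str,
                  translate_text, synthesize_speech) -> bytes:
    text = ". ".join(f"{field}: {value}" for field, value in aggregated_info.items())
    localized = translate_text(text, target_language=language)   # NLP translation step
    return synthesize_speech(localized)   # audio bytes to be played on the output device
```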

[0035] The text managing module 230 obtains, from the aggregator database 250, the information associated with each entity. The communication interfaces can be used for obtaining the information. Further, the text managing module 230 processes the information associated with the entity. Processing includes translating the information associated with the entity into a text format. Upon translating, the text managing module 230 displays the information associated with the entity in the form of text. In one example, the text can be presented in the form of pamphlets and the like. Further, the text managing module 230 presents a brief description of the significant details of the entity in the text format.

[0036] In some embodiments, the text managing module 230 can present the text in one or more languages as desired by the user.

[0037] The video managing module 235 obtains, from the aggregator database 250, the information associated with each entity. The communication interfaces can be used for obtaining the information. Further, the video managing module 235 processes the information associated with the entity. Processing includes translating the information associated with the entity into a video format. Upon translating, the video managing module 235 displays the information associated with the entity in the video format. In some embodiments, the video managing module 235 is operable to provide one or more web links of various video clips associated with the entity to the user. The video managing module 235 further enables the user to select one link at a time. Further, upon selection, the video managing module 235 is operable to play the selected video clip using, for example, a picture-in-picture (PIP) technology.

[0038] In one embodiment, the system 200 can be embedded within, for example, but not limited to, display devices, such as, DTV, laptops, computers, mobile devices, PDA and the like.

[0039] In some embodiments, the system is operable to store multiple entities, selected by the user, that are auto-tagged, for providing information associated with the multiple entities. Further, the system is operable to store each of the multiple entities, selected by the user, in a database. Further, the system is configured to provide a sequence number to each of the multiple entities. Subsequently, based on the sequence number, the system is operable to obtain information associated with each entity and further present the information, to the user, in different presentation modes as selected by the user.
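
A minimal sketch of this queuing behaviour is shown below, assuming an in-memory queue; the names used here are illustrative only.

```python
# Illustrative sketch only: selected, auto-tagged entities are stored with
# sequence numbers and later processed strictly in selection order.
from collections import deque

selection_queue: deque[tuple[int, str]] = deque()
_next_sequence = 0

def enqueue_selection(entity_id: str) -> int:
    """Store a selected entity together with its sequence number."""
    global _next_sequence
    _next_sequence += 1
    selection_queue.append((_next_sequence, entity_id))
    return _next_sequence

def process_selections(fetch_info, present) -> None:
    """Fetch and present information for each queued entity, in sequence order."""
    while selection_queue:
        sequence, entity_id = selection_queue.popleft()
        present(sequence, fetch_info(entity_id))
```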

[0040] A method of presenting information associated with the one or more entities included in a video content or a recorded video is explained in detail in conjunction with FIG. 3.

[0041] FIG. 3 is a flowchart illustrating a method of presenting information for entities displayed on a video content, in accordance with one embodiment.

[0042] The method starts at step 305. At step 310, an input is received from a user, for example, the user 110. The input can include the user activating an information fetching mode while watching a video content. Examples of the video content include, but are not limited to, a live video, a streaming video and a stored video. The live video includes video content that is broadcast directly. The streaming video includes multiple video contents obtained from, for example, but not limited to, a cloud server. Examples of the streaming video include, but are not limited to, YouTube videos and the like. The stored video can include a video content or an audio content that is played from a local storage device, for example, a compact disk, a hard disk and the like. Activation of the information fetching mode enables the user to acquire information associated with multiple entities included in the video content. Upon activating the information fetching mode, the entities are detected automatically in real time. A detection module, for example, the detection module 205, can be used to detect the entities. Further, the entities detected are automatically tagged since the information fetching mode is activated by the user. In one example, famous entities included in the video content are auto-tagged in response to the user activating the information fetching mode. Each of the entities that are tagged is associated with tag information. The tag information, in one example, can include a name of the entity, a location associated with the entity, or a brief description of the entity. The detected entities are tagged automatically by a tag managing module, for example, the tag managing module 210. Further, the tag information associated with each entity is stored in a tag information database, for example, the tag information database 245.
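
As a minimal sketch of this step, activation of the information fetching mode could trigger detection and automatic tagging of the entities in the current frame; detect, identify and provide_tag below are hypothetical helpers standing in for the detection module 205 and the tag managing module 210.

```python
# Illustrative sketch only: step 310 ties detection and automatic tagging
# together once the information fetching mode is activated. The helper
# callables are hypothetical stand-ins for the modules described above.
def on_information_fetching_mode(frame, detect, identify, provide_tag) -> list:
    tags = []
    for region in detect(frame):             # detection module: bounding boxes per frame
        entity_id = identify(region)          # hypothetical recognition of the entity
        tags.append(provide_tag(entity_id))   # tag manager attaches name/location
    return tags                               # each tag is displayed adjacent to its entity
```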

[0043] In one embodiment, the input also includes one or more entities, of the multiple entities present in the video content, selected by the user. The one or more entities are selected for obtaining information associated with each of the entities selected by the user. The user can select the entities using one or more input devices. Examples of the input devices include, but are not limited to, remote, television menu, pointing devices and the like. Upon selecting, the one or more entities are highlighted.

[0044] At step 315, the information associated with the entities selected by the user is obtained. The information is obtained from multiple information sources. Examples of the information sources include, but are not limited to, a local storage device, a remote storage device and the internet. The information can be obtained in a textual format or an image format. Further, one or more links to audio clips associated with the selected entity and one or more links to video clips associated with the selected entity are also obtained in the textual format.

[0045] At step 320, the information of the entities, selected by the user, that is obtained from the multiple information sources is aggregated. Various aggregation algorithms can be used to perform the aggregation. One or more pre-defined rules can be used to perform the aggregation. An aggregator module, for example, the aggregator module 215, is used to obtain and aggregate the information associated with the entities selected by the user.

[0046] At step 325, multiple presentation modes are provided to the user. Examples of the presentation modes include, but are not limited to, an audio mode, a video mode and a textual mode. The presentation modes are used to present the information associated with the entities selected by the user in various data formats. Examples of the data formats include, but are not limited to, an audio format, a video format and a textual format. The user can select a single presentation mode or a combination of presentation modes. The input devices can be used for selecting the presentation modes.
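
A minimal sketch of dispatching the aggregated information to the selected presentation mode (or modes) is given below; the handler functions are placeholders for the audio, text and video managing modules described with reference to FIG. 2.

```python
# Illustrative sketch only: the aggregated information is handed to the
# handler of every presentation mode the user selected. The handler mapping
# is an assumption standing in for modules 225, 230 and 235.
from typing import Callable

def present(aggregated_info: dict, selected_modes: list[str],
            handlers: dict[str, Callable[[dict], None]]) -> None:
    for mode in selected_modes:              # e.g. ["textual", "audio"]
        handler = handlers.get(mode)
        if handler is None:
            raise ValueError(f"unsupported presentation mode: {mode}")
        handler(aggregated_info)             # audio / text / video managing module
```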

[0047] In one example, if the user selects the textual mode, then the information associated with the entities is presented to the user in a textual format. Simultaneously, the user can also select the audio mode. Upon selecting the audio mode, the information presented in the textual format is converted into the audio format.

[0048] At step 330, the information associated with the entities selected by the user is presented to the user. The information is presented, to the user, based on the presentation mode selected by the user. In one example, if the audio mode is selected by the user, then information, such as, but not limited to, one or more audio clippings associated with a selected entity is collected from the information sources. The information collected from the information sources is aggregated. Further, the aggregated information, associated with the selected entity, is converted into the form of audio. Subsequently, the audio is played, to the user, using one or more audio output devices, for example, headphones, an external speaker and the like. An audio managing module, for example, the audio managing module 225, is used to convert and present the information, to the user, in the form of the audio.

[0049] In some embodiments, upon selecting the audio mode, the user can also select a language. Upon selecting the language, the aggregated information can be played in the language selected by the user. In some embodiments, one or more web links of various audio clips associated with the entity can also be provided to the user. One or more natural language processing (NLP) algorithms can be used for presenting the information in any language selected by the user. One or more audio output devices included in the display device are used for playing the information in the audio format.

[0050] Further, a karaoke option can also be enabled for presenting the aggregated information that is included in the form of text. A text managing module, for example, the text managing module 230 supports the karaoke option.

[0051] In another example, if the video mode is selected by the user, then information, such as, but not limited to, one or more video clippings associated with the selected entity is collected from the information sources. The information collected from the information sources is aggregated. Further, the aggregated information, associated with the selected entity, is converted into the form of a video. Subsequently, the video is played, to the user, using one or more media players. In some embodiments, one or more web links of various video clips associated with the entity can also be provided to the user. The user can select one link at a time. Further, upon selection, the selected video clip is played using, for example, a picture-in-picture (PIP) technology. A video managing module, for example, the video managing module 235, is used to convert and present the information, to the user, in the form of the video.

[0052] In another example, if the textual mode is selected by the user, then information associated with the selected entity is collected from the information sources. The information collected from the information sources is aggregated. Further, the aggregated information, associated with the selected entity, is presented, to the user, in the form of text. In one example, the text can be presented in the form of pamphlets and the like. Further, the text managing module 230 presents a brief description of the significant details of the entity in the text format.

[0053] In some embodiments, the text managing module 230 can present the text in one or more languages as desired by the user. A text managing module, for example, the text managing module 230 is used to present the information associated with the selected entity in the form of text.

[0054] In some embodiments, the user can select multiple entities that are tagged for obtaining the information associated with the multiple entities. Each of the multiple entities selected by the user is stored in a queue in a database. In such a case, each of the multiple entities is provided with a sequence number. Subsequently, based on the sequence number, the information associated with each entity is obtained from the queue and further presented, to the user, in different presentation modes as selected by the user.

[0055] FIG. 4 is an exemplary illustration of presenting information for entities displayed on a video content, in accordance with one embodiment. FIG. 4 includes, in one example, a DTV 405. A recorded video stream is played by the DTV 405.

[0056] A user, for example, the user 110, activates an information fetching mode while watching the video content played by the DTV 405. Input devices, for example, but not limited to, a remote control, a DTV menu and a pointing device, can be used for activating the information fetching mode. Upon activating the information fetching mode, one or more entities are highlighted and, further, tags are automatically displayed for the entities included in the video content. Hence, a tag 410 named 'Taj Mahal' is displayed adjacent to the entity Taj Mahal on the DTV 405. Further, a tag 415 named 'Abdul Kalam' is displayed adjacent to the entity Abdul Kalam on the DTV 405. Furthermore, a tag 420 named 'Ostrich' is displayed adjacent to the entity Ostrich on the DTV 405.

[0057] In one example, the user can select the entity 'Abdul Kalam' using one or more input devices. Examples of the input devices include, but are not limited to, remote, television menu, pointing devices and the like. Selecting the entity 'Abdul Kalam' enables the user to acquire information associated with the entity 'Abdul Kalam' included in the video content. Information, for example, but not limited to, date and place of birth of Abdul Kalam, profession associated with Abdul Kalam, past professional details associated with Abdul Kalam and current designation of Abdul Kalam can be presented.

[0058] Further, the user can also select one or more presentation modes for acquiring the information. Examples of the presentation modes include, but are not limited to, an audio mode, a video mode and a textual mode. In one example, if the user selects the audio mode, then the information associated with Abdul Kalam is presented, to the user, in the form of audio that can be played using one or more audio output devices, for example, headphones, an external speaker and the like, associated with the display device. In another example, if the user selects the video mode, then the information associated with Abdul Kalam is presented, to the user, in the form of a video that can be played using the media players. In some embodiments, one or more web links of various video clips associated with the entity can also be provided to the user. The user can select one link at a time. Further, upon selection, the selected video clip is played using, for example, a picture-in-picture (PIP) technology. In yet another example, if the user selects the textual mode, then the information associated with Abdul Kalam is presented, to the user, in the form of text, for example, but not limited to, in a pamphlet form. In one example, significant information, for example, the name, full name, birth date, birth place, education, profession and the like, associated with Abdul Kalam can be displayed similar to a form. Similarly, the user can select other entities displayed on the DTV 405 for obtaining information associated with the other entities.

[0059] Advantageously, the present disclosure enables a user to obtain information associated with one or more entities while watching a video content. By presenting the information, the user can gain knowledge associated with the one or more entities while watching the video content. Further, the audio mode enables the user to obtain information associated with an entity while simultaneously viewing the video without being disturbed. For example, the present invention further enables one or more educational institutions to educate users, enables users to follow the current political situation in parliaments, and can be applied to sports and various other fields.

[0060] In the preceding specification, the present disclosure and its advantages have been described with reference to specific embodiments. However, it will be apparent to a person of ordinary skill in the art that various modifications and changes can be made, without departing from the scope of the present disclosure, as set forth in the claims below. Accordingly, the specification and figures are to be regarded as illustrative examples of the present disclosure, rather than in a restrictive sense. All such possible modifications are intended to be included within the scope of the present disclosure.

I/We claim:

1 A system for presenting information for entities displayed on a video content, the system comprising:

a detection module for identifying a plurality of entities, the plurality of entities being displayed on the video content;

a tag managing module for providing a tag associated with each of the plurality of entities, the tag comprising name of the entity;

an aggregator module for aggregating information associated with an entity wherein the entity is being selected by a user, the aggregation being performed upon obtaining the information from a plurality of information sources; and

a presentation module for presenting the information associated with each of the plurality of entities in a plurality of presentation modes.

2 The system as claimed in claim 1, wherein the video content comprises at least one of a live video, a streaming video and a stored video.

3 The system as claimed in claim 1, wherein the detection module further comprises a detection database for storing the plurality of entities.

4 The system as claimed in claim 1, wherein the aggregator module further comprises an aggregator database for storing the information associated with each of the plurality of entities.

5 The system as claimed in claim 1, wherein the tag managing module is further configured to collect, the information associated with each of the plurality of entities, from a tag information database.

6 The system as claimed in claim 1, wherein the presentation module further comprising:

an audio managing module for presenting the information associated with each of the plurality of entities in an audio format;

a text managing module for presenting the information associated with each of the plurality of entities in a textual format; and

a video managing module for presenting the information associated with each of the plurality of entities in a video format.

7 The system as claimed in claim 6, wherein the audio managing module is configured to:

obtain, from the aggregator database, the information associated with each of the plurality of entities;

process the information associated with each of the plurality of entities, the processing comprises translating, the information associated with each of the plurality of entities, into the audio format; and

play, the information associated with each of the plurality of entities, in the audio format.

8 A method of presenting information for entities displayed on a video content, the method comprising:

receiving an input from a user, the input comprising activation of an information fetching mode by the user;

obtaining information associated with an entity selected by the user, the information being obtained from a plurality of information sources;

aggregating the information associated with the entity;

providing a plurality of presentation modes to the user, the plurality of presentation modes being used to present the information associated with the entity in a plurality of data formats; and

presenting, the information associated with the entity, based on one of the plurality of presentation modes selected by the user.

9 The method as claimed in claim 8, wherein the video content comprises at least one of a live video, a streaming video and a stored video.

10 The method as claimed in claim 8 and further comprising:

storing the one or more entities in a detection database;

storing the information associated with the one or more entities in an aggregator database; and

storing a tag associated with the one or more entities in a tag information database, the tag comprising name of the entity.

11 The method as claimed in claim 8, wherein the plurality of presentation modes comprises at least one of a text mode, an audio mode and a video mode.

12 The method as claimed in claim 11, wherein the text mode enables presenting, the information associated with the one or more entities, in a text format.

13 The method as claimed in claim 11, wherein the audio mode enables presenting, the information associated with the one or more entities, in an audio format.

14 The method as claimed in claim 11, wherein the video mode enables presenting, the information associated with the one or more entities, in a video format.

Documents

Application Documents

# Name Date
1 1755-CHE-2012 POWER OF ATTORNEY 07-05-2012.pdf 2012-05-07
2 1755-CHE-2012-Response to office action [19-08-2022(online)].pdf 2022-08-19
3 1755-CHE-2012 FORM-5 07-05-2012.pdf 2012-05-07
4 1755-CHE-2012-US(14)-HearingNotice-(HearingDate-09-06-2021).pdf 2021-10-03
5 1755-CHE-2012-CORRECTED PAGES [24-06-2021(online)].pdf 2021-06-24
6 1755-CHE-2012 FORM-3 07-05-2012.pdf 2012-05-07
7 1755-CHE-2012-MARKED COPY [24-06-2021(online)].pdf 2021-06-24
8 1755-CHE-2012 FORM-2 07-05-2012.pdf 2012-05-07
9 1755-CHE-2012-Written submissions and relevant documents [24-06-2021(online)].pdf 2021-06-24
10 1755-CHE-2012 FORM-1 07-05-2012.pdf 2012-05-07
11 1755-CHE-2012-PETITION UNDER RULE 137 [23-06-2021(online)].pdf 2021-06-23
12 1755-CHE-2012 DRAWINGS 07-05-2012.pdf 2012-05-07
13 1755-CHE-2012-Correspondence to notify the Controller [02-06-2021(online)].pdf 2021-06-02
14 1755-CHE-2012 DESCRIPTION (COMPLETE) 07-05-2012.pdf 2012-05-07
15 1755-CHE-2012-Response to office action [29-07-2020(online)].pdf 2020-07-29
16 1755-CHE-2012 CORRESPONDENCE OTHERS 07-05-2012.pdf 2012-05-07
17 1755-CHE-2012 CLAIMS 07-05-2012.pdf 2012-05-07
18 1755-CHE-2012-ABSTRACT [23-12-2019(online)].pdf 2019-12-23
19 1755-CHE-2012 ABSTRACT 07-05-2012.pdf 2012-05-07
20 1755-CHE-2012-CLAIMS [23-12-2019(online)].pdf 2019-12-23
21 1755-CHE-2012 FORM-1 02-07-2012.pdf 2012-07-02
22 1755-CHE-2012-COMPLETE SPECIFICATION [23-12-2019(online)].pdf 2019-12-23
23 1755-CHE-2012 CORRESPONDENCE OTHERS 02-07-2012.pdf 2012-07-02
24 1755-CHE-2012-DRAWING [23-12-2019(online)].pdf 2019-12-23
25 1755-CHE-2012 CORRESPONDENCE OTHERS 01-04-2013.pdf 2013-04-01
26 1755-CHE-2012-FER_SER_REPLY [23-12-2019(online)].pdf 2019-12-23
27 1755-CHE-2012 FORM-13 01-04-2013.pdf 2013-04-01
28 1755-CHE-2012-FORM 3 [23-12-2019(online)].pdf 2019-12-23
29 1755-CHE-2012 FORM-18 25-04-2013.pdf 2013-04-25
30 1755-CHE-2012-Information under section 8(2) (MANDATORY) [23-12-2019(online)].pdf 2019-12-23
31 1755-CHE-2012 FORM-13 15-07-2015.pdf 2015-07-15
32 1755-CHE-2012-OTHERS [23-12-2019(online)].pdf 2019-12-23
33 1755-CHE-2012-FORM 13 [21-12-2019(online)].pdf 2019-12-21
34 1755-CHE-2012 FORM-13 15-07-2015.pdf 2015-07-15
35 1755-CHE-2012-FORM-26 [21-12-2019(online)].pdf 2019-12-21
36 Form 13_Address for service.pdf 2015-07-17
37 1755-CHE-2012-RELEVANT DOCUMENTS [21-12-2019(online)].pdf 2019-12-21
38 Amended Form 1.pdf 2015-07-17
39 1755-CHE-2012-FER.pdf 2019-07-04
40 Form 3 [08-07-2016(online)].pdf 2016-07-08
41 1755-CHE-2012-Changing Name-Nationality-Address For Service [19-02-2018(online)].pdf 2018-02-19
42 1755-CHE-2012-FORM-26 [27-11-2017(online)].pdf 2017-11-27
43 1755-CHE-2012-RELEVANT DOCUMENTS [19-02-2018(online)].pdf 2018-02-19

Search Strategy

1 searchstrategy_04-07-2019.pdf