Abstract: The present invention provides a contextual media navigation system (100) for navigating a linear media (204) displayed on an electronic device (102) to assist a user (104) in rapidly and effectively skimming through the linear media (204) by displaying a synchronized contextual representation corresponding to the linear media (204) in real-time. The synchronized contextual representation is customized as per user-defined formats and user-defined representation types, thereby facilitating ease of access and navigation for the user (104). Reference Figure: FIG. 1A
FIELD OF INVENTION
[0001] The present invention relates generally to systems and methodologies for displaying and processing digital media, and particularly to media navigation.
BACKGROUND
[0002] With the ease of access to media platforms, streaming services, and file-sharing services, consumption of digital media such as videos and documents has increased. Both live and pre-recorded videos are available online and can be accessed on-demand by users. While watching a video, a user often desires to navigate to the parts of the video that are of most interest. However, the user has to rely on memory or intuition to navigate to those parts. Existing media platforms are optimized to transmit videos at low bandwidths to conserve data. Even when the videos include key frames for the reference of the user, such key frames do not convey any usable contextual information to the user for easier navigation.
[0003] In one of the conventional techniques discussed in US 7,917,839 B2 by the present Applicant, a user is provided a choice to select an interaction model from a pre-built library of interaction models. However, the technique does not provide for customized and/or contextual interaction models as per the user's preferences.
[0004] Therefore, there is a need for a contextual media navigation technique for customized and easier navigation of digital media.
SUMMARY
[0005] This summary is provided to introduce concepts related to a method of navigating linear media and a contextual media navigation system thereof. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[0006] In an embodiment of the present invention, a method of navigating linear media displayed on an electronic device is provided. The method assists a user in skimming through the linear media rapidly and effectively. The method includes receiving, by a context creator, one or more scanning preferences and representation preferences from the user of the electronic device. The method includes receiving, by the context creator, a current media position of the displayed linear media from a linear media player with navigation. The method includes obtaining, by the context creator, an extracted content corresponding to the linear media from a linear media input extraction service. The method includes obtaining, by the context creator, a plurality of models of representations from a representation creation service based on the extracted content in real-time. The models of representations can be defined by the user. The method includes selecting, by a context representation manager, a model of representation from the plurality of models of representations based on the representation preferences in real-time. The method includes providing, by a context player, a concise real-time representation corresponding to a user-preferred window of the displayed linear media around the current media position based on the selected model of representation and the scanning and representation preferences of the user. The format of the representation can be defined by the user. A current position of the representation is synchronized with the current media position of the linear media dynamically, thereby assisting the user in skimming through the linear media in real-time.
[0007] In an embodiment of the present invention, a contextual media navigation system for navigating a linear media displayed on an electronic device is provided. The contextual media navigation system assists a user in skimming through the linear media rapidly and effectively. The contextual media navigation system includes a context creator, a context representation manager, and a context player. The context creator receives one or more scanning preferences and representation preferences from the user of the electronic device. The context creator receives a current media position of the displayed linear media from a linear media player with navigation. The context creator obtains an extracted content corresponding to the linear media from a linear media input extraction service. The context creator obtains a plurality of models of representations from a representation creation service based on the extracted content in real-time. The models of representations can be defined by the user. The context representation manager selects a model of representation from the plurality of models of representations based on the representation preferences in real-time. The context player provides a concise real-time representation corresponding to a user-preferred window of the displayed linear media around the current media position based on the selected model of representation and the scanning and representation preferences of the user. The format of the representation can be defined by the user. A current position of the representation is synchronized with the current media position of the linear media dynamically, thereby assisting the user in skimming through the linear media in real-time.
[0008] In an embodiment, the context creator includes a scanning preferences editor. The scanning preferences editor updates a context scanning rule set based on the received scanning preferences.
[0009] In an embodiment, the context creator includes a context scanning decision maker. The context scanning decision maker receives a current media position of the displayed linear media from the linear media player with navigation. The context scanning decision maker determines one or more bookend scan values based on the scanning preferences and the updated context scanning rule set.
[0010] In an embodiment, the context creator includes an extraction manager. The extraction manager obtains the extracted content corresponding to the linear media from a linear media input extraction service based on the aforesaid bookend scan values. The extraction manager stores the obtained extracted content in a linear media contextual input store.
[0011] In an embodiment, the context scanning decision maker receives the representation preferences from a representation preference manager. The context scanning decision maker determines a type of representation based on the received representation preferences.
[0012] In an embodiment, the extraction manager obtains a representation generated based on the extracted content and the determined type of representation.
[0013] In an embodiment, the context representation manager includes a context representation time stamp generator. The context representation time stamp generator generates a plurality of time stamps for the obtained representation.
[0014] In an embodiment, the extraction manager generates metadata for the extracted content. The extraction manager stores the metadata corresponding to the extracted content in the contextual input store.
[0015] In an embodiment, the scanning preferences and the representation preferences for the contextual representation can be changed by the user in real-time.
[0016] In an embodiment, the bookend scan values are indicative of limits of the linear media. The representation is generated for the linear media within the bookend scan values.
[0017] In an embodiment, the linear media and the representation are displayed simultaneously to the user on a display of the electronic device.
[0018] In an embodiment, examples of the representation include, but are not limited to, a WordCloud, a TagCloud, an infographic, a summary, images, and a game.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0019] The detailed description is described with reference to the accompanying figures.
[0020] FIG. 1A illustrates a contextual media navigation system in accordance with an embodiment of the present invention;
[0021] FIG. 1B illustrates a contextual navigation display in accordance with an embodiment of the present invention;
[0022] FIG. 2 is a schematic block diagram of components of an electronic device in accordance with an embodiment of the present invention;
[0023] FIG. 3 is a schematic block diagram of a contextual media navigation enabler of the electronic device of FIG. 2 in accordance with an embodiment of the present invention;
[0024] FIG. 4 illustrates an exemplary format of a context scanning rule set structure in accordance with an embodiment of the present invention;
[0025] FIG. 5 illustrates an exemplary format of storing a linear media contextual input in accordance with an embodiment of the present invention;
[0026] FIG. 6 illustrates an exemplary format for storing time-stamped context representation data in accordance with an embodiment of the present invention;
[0027] FIG. 7 illustrates an exemplary format for storing types of representations supported by a contextual media navigation enabler in accordance with an embodiment of the present invention;
[0028] FIG. 8 shows a flow chart illustrating a method of scanning a linear media rapidly using a contextual media navigation enabler in accordance with an embodiment of the present invention;
[0029] FIG. 9 illustrates a use case of a contextual media navigation system in accordance with an embodiment of the present invention; and
[0030] FIG. 10 shows a flow chart illustrating a method of navigating linear media in accordance with an embodiment of the present invention.
[0031] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[0032] The various embodiments of the present invention provide a method of navigating linear media displayed on an electronic device and a contextual media navigation system for navigating a linear media displayed on an electronic device.
[0033] In the following description, for purposes of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details.
[0034] One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[0035] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are simplified so as to avoid obscuring the present invention.
[0036] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions.
[0037] Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0038] The present invention provides a contextual media navigation system for navigating a linear media displayed on an electronic device. The contextual media navigation system assists the user in rapidly and effectively skimming through the linear media by way of a synchronized contextual representation in real-time. The synchronized contextual representation is customized as per user-defined formats and user-defined representation types.
[0039] Referring now to FIG. 1A, a contextual media navigation system (100) is shown in accordance with an embodiment of the present invention. The contextual media navigation system (100) includes an electronic device (102) associated with a user (104). The electronic device (102) is connected to a plurality of servers (also referred to as a “server (106)”) by way of a communication network (108). The electronic device (102) includes a processor (110), a memory (112), an Input/Output (I/O) unit (114), a display (116), and a network (n/w) communication unit (118).
[0040] Examples of the processor (110) include, but are not limited to, Field Programmable Gate Arrays (FPGAs), Central Processing Units (CPUs), and Application-Specific Integrated Circuits (ASICs). Examples of the memory (112) include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), a fixed memory device (such as a hard disk drive), a removable memory device (such as a floppy disk), flash memory, etc. In an example, the memory (112) stores computer-readable and executable instructions which, when executed by the processor (110), cause the processor (110) to perform the method of contextual media navigation in real-time.
[0041] The I/O unit (114) includes input/output peripherals, such as a keyboard, a mouse, and speakers, coupled to the electronic device (102). The display (116) may include touch-screen displays, LED/LCD displays, etc. The n/w communication unit (118) facilitates communication of the electronic device (102) with the server (106) by way of wired or wireless communication networks. Examples of the communication network (108) include, but are not limited to, WiFi, optical fiber, Long-Term Evolution (LTE), LTE-A, WiMax, etc.
[0042] In an example, the server (106) may be a video streaming server that provides real-time video to the electronic device (102). In another example, the server (106) may be a File Transfer Protocol (FTP) server that provides files and/or documents to the electronic device (102). In yet another example, the server (106) may be a media server that provides various types of live and pre-recorded media to the electronic device (102).
[0043] Referring now to FIG. 1B, a contextual navigation display is shown in accordance with an embodiment of the present invention. The electronic device (102) includes a linear media player with navigation (120) and a contextual media navigation enabler (200). The contextual media navigation enabler (200) and the linear media player with navigation (120) together facilitate rapidly scanning the linear media. In an example, the contextual media navigation enabler (200) and the linear media player with navigation (120) are implemented by the processor (110) within an application running on the electronic device (102).
[0044] Referring now to FIG. 2, components of the electronic device (102) are shown in accordance with an embodiment of the present invention. The contextual media navigation enabler (200) includes a context creator (210), a context representation manager (230), and a context player (240). The contextual media navigation enabler (200) is in communication with a linear media input extraction service (250), the linear media player with navigation (120), and a representation creation service (254). The linear media player with navigation (120) includes a linear media navigation control Application Programming Interface (API) (258).
[0045] The contextual media navigation enabler (200) receives preferences from the user (104). The preferences include scanning preferences and display preferences. The contextual media navigation enabler (200) receives a current media position (202) from the linear media player with navigation (120). The contextual media navigation enabler (200) generates a contextual representation based on the scanning preferences and display preferences of the user (104).
[0046] The contextual media navigation enabler (200) enables the user (104) to scan a linear media including, but not limited to, videos, documents, audio, web pages, and articles using various types of representations including, but not limited to, word clouds, infographics, and summaries. This allows the user (104) to scan and navigate the linear media (204) rapidly and helps him/her focus on the segments of the linear media (204) that matter the most.
[0047] The user (104) experiences the linear media (204) using one or more navigation controls of the linear media player with navigation (120) or a contextual representation. When the position of the linear media (204) changes, the current media position (202) is updated. The context creator (210) accepts the current media position (202). Using the current media position (202) and the linear media (204), the context creator (210) invokes the linear media input extraction service (250) to generate and provide an extracted content (252). The context creator (210) converts the extracted content (252) to a linear media contextual input (224). The linear media contextual input (224) and a representation preference (206) received from the user (104) are used to generate various types of representations using the representation creation service (254). The context representation manager (230) receives a context representation (238) from the representation creation service (254). The context representation manager (230) appropriately selects the data to be provided to the context player (240) for display. The context player (240) displays the representation, i.e., the contextual representation. The user (104) may navigate the representation using navigation controls of the context player (240). When the user (104) navigates the contextual representation, a new media position (256) is generated for the linear media (204). The new media position (256) is provided as input to the linear media player with navigation (120) by way of, but not limited to, APIs or other methods of controlling the linear media player with navigation (120). This enables the user (104) to rapidly scan the linear media (204) for relevant information he/she may be looking for using the contextual media navigation enabler (200).
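As an illustration only, the data flow described above may be sketched in TypeScript as follows; all interfaces, names, and parameters here are hypothetical, since the specification does not prescribe any particular implementation:

```typescript
// Hypothetical types mirroring the data flow of FIG. 2; names are illustrative only.
interface BookendScanValues { start: number; end: number; } // limits within the linear media (204)
interface ExtractedContent { text: string; bookends: BookendScanValues; } // extracted content (252)
interface ContextRepresentation { kind: string; payload: unknown; }       // context representation (238)

// Linear media input extraction service (250): yields extracted content for a media span.
type ExtractionService =
  (mediaUrl: string, bookends: BookendScanValues) => Promise<ExtractedContent>;

// Representation creation service (254): builds a representation from extracted content.
type RepresentationService =
  (input: ExtractedContent, preference: string) => Promise<ContextRepresentation>;

// Sketch of the context creator's (210) reaction to a change in the current
// media position (202): extract content around the position, then build a
// representation according to the user's representation preference (206).
async function onMediaPositionChanged(
  mediaUrl: string,
  currentPosition: number,
  windowSeconds: number,   // user-preferred window from the scanning preferences
  preference: string,      // representation preference (206), e.g. "wordcloud"
  extract: ExtractionService,
  represent: RepresentationService,
): Promise<ContextRepresentation> {
  const bookends: BookendScanValues = {
    start: Math.max(0, currentPosition - windowSeconds / 2),
    end: currentPosition + windowSeconds / 2,
  };
  const content = await extract(mediaUrl, bookends);
  return represent(content, preference);
}
```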
[0048] Referring now to FIG. 3, the detailed diagram of the contextual media navigation enabler (200) is shown in accordance with an embodiment of the present invention.
[0049] The context creator (210) includes an extraction manager (216), a context scanning decision maker (212), a scanning preferences editor (220), and a context scanning rule set (222).
[0050] The context representation manager (230) includes a context representation time stamp generator (232), a representation preference manager (236), and a database of types of representation (234).
[0051] The context player (240) includes a context representation display controller (242) and a context representation display with navigation (244).
[0052] The user (104) experiences the linear media (204) using the linear media player with navigation (120). As the user (104) navigates through the linear media (204), the linear media player with navigation (120) shares the current media position (202) with the context scanning decision maker (212). The context scanning decision maker (212) determines one or more bookends of the linear media (204) to be scanned using the context scanning rule set (222). The user (104) may set the preference for scanning using the scanning preferences editor (220). The scanning preferences are stored in the context scanning rule set (222) in a format as specified in FIG. 4.
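FIG. 4 specifies the actual stored format of the context scanning rule set (222); purely as an illustrative sketch with hypothetical field names, the rule set and the bookend determination of the context scanning decision maker (212) might resemble:

```typescript
// Hypothetical rule-set entry; the actual stored format is specified in FIG. 4.
interface ScanningRule {
  mediaType: "video" | "document" | "audio";
  windowBefore: number; // units of media before the current position to scan
  windowAfter: number;  // units of media after the current position to scan
}

// Determine the bookend scan values (214) for the current media position (202),
// as the context scanning decision maker (212) might, clamped to the media limits.
function determineBookends(
  rules: ScanningRule[],
  mediaType: ScanningRule["mediaType"],
  currentPosition: number,
  mediaLength: number,
): { start: number; end: number } {
  const rule = rules.find((r) => r.mediaType === mediaType)
    ?? { mediaType, windowBefore: 30, windowAfter: 30 }; // fallback defaults (assumed)
  return {
    start: Math.max(0, currentPosition - rule.windowBefore),
    end: Math.min(mediaLength, currentPosition + rule.windowAfter),
  };
}
```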
[0053] The context scanning decision maker (212) invokes the extraction manager (216) for extracting content from the linear media (204) for the selected bookends of the linear media (204). The extraction manager (216) uses the bookend scan values (214) and the type of the linear media (204) to invoke the linear media input extraction service (250). There may be various types of linear media input extraction services (250) based on the type of the linear media (204). For example, for a video as input, the linear media input extraction service (250) may include a transcript generator, a closed-caption creator, or an audio extractor. Similarly, for a PDF document, the linear media input extraction service (250) may include an image and text extractor.
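A sketch of how such per-media-type extraction services might share a common interface follows; the service names, signatures, and placeholder bodies below are assumptions for illustration, not part of the specification:

```typescript
// Hypothetical common interface for linear media input extraction services (250).
interface LinearMediaExtractor {
  mediaType: string;
  extract(mediaUrl: string, start: number, end: number): Promise<string>;
}

// For video, a transcript generator could serve as the extraction service.
const transcriptExtractor: LinearMediaExtractor = {
  mediaType: "video",
  extract: async (mediaUrl, start, end) =>
    // Placeholder: a real service would transcribe the audio between start and end.
    `transcript of ${mediaUrl} from ${start}s to ${end}s`,
};

// For a PDF document, an image and text extractor could serve instead.
const pdfExtractor: LinearMediaExtractor = {
  mediaType: "pdf",
  extract: async (mediaUrl, start, end) =>
    // Placeholder: a real service would extract text/images for pages start..end.
    `text of ${mediaUrl}, pages ${start} to ${end}`,
};
```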
[0054] The linear media input extraction service (250) extracts the contextual input based on the linear media (204) received from the linear media player with navigation (120) and the bookend scan values (214) received from the extraction manager (216). The extraction manager (216) accepts the extracted content (252) and stores it with additional key metadata in the format of the linear media contextual input store (218) for future reference. FIG. 5 represents an exemplary format of the linear media contextual input store (218) for various types of the linear media (204) along with their bookend scan values (214).
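The authoritative format is that of FIG. 5; the record below is merely a hypothetical illustration of the kind of fields such a store might keep:

```typescript
// Hypothetical record of the linear media contextual input store (218);
// FIG. 5 defines the actual format.
interface ContextualInputRecord {
  mediaId: string;                  // identifies the linear media (204)
  mediaType: string;                // e.g. "video", "pdf"
  bookendStart: number;             // bookend scan values (214)
  bookendEnd: number;
  extractedContent: string;         // the extracted content (252)
  metadata: Record<string, string>; // key metadata from the extraction manager (216)
}

// A simple in-memory stand-in for the store, keyed for future reference.
const contextualInputStore = new Map<string, ContextualInputRecord>();

function storeExtractedContent(record: ContextualInputRecord): void {
  const key = `${record.mediaId}:${record.bookendStart}-${record.bookendEnd}`;
  contextualInputStore.set(key, record);
}
```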
[0055] The context scanning decision maker (212) also creates the representation preference (206) based on inputs from the representation preference manager (236). The context scanning decision maker (212) then decides on the type of representation creation service to be invoked based on the representation preference (206). For this, the context scanning decision maker (212) refers to the database of types of representation (234) from the context representation manager (230) along with the representation preference (206). The user (104) may set the representation preference (206) using the representation preference manager (236).
[0056] Using the linear media contextual input (224) and the representation preference (206), the context scanning decision maker (212) invokes an appropriate representation creation service (254). There are various types of representation creation services including, but not limited to, a word cloud or tag cloud of words based on the input data, infographic creation based on key points, notes or summary creation, or a set of recommended images based on the input content. These representation creation services (254) receive input content and generate a context representation (238) of the data. The context representation time stamp generator (232) stores the context representation (238) along with the bookend scan values (214) in a time-stamped format as shown in FIG. 6.
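As one minimal illustration of such a service, a word cloud may be derived from simple term frequencies over the extracted content (252); real services would likely add stop-word lists, stemming, and weighting, so the sketch below is an assumption rather than the claimed method:

```typescript
// Minimal word-cloud creation: count term frequencies over the extracted text
// and keep the most frequent terms with their weights. Illustrative only.
function createWordCloud(
  text: string,
  maxTerms = 25,
): Array<{ term: string; weight: number }> {
  const counts = new Map<string, number>();
  for (const raw of text.toLowerCase().split(/[^a-z0-9']+/)) {
    if (raw.length < 4) continue; // crude short-word filter for this sketch
    counts.set(raw, (counts.get(raw) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, maxTerms)
    .map(([term, weight]) => ({ term, weight }));
}
```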
[0057] The context player (240) displays the contextual representation to the user (104). For this, the context representation display controller (242) receives the time-stamped context representation data from the context representation time stamp generator (232) and displays the representation using the context representation display with navigation (244). At this point, the user (104) can experience the linear media (204) using the linear media player with navigation (120) and the contextual representation using the context representation display with navigation (244). The user (104) may navigate using either of the two navigation options to scan the content. If the user (104) uses the context representation display with navigation (244) to scan the representation, a new media position (256) is generated.
[0058] The linear media player with navigation (120) synchronizes the linear media (204) with representation using the new media position (256) accepted by the linear media navigation control API (258).
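For the YouTube® player used in the use case of FIG. 9, the linear media navigation control API (258) could correspond to the IFrame player's seekTo(seconds, allowSeekAhead) call; the binding below is a sketch under that assumption:

```typescript
// Minimal surface of a linear media navigation control API (258). For the
// YouTube IFrame player this matches its seekTo(seconds, allowSeekAhead) call.
interface LinearMediaNavigationControl {
  seekTo(seconds: number, allowSeekAhead: boolean): void;
}

// When the user navigates the contextual representation, apply the resulting
// new media position (256) to the linear media player with navigation (120).
function applyNewMediaPosition(
  player: LinearMediaNavigationControl,
  newPosition: number,
): void {
  player.seekTo(newPosition, true); // jump the linear media to the new position
}
```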
[0059] Referring now to FIG. 8, a flow chart illustrating a method of scanning the linear media (204) rapidly using the contextual media navigation enabler (200) is shown in accordance with an embodiment of the present invention.
[0060] At Step 801, the user (104) initiates the application which displays the linear media player with navigation (120) along with the contextual media navigation enabler (200).
[0061] At Step 802, the user (104) sets the representation preferences (206) using the representation preference manager (236).
[0062] At Step 803, the user (104) sets the preferences for the bookend scan values (214) using the scanning preferences editor (220).
[0063] At Step 804, the user (104) experiences the linear media (204) using the linear media player with navigation (120). As the user (104) navigates the linear media (204), the contextual representation is updated and synchronized accordingly.
[0064] At Step 805, the user (104) experiences the change in the contextual representation.
[0065] At Step 806, the user (104) may decide on how to navigate the linear media (204). There are two options for the user (104), as provided in Step 807 and Step 810.
[0066] At Step 807, the user (104) may scan the linear media (204) using the linear media player with navigation (120).
[0067] Alternatively, at Step 810, the user (104) may scan the contextual representation using the contextual media navigation enabler (200).
[0068] At Step 808, the linear media (204) is always synchronized with the contextual representation displayed in the context representation display with navigation (244).
[0069] At Step 809, the user (104) may decide to continue to navigate the linear media (204) or to stop navigating the linear media (204). If the user (104) decides to continue to navigate the linear media (204), Step 806 is executed.
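A minimal sketch of this navigation loop (Steps 806-810), with all names hypothetical, is:

```typescript
// Sketch of the navigation loop of FIG. 8 (Steps 806-810): events from either
// navigation source feed one synchronization path. Names are hypothetical.
type NavigationEvent =
  | { source: "linearMediaPlayer"; position: number }      // Step 807
  | { source: "contextRepresentation"; position: number }; // Step 810

function handleNavigation(
  event: NavigationEvent,
  syncLinearMedia: (position: number) => void,       // e.g. via the navigation control API (258)
  refreshRepresentation: (position: number) => void, // rebuild the contextual representation
): void {
  if (event.source === "contextRepresentation") {
    syncLinearMedia(event.position);     // Step 808: keep the linear media in sync
  }
  refreshRepresentation(event.position); // Steps 804-805: update the representation
}
```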
[0070] Referring now to FIG. 9, a use case of the contextual media navigation system (100) is shown in accordance with an embodiment of the present invention.
[0071] The use case shows a sample screenshot of the application in which the user (104) experiences a video as the linear media (204) using the YouTube® player as a part of the linear media player with navigation (120) and a wordcloud representation within the contextual media navigation enabler (200).
[0072] Referring now to FIG. 10, a flow chart illustrating a method of navigating the linear media (204) is shown in accordance with an embodiment of the present invention.
[0073] At Step 1002, the context creator (210) receives the scanning preferences and the representation preferences from the user (104) of the electronic device (102).
[0074] The scanning preferences and the representation preferences for the contextual representation can be changed by the user (104) in real-time.
[0075] At Step 1004, the context creator (210) receives the current media position (202) of the linear media (204) from the linear media player with navigation (120).
[0076] At Step 1006, the context creator (210) obtains the extracted content (252) corresponding to the linear media (204) from the linear media input extraction service (250).
[0077] The scanning preferences editor (220) updates the context scanning rule set (222) based on the scanning preferences. The context scanning decision maker (212) receives the current media position (202) of the displayed linear media (204) from the linear media player with navigation (120). The context scanning decision maker (212) determines the bookend scan values based on the scanning preferences and the updated context scanning rule set (222). The extraction manager (216) obtains the extracted content (252) corresponding to the linear media (204) from the linear media input extraction service (250) based on the aforesaid bookend scan values. The extraction manager (216) stores the obtained extracted content (252) in a linear media contextual input store (218). The context scanning decision maker (212) receives the representation preferences from the representation preference manager (236). The context scanning decision maker (212) determines the type of representation based on the received representation preferences.
[0078] The bookend scan values are indicative of the limits of the linear media (204). The representation is generated for the linear media (204) within the bookend scan values.
[0079] At Step 1008, the context creator (210) obtains the models of representations from the representation creation service (254) based on the extracted content (252) in real-time. The models of representation can be defined by the user (104).
[0080] The extraction manager (216) obtains the representation generated based on the extracted content (252) and the determined type of representation. The context representation time stamp generator (232) generates the time stamps for the obtained representation. The context representation manager (230) provides the representation and the corresponding time stamps to the context player (240) in real-time.
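FIG. 6 defines the actual time-stamped format; the record and hand-off below are only a hypothetical illustration of what the context representation manager (230) might provide to the context player (240):

```typescript
// Hypothetical time-stamped context representation record; FIG. 6 defines the
// actual format. Each entry ties a representation to a span of the linear media.
interface TimeStampedRepresentation {
  timestamp: number;       // from the context representation time stamp generator (232)
  bookendStart: number;    // bookend scan values (214) covered by the representation
  bookendEnd: number;
  representation: unknown; // e.g. word-cloud terms, infographic data, or a summary
}

// The context representation manager (230) provides the time-stamped records
// to the context player (240) for display in real-time.
function provideToContextPlayer(
  records: TimeStampedRepresentation[],
  display: (record: TimeStampedRepresentation) => void,
): void {
  records.forEach(display);
}
```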
[0081] At Step 1010, the context representation manager (230) selects the model of representation based on the representation preferences in real-time.
[0082] At Step 1012, the context player (240) provides the concise real-time representation corresponding to the user-preferred window of the displayed linear media (204) around the current media position (202) based on the selected model of representation and the scanning and representation preferences of the user (104). The format of the representation can be defined by the user (104).
[0083] The linear media (204) and the representation are displayed simultaneously to the user (104) on the electronic device (102).
[0084] The extraction manager (216) generates the metadata for the extracted content (252). The extraction manager (216) stores the metadata corresponding to the extracted content (252) in the contextual input store (218).
[0085] The current position of the representation is synchronized with the current media position (202) of the linear media (204) dynamically. This assists the user (104) in skimming through the linear media in real-time.
[0086] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.
CLAIMS:
1. A method of navigating linear media (204) displayed on an electronic device (102), said method assisting a user (104) in skimming through the linear media (204) rapidly and effectively, said method comprising:
receiving, by a context creator (210), one or more scanning preferences and representation preferences from the user (104) of the electronic device (102);
receiving, by the context creator (210), a current media position (202) of the displayed linear media (204) from a linear media player with navigation (120);
obtaining, by the context creator (210), an extracted content (252) corresponding to the linear media (204) from a linear media input extraction service (250);
obtaining, by the context creator (210), a plurality of models of representations from a representation creation service (254) based on the extracted content (252) in real-time, wherein the models of representations can be defined by the user (104);
selecting, by a context representation manager (230), a model of representation from the plurality of models of representations based on the representation preferences in real-time; and
providing, by a context player (240), a concise real-time representation corresponding to a user-preferred window of the displayed linear media (204) around the current media position (202) based on the selected model of representation and the scanning and representation preferences of the user (104), wherein the format of the representation can be defined by the user (104),
wherein a current position of the representation is synchronized with the current media position (202) of the linear media (204) dynamically, thereby assisting the user (104) in skimming through the linear media (204) in real-time.
2. The method as claimed in claim 1, wherein obtaining the extracted content (252) includes:
updating, by a scanning preferences editor (220), a context scanning rule set (222) based on the received scanning preferences;
receiving, by a context scanning decision maker (212), a current media position (202) of the displayed linear media (204) from the linear media player with navigation (120); and
determining, by the context scanning decision maker (212), one or more bookend scan values based on the scanning preferences and the updated context scanning rule set (222).
3. The method as claimed in claim 2, wherein obtaining the extracted content (252) includes:
obtaining, by an extraction manager (216), the extracted content (252) corresponding to the linear media (204) from a linear media input extraction service (250) based on the aforesaid bookend scan values; and
storing, by the extraction manager (216), the obtained extracted content (252) in a linear media contextual input store (218).
4. The method as claimed in claim 3, comprising:
receiving, by the context scanning decision maker (212), the representation preferences from a representation preference manager (236); and
determining, by the context scanning decision maker (212), a type of representation based on the received representation preferences.
5. The method as claimed in claim 3, wherein obtaining the models of representation includes:
obtaining, by the extraction manager (216), a representation generated based on the extracted content (252) and the determined type of representation;
generating, by a context representation time stamp generator (232), a plurality of time stamps for the obtained representation; and
providing, by the context representation manager (230), the representation and the corresponding time stamps to the context player (240) in real-time.
6. The method as claimed in any one of the claims 1-5, comprising:
generating, by the extraction manager (216), metadata for the extracted content (252); and
storing, by the extraction manager (216), the metadata corresponding to the extracted content (252) in the contextual input store (218).
7. The method as claimed in any one of the claims 1-6, wherein the scanning preferences and the representation preferences for the representation can be changed by the user (104) in real-time.
8. The method as claimed in any one of the claims 1-7, wherein the bookend scan values are indicative of limits of the linear media (204), and wherein the representation is generated for the linear media (204) within the bookend scan values.
9. The method as claimed in any one of the claims 1-8, wherein the linear media (204) and the representation are displayed simultaneously to the user (104) on a display (116) of the electronic device (102).
10. The method as claimed in claim 1, wherein examples of the representation include, but are not limited to, a WordCloud, a TagCloud, an infographic, a summary, images, and a game.
11. A contextual media navigation system (100) for navigating a linear media (204) displayed on an electronic device (102), said contextual media navigation system (100) assisting a user (104) in skimming through the linear media (204) rapidly and effectively, the contextual media navigation system (100) comprising:
a context creator (210) configured to:
receive one or more scanning preferences and representation preferences from the user (104) of the electronic device (102),
receive a current media position (202) of the displayed linear media (204) from a linear media player with navigation (120),
obtain an extracted content (252) corresponding to the linear media (204) from a linear media input extraction service (250), and
obtain a plurality of models of representations from a representation creation service (254) based on the extracted content (252) in real-time, wherein the models of representations can be defined by the user (104);
a context representation manager (230) configured to select a model of representation from the plurality of models of representations based on the representation preferences in real-time; and
a context player (240) configured to provide a concise real-time representation corresponding to a user-preferred window of the displayed linear media (204) around the current media position (202) based on the selected model of representation and the scanning and representation preferences of the user (104), wherein the format of the representation can be defined by the user (104),
wherein a current position of the representation is synchronized with the current media position (202) of the linear media (204) dynamically, thereby assisting the user (104) in skimming through the linear media (204) in real-time.
12. The contextual media navigation system (100) as claimed in claim 11, wherein the context creator (210) includes a scanning preferences editor (220) configured to update a context scanning rule set (222) based on the received scanning preferences.
13. The contextual media navigation system (100) as claimed in claim 12, wherein the context creator (210) includes a context scanning decision maker (212) configured to:
receive a current media position (202) of the displayed linear media (204) from the linear media player with navigation (120), and
determine one or more bookend scan values based on the scanning preferences and the updated context scanning rule set (222).
14. The contextual media navigation system (100) as claimed in claim 13, wherein the context creator (210) includes an extraction manager (216) configured to:
obtain the extracted content (252) corresponding to the linear media (204) from a linear media input extraction service (250) based on the aforesaid bookend scan values, and
store the obtained extracted content (252) in a linear media contextual input store (218).
15. The contextual media navigation system (100) as claimed in claim 14, wherein the context scanning decision maker (212) is configured to:
receive the representation preferences from a representation preference manager (236), and
determine a type of representation based on the received representation preferences.
16. The contextual media navigation system (100) as claimed in claim 15, wherein the extraction manager (216) is configured to obtain a representation generated based on the extracted content (252) and the determined type of representation.
17. The contextual media navigation system (100) as claimed in claim 16, wherein the context representation manager (230) includes a context representation time stamp generator (232) configured to generate a plurality of time stamps for the obtained representation.
18. The contextual media navigation system (100) as claimed in any one of the claims 11-17, wherein the extraction manager (216) is configured to:
generate metadata for the extracted content (252), and
store the metadata corresponding to the extracted content (252) in the contextual input store (218).
19. The contextual media navigation system (100) as claimed in any one of the claims 11-18, wherein the scanning preferences and the representation preferences for the contextual representation can be changed by the user (104) in real-time.
20. The contextual media navigation system (100) as claimed in any one of the claims 11-19, wherein the bookend scan values are indicative of limits of the linear media (204), and wherein the representation is generated for the linear media (204) within the bookend scan values.
21. The contextual media navigation system (100) as claimed in any one of the claims 11-20, wherein the linear media (204) and the representation are displayed simultaneously to the user (104) on a display (116) of the electronic device (102).
22. The contextual media navigation system (100) as claimed in claim 11, wherein examples of the representation include, but are not limited to, a WordCloud, a TagCloud, an infographic, a summary, images, and a game.