Abstract: This disclosure relates generally to audio-video processing, and more particularly to a system and method for dynamically generating and rendering highlights of a video content. In one embodiment, the method may include receiving a start trigger and a stop trigger to generate and render the highlights of a portion of the video content playing on a first device for a registered user, recording at least one sub-portion of the portion of the video content upon receiving the start trigger and until receiving the stop trigger, monitoring the at least one sub-portion of the video content to detect one or more critical events, dynamically generating the highlights of the at least one sub-portion of the video content for each of the one or more critical events, and dynamically rendering the highlights of the at least one sub-portion of the video content on a second device in possession of the registered user. FIG. 2
Claims:
WE CLAIM
1. A method for dynamically generating and rendering highlights of a video content playing on a first device, the method comprising:
receiving, by a highlights generation and rendering device, a start trigger and a stop trigger to generate and render the highlights of a portion of the video content playing on the first device for a registered user;
recording, by the highlights generation and rendering device, at least one sub-portion of the portion of the video content upon receiving the start trigger and until receiving the stop trigger;
monitoring, by the highlights generation and rendering device, the at least one sub-portion of the video content to detect one or more critical events;
dynamically generating, by the highlights generation and rendering device, the highlights of the at least one sub-portion of the video content for each of the one or more critical events; and
dynamically rendering, by the highlights generation and rendering device, the highlights of the at least one sub-portion of the video content on a second device in possession of the registered user.
2. The method of claim 1, wherein each of the start trigger and the stop trigger comprises at least one of a manual trigger and a sensor-based trigger.
3. The method of claim 2, wherein the sensor-based trigger is generated from one or more sensors in at least one of the first device, the second device, and the highlights generation and rendering device, and wherein the sensor-based trigger comprises at least one of an absence of the registered user from a viewing position of the first device, an engagement of the registered user with the second device, and a movement of the registered user away from the first device.
4. The method of claim 1, further comprising discarding the at least one sub-portion of the video content either directly or upon making a copy of the at least one sub-portion of the video content based on the detection of the one or more critical events.
5. The method of claim 1, wherein the one or more critical events are detected by analyzing at least one of a sensor-based parameter and the video content, wherein the sensor-based parameter comprises at least one of an ambient audio and an ambient video, and wherein the sensor-based parameter is generated from one or more sensors in at least one of the first device, the second device, and the highlights generation and rendering device.
6. The method of claim 5, wherein each of the one or more critical events comprises at least one of an instant change in an audio level of the video content, an instant change in an audio level of the ambient audio, an instant change in expression of viewers viewing the content in the ambient video, an instant change in one or more pre-defined areas within the video content, an identification of one or more pre-defined keywords in the video content or in the ambient audio, and an identification of one or more pre-defined gestures in the video content.
7. The method of claim 1, wherein dynamically generating the highlights comprises extracting, for each of the one or more critical events, a further sub-portion of the video content from the at least one sub-portion of the video content from about a pre-defined time interval before a critical event to about a pre-defined time interval after the critical event.
8. The method of claim 1, wherein dynamically rendering comprises one of automatically pushing or proactively accessing the highlights of the at least one sub-portion of the video content on the second device.
9. The method of claim 1, wherein the registered user registers with the highlights generation and rendering device by at least one of creating a user profile, registering the second device, and downloading an application on the second device.
10. The method of claim 9, wherein the user profile comprises at least an identification of the registered user, an authentication information of the registered user, an image of the registered user, a list of preferred video contents, a list of preferred genres, a customized definition of the critical event, a customized definition of the start trigger, a customized definition of the stop trigger, a preferred length of recording, and a preferred size of storage for recording.
11. The method of claim 1, wherein the highlights generation and rendering device is activated through the second device by the registered user prior to viewing the video content on the first device.
12. The method of claim 1, wherein the first device comprises one of a television, and a computing device, and wherein the second device comprises a personal computing device.
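One detection criterion recited in claims 5 and 6, an instant change in an audio level, can be sketched as a simple threshold on the jump between consecutive audio samples. The threshold value and function name are assumptions for illustration; the claims do not prescribe a particular detector.

```python
# Hedged sketch of claims 5-6: flag an "instant change in an audio level"
# when the level jumps by more than a threshold between consecutive samples.
# THRESHOLD and detect_audio_events are illustrative assumptions.

THRESHOLD = 6  # assumed minimum jump (e.g. in dB) that counts as "instant"

def detect_audio_events(audio_levels, threshold=THRESHOLD):
    """Return the sample indices where the audio level changes abruptly."""
    return [i for i in range(1, len(audio_levels))
            if abs(audio_levels[i] - audio_levels[i - 1]) > threshold]

print(detect_audio_events([3, 3, 4, 12, 11, 3]))  # [3, 5]
```

The same thresholding idea extends to the other recited event types (ambient audio, viewer expressions, pre-defined regions) by swapping in an appropriate per-frame feature.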
13. A system for dynamically generating and rendering highlights of a video content playing on a first device, the system comprising:
a highlights generation and rendering device comprising at least one processor and a computer-readable medium storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving a start trigger and a stop trigger to generate and render the highlights of a portion of the video content playing on the first device for a registered user;
recording at least one sub-portion of the portion of the video content upon receiving the start trigger and until receiving the stop trigger;
monitoring the at least one sub-portion of the video content to detect one or more critical events;
dynamically generating the highlights of the at least one sub-portion of the video content for each of the one or more critical events; and
dynamically rendering the highlights of the at least one sub-portion of the video content on a second device in possession of the registered user.
14. The system of claim 13, wherein each of the start trigger and the stop trigger comprises at least one of a manual trigger and a sensor-based trigger, wherein the sensor-based trigger is generated from one or more sensors in at least one of the first device, the second device, and the highlights generation and rendering device, and wherein the sensor-based trigger comprises at least one of an absence of the registered user from a viewing position of the first device, an engagement of the registered user with the second device, and a movement of the registered user away from the first device.
15. The system of claim 13, wherein the one or more critical events are detected by analyzing at least one of a sensor-based parameter and the video content, wherein the sensor-based parameter comprises at least one of an ambient audio and an ambient video, and wherein the sensor-based parameter is generated from one or more sensors in at least one of the first device, the second device, and the highlights generation and rendering device.
16. The system of claim 15, wherein each of the one or more critical events comprises at least one of an instant change in an audio level of the video content, an instant change in an audio level of the ambient audio, an instant change in expression of viewers viewing the content in the ambient video, an instant change in one or more pre-defined areas within the video content, an identification of one or more pre-defined keywords in the video content or in the ambient audio, and an identification of one or more pre-defined gestures in the video content.
17. The system of claim 13, wherein dynamically generating the highlights comprises extracting, for each of the one or more critical events, a further sub-portion of the video content from the at least one sub-portion of the video content from about a pre-defined time interval before a critical event to about a pre-defined time interval after the critical event.
18. The system of claim 13, wherein the registered user registers with the highlights generation and rendering device by at least one of creating a user profile, registering the second device, and downloading an application on the second device.
19. The system of claim 13, wherein the highlights generation and rendering device is activated through the second device by the registered user prior to viewing the video content on the first device.
20. A non-transitory computer-readable medium storing computer-executable instructions for:
receiving a start trigger and a stop trigger to generate and render highlights of a portion of a video content playing on a first device for a registered user;
recording at least one sub-portion of the portion of the video content upon receiving the start trigger and until receiving the stop trigger;
monitoring the at least one sub-portion of the video content to detect one or more critical events;
dynamically generating the highlights of the at least one sub-portion of the video content for each of the one or more critical events; and
dynamically rendering the highlights of the at least one sub-portion of the video content on a second device in possession of the registered user.
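The clip-extraction step recited in claims 7 and 17, a further sub-portion spanning from about a pre-defined interval before a critical event to about a pre-defined interval after it, can be sketched as a clamped time window. The interval values and function name below are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of claims 7/17: extract a highlight window around each
# critical event, clamped to the bounds of the recorded sub-portion.
# PRE_SECONDS, POST_SECONDS, and extract_highlight are hypothetical names.

PRE_SECONDS = 10.0   # assumed pre-defined interval before the event
POST_SECONDS = 15.0  # assumed pre-defined interval after the event

def extract_highlight(event_time, recording_start, recording_end,
                      pre=PRE_SECONDS, post=POST_SECONDS):
    """Return the (start, end) window of the highlight clip in seconds."""
    start = max(recording_start, event_time - pre)
    end = min(recording_end, event_time + post)
    return start, end

# An event 120 s into a recording spanning 0-600 s:
print(extract_highlight(120.0, 0.0, 600.0))  # (110.0, 135.0)
```

Clamping matters at the edges: an event 5 s after the start trigger cannot reach back a full pre-interval, so the clip simply begins at the recording start.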
Dated this 30th day of June, 2017
Swetha SN
Of K&S Partners
Agent for the Applicant
Description:
TECHNICAL FIELD
This disclosure relates generally to audio-video processing, and more particularly to a system and method for dynamically generating and rendering highlights of a video content.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [30-06-2017(online)].pdf | 2017-06-30 |
| 2 | Form 5 [30-06-2017(online)].pdf | 2017-06-30 |
| 3 | Form 3 [30-06-2017(online)].pdf | 2017-06-30 |
| 4 | Form 18 [30-06-2017(online)].pdf_655.pdf | 2017-06-30 |
| 5 | Form 18 [30-06-2017(online)].pdf | 2017-06-30 |
| 6 | Form 1 [30-06-2017(online)].pdf | 2017-06-30 |
| 7 | Drawing [30-06-2017(online)].pdf | 2017-06-30 |
| 8 | Description(Complete) [30-06-2017(online)].pdf_184.pdf | 2017-06-30 |
| 9 | Description(Complete) [30-06-2017(online)].pdf | 2017-06-30 |
| 10 | REQUEST FOR CERTIFIED COPY [03-07-2017(online)].pdf | 2017-07-03 |
| 11 | 201741023139-Proof of Right (MANDATORY) [30-07-2018(online)].pdf | 2018-07-30 |
| 12 | Correspondence by Agent_Form1_01-08-2018.pdf | 2018-08-01 |
| 13 | 201741023139-PETITION UNDER RULE 137 [10-05-2021(online)].pdf | 2021-05-10 |
| 14 | 201741023139-OTHERS [10-05-2021(online)].pdf | 2021-05-10 |
| 15 | 201741023139-FORM 3 [10-05-2021(online)].pdf | 2021-05-10 |
| 16 | 201741023139-FER_SER_REPLY [10-05-2021(online)].pdf | 2021-05-10 |
| 17 | 201741023139-DRAWING [10-05-2021(online)].pdf | 2021-05-10 |
| 18 | 201741023139-CORRESPONDENCE [10-05-2021(online)].pdf | 2021-05-10 |
| 19 | 201741023139-COMPLETE SPECIFICATION [10-05-2021(online)].pdf | 2021-05-10 |
| 20 | 201741023139-CLAIMS [10-05-2021(online)].pdf | 2021-05-10 |
| 21 | 201741023139-ABSTRACT [10-05-2021(online)].pdf | 2021-05-10 |
| 22 | 201741023139-FER.pdf | 2021-10-17 |
| 23 | 201741023139-PatentCertificate15-09-2023.pdf | 2023-09-15 |
| 24 | 201741023139-IntimationOfGrant15-09-2023.pdf | 2023-09-15 |
| 25 | 201741023139-PROOF OF ALTERATION [20-12-2023(online)].pdf | 2023-12-20 |
| 1 | searchE_26-11-2020.pdf | 2020-11-26 |