Abstract: The present disclosure discloses a method and a video generation system for generating video content based on user data. The video generation system receives user data sequentially from a user, where each sequence of user data is converted into text data. One or more objects, relations, emotions, and actions are identified from the user data by evaluating the text data, and a scene descriptor is generated for each sequence of user data by associating the one or more objects with the one or more relations, emotions, and actions. The method comprises performing a consistency check for the scene descriptor of each sequence of user data based on one or more previously stored scene descriptors, performing one or more modifications to inconsistent scene descriptors identified based on the consistency check, generating video segments for each scene descriptor, and generating video content by combining the video segments associated with each scene descriptor. Fig.1
Claims:
We claim:
1. A method for generating video content based on user data, the method comprising:
receiving, by a video generation system, user data sequentially from a user, wherein each sequence of the user data is converted into text data;
identifying, by the video generation system, one or more objects, relations, emotions, and actions from the user data by evaluating the text data;
generating, by the video generation system, a scene descriptor for each sequence of the user data, by associating the one or more objects with at least the one or more relations, emotions, and actions;
performing, by the video generation system, a consistency check for the scene descriptor of each sequence of the user data, based on one or more previously stored scene descriptors associated with the user data;
performing, by the video generation system, one or more modifications to one or more inconsistent scene descriptors, identified based on the consistency check, from the scene descriptor of each sequence of the user data;
generating, by the video generation system, one or more video segments for each scene descriptor; and
generating, by the video generation system, video content for the user data by combining the one or more video segments associated with each scene descriptor.
2. The method as claimed in claim 1, wherein the user data comprises a recorded story and narration, a live narration from the user, and text data in the form of a conversation script.
3. The method as claimed in claim 1, wherein the scene descriptor is a metadata structure representing the one or more objects with associated attributes, and the association of the one or more objects with at least the one or more relations, actions, and emotions.
4. The method as claimed in claim 1, wherein an inconsistency in the scene descriptor is identified on occurrence of one of: a change in attributes associated with the one or more objects across different sequences of the user data, leading to a difference between characters chosen by the user and those narrated in the user data; and a contextual inconsistency.
5. The method as claimed in claim 1, wherein the one or more modifications comprise at least one of: changes to the scene descriptor, and changes to the one or more objects and actions based on the scene descriptor upon user consent.
6. The method as claimed in claim 1, further comprising providing the user an option to edit the one or more video segments.
7. The method as claimed in claim 1, further comprising providing audio settings for the generated video content based on the user data.
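By way of illustration only, the scene-descriptor metadata structure of claim 3 and the attribute-based consistency check of claim 4 can be sketched as follows. This is a minimal sketch: the class and function names, field layout, and reconciliation logic are hypothetical and form no part of the claims.

```python
from dataclasses import dataclass, field

# Hypothetical metadata structure per claim 3: objects with attributes,
# associated with relations, actions, and emotions.
@dataclass
class SceneDescriptor:
    objects: dict          # object name -> attribute dict
    relations: list = field(default_factory=list)   # (subj, relation, obj)
    actions: dict = field(default_factory=dict)     # object name -> action
    emotions: dict = field(default_factory=dict)    # object name -> emotion

def consistency_check(current: SceneDescriptor,
                      history: list) -> list:
    """Flag objects whose attributes changed across sequences (claim 4),
    i.e. a character drifting from how the user originally described it."""
    issues = []
    for prev in history:
        for name, attrs in current.objects.items():
            prev_attrs = prev.objects.get(name)
            if prev_attrs is not None and prev_attrs != attrs:
                issues.append(name)
    return sorted(set(issues))
```

A descriptor whose "lion" changes colour between two sequences would be flagged, after which the one or more modifications of claim 5 could be applied upon user consent.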
8. A video generation system for generating video content based on user data, comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which, on execution, cause the processor to:
receive user data sequentially from a user, wherein each sequence of the user data is converted into text data;
identify one or more objects, relations, emotions, and actions from the user data by evaluating the text data;
generate a scene descriptor for each sequence of the user data, by associating the one or more objects with at least the one or more relations, emotions, and actions;
perform a consistency check for the scene descriptor of each sequence of the user data, based on one or more previously stored scene descriptors associated with the user data;
perform one or more modifications to one or more inconsistent scene descriptors, identified based on the consistency check, from the scene descriptor of each sequence of the user data;
generate one or more video segments for each scene descriptor; and
generate video content for the user data by combining the one or more video segments associated with each scene descriptor.
9. The video generation system as claimed in claim 8, wherein the user data comprises a recorded story and narration, a live narration from the user, and text data in the form of a conversation script.
10. The video generation system as claimed in claim 8, wherein the scene descriptor is a metadata structure representing the one or more objects with associated attributes, and the association of the one or more objects with at least the one or more relations, actions, and emotions.
11. The video generation system as claimed in claim 8, wherein the processor identifies an inconsistency in the scene descriptor on occurrence of one of: a change in attributes associated with the one or more objects across different sequences of the user data, leading to a difference between characters chosen by the user and those narrated in the user data; and a contextual inconsistency.
12. The video generation system as claimed in claim 8, wherein the one or more modifications comprise at least one of: changes to the scene descriptor, and changes to the one or more objects and actions based on the scene descriptor upon user consent.
13. The video generation system as claimed in claim 8, wherein the processor provides the user an option to edit the one or more video segments.
14. The video generation system as claimed in claim 8, wherein the processor provides audio settings for the generated video content based on the user data.
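The end-to-end flow recited in claims 1 and 8 can be sketched as a minimal pipeline. Every helper below is a hypothetical placeholder: a real system would use speech-to-text conversion, natural-language entity extraction, and a rendering engine at the corresponding steps; none of these names appears in the claims.

```python
def to_text(sequence: str) -> str:
    # Placeholder for speech-to-text conversion of one sequence of user data.
    return sequence

def extract_entities(text: str) -> dict:
    # Placeholder for identifying objects, relations, emotions, and actions;
    # here every word is naively treated as an "object".
    return {"objects": text.split(), "relations": [], "emotions": [], "actions": []}

def generate_video(sequences: list) -> list:
    """Sketch of the claimed pipeline: text conversion, scene-descriptor
    generation, consistency check against stored descriptors, modification
    of inconsistent descriptors, and segment generation."""
    descriptors, segments = [], []
    for seq in sequences:
        desc = extract_entities(to_text(seq))
        # Consistency check against the previously stored descriptor; a
        # crude reconciliation keeps the earlier objects on mismatch.
        if descriptors and desc["objects"] != descriptors[-1]["objects"]:
            desc["objects"] = descriptors[-1]["objects"]
        descriptors.append(desc)
        # Placeholder for rendering one video segment per scene descriptor.
        segments.append(f"segment({' '.join(desc['objects'])})")
    # Combining the segments in order stands in for final video assembly.
    return segments
```

In a deployed system the reconciliation step would instead prompt the user for consent before modifying the descriptor, as recited in claims 5 and 12.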
Dated this 16th day of February 2018
R Ramya Rao
IN/PA-1607
Of K&S Partners
Agent for the Applicant
Description:
TECHNICAL FIELD
The present subject matter is related in general to the field of multimedia, more particularly, but not exclusively, to a method and system for generating video content based on user data.
| # | Name | Date |
|---|---|---|
| 1 | 201841006066-STATEMENT OF UNDERTAKING (FORM 3) [16-02-2018(online)].pdf | 2018-02-16 |
| 2 | 201841006066-REQUEST FOR EXAMINATION (FORM-18) [16-02-2018(online)].pdf | 2018-02-16 |
| 3 | 201841006066-POWER OF AUTHORITY [16-02-2018(online)].pdf | 2018-02-16 |
| 4 | 201841006066-FORM 18 [16-02-2018(online)].pdf | 2018-02-16 |
| 5 | 201841006066-FORM 1 [16-02-2018(online)].pdf | 2018-02-16 |
| 6 | 201841006066-DRAWINGS [16-02-2018(online)].pdf | 2018-02-16 |
| 7 | 201841006066-DECLARATION OF INVENTORSHIP (FORM 5) [16-02-2018(online)].pdf | 2018-02-16 |
| 8 | 201841006066-COMPLETE SPECIFICATION [16-02-2018(online)].pdf | 2018-02-16 |
| 9 | abstract 201841006066.jpg | 2018-02-20 |
| 10 | 201841006066-REQUEST FOR CERTIFIED COPY [06-03-2018(online)].pdf | 2018-03-06 |
| 11 | 201841006066-Proof of Right (MANDATORY) [26-04-2018(online)].pdf | 2018-04-26 |
| 12 | Correspondence by Agent_Form 1_01-05-2018.pdf | 2018-05-01 |
| 13 | 201841006066-RELEVANT DOCUMENTS [03-06-2021(online)].pdf | 2021-06-03 |
| 14 | 201841006066-PETITION UNDER RULE 137 [03-06-2021(online)].pdf | 2021-06-03 |
| 15 | 201841006066-OTHERS [03-06-2021(online)].pdf | 2021-06-03 |
| 16 | 201841006066-Information under section 8(2) [03-06-2021(online)].pdf | 2021-06-03 |
| 17 | 201841006066-FORM 3 [03-06-2021(online)].pdf | 2021-06-03 |
| 18 | 201841006066-FER_SER_REPLY [03-06-2021(online)].pdf | 2021-06-03 |
| 19 | 201841006066-DRAWING [03-06-2021(online)].pdf | 2021-06-03 |
| 20 | 201841006066-CORRESPONDENCE [03-06-2021(online)].pdf | 2021-06-03 |
| 21 | 201841006066-COMPLETE SPECIFICATION [03-06-2021(online)].pdf | 2021-06-03 |
| 22 | 201841006066-CLAIMS [03-06-2021(online)].pdf | 2021-06-03 |
| 23 | 201841006066-FER.pdf | 2021-10-17 |
| 24 | 201841006066-PatentCertificate12-01-2024.pdf | 2024-01-12 |
| 25 | 201841006066-IntimationOfGrant12-01-2024.pdf | 2024-01-12 |
| 26 | 201841006066-PROOF OF ALTERATION [01-05-2024(online)].pdf | 2024-05-01 |
| 1 | searchE_16-01-2021.pdf | 2021-01-16 |