Abstract: The instant disclosure relates to a system and method for generating personalized multimedia content for users. A plurality of predetermined multimedia themes, along with associated stimuli, is displayed to users to detect their responses to the displayed themes and stimuli. A reaction factor and an emotion dimension of the users are identified based on those responses. Finally, the personalized multimedia content is generated and presented to the users based on the emotion dimension and the reaction factor of the users. The instant method helps identify the best-suited multimedia theme for each user based on analysis of innate insights into the users' behavior and preferences, thereby enhancing the overall user experience. FIG. 3
Claims:
WE CLAIM:
1. A method of generating personalized multimedia content (111) for a plurality of users (107), the method comprising:
displaying, by a multimedia content generator (101), a plurality of Predetermined Multimedia Themes (PMTs) (104) and associated one or more stimuli (104a) to the plurality of users (107);
detecting, by the multimedia content generator (101), a reaction factor (109) of each of the plurality of users (107) in response to viewing the plurality of PMTs (104) and the associated one or more stimuli (104a);
identifying, by the multimedia content generator (101), a multimedia theme, from the plurality of PMTs (104), for each of the plurality of users (107) based on the reaction factor (109);
identifying, by the multimedia content generator (101), an emotion dimension (213) of each of the plurality of users (107) by comparing the reaction factor (109) and one or more emotional metadata (215) related to the one or more stimuli (104a); and
generating, by the multimedia content generator (101), the personalized multimedia content (111) for each of the plurality of users (107) based on the multimedia theme and the emotion dimension (213) corresponding to each of the plurality of users (107).
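The claims do not specify an algorithm for these steps, so the following is only a minimal sketch of how the claim 1 flow could be wired together; every name, data shape, and scoring rule here is a hypothetical assumption, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class PMT:
    """A predetermined multimedia theme (104) with its associated stimuli (104a)."""
    name: str
    stimuli: list

def generate_personalized_content(pmts, users, detect_reaction, emotional_metadata):
    """Walk each user through the claimed steps and return their content selection."""
    content = {}
    for user in users:
        # Steps 1-2: display each PMT with its stimuli and detect the reaction factor (109).
        reactions = {pmt.name: detect_reaction(user, pmt) for pmt in pmts}
        # Step 3: identify the multimedia theme with the strongest reaction.
        theme = max(reactions, key=reactions.get)
        # Step 4: derive an emotion dimension (213) by comparing the reaction
        # factor against the emotional metadata (215) for the stimuli.
        emotion_dimension = {
            key: reactions[theme] - emotional_metadata.get(key, 0.0)
            for key in emotional_metadata
        }
        # Step 5: generate content from the selected theme and emotion dimension.
        content[user] = (theme, emotion_dimension)
    return content
```

In this sketch `detect_reaction` stands in for whatever sensor-driven measurement the disclosure contemplates; it is passed in as a callable precisely because the claims leave its internals open.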
2. The method as claimed in claim 1, further comprising detecting the response of each of the plurality of users (107) to the plurality of PMTs (104) and the associated one or more stimuli (104a) by a plurality of neuroprosthetic devices including at least one of a neural dust sensor, an electroencephalogram, an electro-oculogram or an electrodermal sensor.
3. The method as claimed in claim 1, wherein the response of each of the plurality of users (107) upon viewing the plurality of PMTs (104) and the associated one or more stimuli (104a) indicates the presence or absence of an aroused neural signal in each of the plurality of users (107).
4. The method as claimed in claim 1, wherein each of the plurality of PMTs (104) and the associated one or more stimuli (104a) are created and stored in a multimedia theme repository (103) associated with the multimedia content generator (101).
5. The method as claimed in claim 1, wherein the one or more emotional metadata (215) comprises at least one of awareness level of the plurality of users (107), acceptance level of the plurality of users (107), emotional bias of the plurality of users (107), cognitive capability of the plurality of users (107) or sensitivity of the plurality of users (107) to the one or more stimuli (104a).
6. The method as claimed in claim 1, wherein identifying the multimedia theme comprises:
assigning, by the multimedia content generator (101), an emotional score (217) to each of the PMTs (104) based on the reaction factor (109); and
selecting, by the multimedia content generator (101), one of the plurality of PMTs (104) having the emotional score (217) greater than a predefined threshold.
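The assigning-and-selecting step of claim 6 can be sketched as a simple threshold filter; the function name, score representation, and tie-breaking rule below are illustrative assumptions rather than anything the claim fixes.

```python
def select_theme(emotional_scores, threshold):
    """emotional_scores: hypothetical mapping of PMT name -> emotional score (217)
    for one user. Returns the highest-scoring PMT whose score exceeds the
    predefined threshold, or None when no theme qualifies."""
    qualifying = {pmt: s for pmt, s in emotional_scores.items() if s > threshold}
    return max(qualifying, key=qualifying.get) if qualifying else None
```

Returning the highest qualifying score (rather than the first) is one plausible reading of "one of the plurality of PMTs having the emotional score greater than a predefined threshold"; the claim itself does not say how ties or multiple qualifiers are resolved.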
7. The method as claimed in claim 1, wherein the reaction factor (109) indicates at least one of a level of self-influence of the plurality of users (107), an intrinsic drive of the plurality of users (107), an emotion of the plurality of users (107), an attitude of the plurality of users (107) or an influence of the plurality of PMTs (104) and the associated one or more stimuli (104a) on the plurality of users (107).
8. The method as claimed in claim 1, further comprising generating a plurality of associated multimedia content related to the personalized multimedia content (111) based on the response of each of the plurality of users (107) to the displayed personalized multimedia content (111).
9. The method as claimed in claim 1 further comprising:
creating, by the multimedia content generator (101), multiple groups among the plurality of users (107) based on socio-demographic data patterns of the plurality of users (107); and
displaying, by the multimedia content generator (101), personalized multimedia content (111) to each of the multiple groups based on the emotion dimension (213) of each of the plurality of users (107) in each of the multiple groups.
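The grouping step of claim 9 amounts to partitioning users by shared socio-demographic attributes. A minimal sketch follows, assuming hypothetical profile keys (`age_band`, `region`) that the claim does not enumerate:

```python
def group_users(user_profiles):
    """Partition users whose socio-demographic attributes match.
    user_profiles: mapping of user id -> profile dict (keys are hypothetical)."""
    groups = {}
    for user, profile in user_profiles.items():
        # Users with the same (age band, region) pattern fall into one group.
        key = (profile.get("age_band"), profile.get("region"))
        groups.setdefault(key, []).append(user)
    return groups
```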
10. The method as claimed in claim 1 further comprising identifying a multimedia channel and an optimized schedule in the identified multimedia channel for displaying the personalized multimedia content (111) to the plurality of users (107) based on historical multimedia channel usage data related to each of the plurality of users (107).
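Claim 10's channel-and-schedule identification can be read as a frequency analysis over historical usage records. The sketch below assumes a hypothetical log of `(channel, hour)` tuples; the claim does not prescribe this data shape or the most-frequent heuristic.

```python
from collections import Counter

def pick_channel_and_slot(usage_log):
    """usage_log: list of (channel, hour) records from a user's viewing history.
    Returns the most-frequently used channel and the hour at which the user
    most often watches that channel."""
    if not usage_log:
        return None, None
    # Most-used channel across the whole history.
    channel = Counter(ch for ch, _ in usage_log).most_common(1)[0][0]
    # Most common viewing hour on that channel, as the "optimized schedule".
    hour = Counter(h for ch, h in usage_log if ch == channel).most_common(1)[0][0]
    return channel, hour
```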
11. A multimedia content generator (101) for generating personalized multimedia content (111) for a plurality of users (107), the multimedia content generator (101) comprising:
a processor (203); and
a memory, communicatively coupled to the processor (203), wherein the memory stores processor-executable instructions which, on execution, cause the processor (203) to:
display a plurality of Predetermined Multimedia Themes (PMTs) (104) and associated one or more stimuli (104a) to the plurality of users (107);
detect a reaction factor (109) of each of the plurality of users (107) in response to viewing the plurality of PMTs (104) and the associated one or more stimuli (104a);
identify a multimedia theme, from the plurality of PMTs (104), for each of the plurality of users (107) based on the reaction factor (109);
identify an emotion dimension (213) of each of the plurality of users (107) by comparing the reaction factor (109) and one or more emotional metadata (215) related to the one or more stimuli (104a); and
generate the personalized multimedia content (111) for each of the plurality of users (107) based on the multimedia theme and the emotion dimension (213) corresponding to each of the plurality of users (107).
12. The multimedia content generator (101) as claimed in claim 11, wherein the processor (203) is further configured to detect the response of each of the plurality of users (107) to the plurality of PMTs (104) and the associated one or more stimuli (104a) using a plurality of neuroprosthetic devices including at least one of a neural dust sensor, an electroencephalogram, an electro-oculogram or an electrodermal sensor.
13. The multimedia content generator (101) as claimed in claim 11, wherein the response of each of the plurality of users (107) upon viewing the plurality of PMTs (104) and the associated one or more stimuli (104a) indicates the presence or absence of an aroused neural signal in each of the plurality of users (107).
14. The multimedia content generator (101) as claimed in claim 11, wherein the processor (203) creates and stores each of the plurality of PMTs (104) and the associated one or more stimuli (104a) in a multimedia theme repository (103) associated with the multimedia content generator (101).
15. The multimedia content generator (101) as claimed in claim 11, wherein the one or more emotional metadata (215) comprises at least one of awareness level of the plurality of users (107), acceptance level of the plurality of users (107), emotional bias of the plurality of users (107), cognitive capability of the plurality of users (107) or sensitivity of the plurality of users (107) to the one or more stimuli (104a).
16. The multimedia content generator (101) as claimed in claim 11, wherein to identify the multimedia theme, the processor (203) is configured to:
assign an emotional score (217) to each of the PMTs (104) based on the reaction factor (109); and
select one of the plurality of PMTs (104) having the emotional score (217) greater than a predefined threshold.
17. The multimedia content generator (101) as claimed in claim 11, wherein the reaction factor (109) indicates at least one of a level of self-influence of the plurality of users (107), an intrinsic drive of the plurality of users (107), an emotion of the plurality of users (107), an attitude of the plurality of users (107) or an influence of the plurality of PMTs (104) and the associated one or more stimuli (104a) on the plurality of users (107).
18. The multimedia content generator (101) as claimed in claim 11, wherein the processor (203) further generates a plurality of associated multimedia content related to the personalized multimedia content (111) based on the response of each of the plurality of users (107) to the displayed personalized multimedia content (111).
19. The multimedia content generator (101) as claimed in claim 11, wherein the processor (203) is further configured to:
create multiple groups among the plurality of users (107) based on socio-demographic data patterns of the plurality of users (107); and
display personalized multimedia content (111) to each of the multiple groups based on the emotion dimension (213) of each of the plurality of users (107) in each of the multiple groups.
20. The multimedia content generator (101) as claimed in claim 11, wherein the processor (203) identifies a multimedia channel and an optimized schedule in the identified multimedia channel to display the personalized multimedia content (111) to the plurality of users (107) based on historical multimedia channel usage data related to each of the plurality of users (107).
Dated this 17th day of February, 2017
SWETHA S. N
OF K & S PARTNERS
AGENT FOR THE APPLICANT

Description:
TECHNICAL FIELD
The present subject matter is related, in general, to artificial intelligence, and more particularly, but not exclusively, to a system and a method for generating personalized multimedia content for a plurality of users.
| # | Name | Date |
|---|---|---|
| 1 | 201741005622-FER.pdf | 2020-05-26 |
| 2 | Power of Attorney [17-02-2017(online)].pdf | 2017-02-17 |
| 3 | Form 5 [17-02-2017(online)].pdf | 2017-02-17 |
| 4 | Correspondence by Agent_Form 1_31-05-2017.pdf | 2017-05-31 |
| 5 | PROOF OF RIGHT [26-05-2017(online)].pdf | 2017-05-26 |
| 6 | Form 3 [17-02-2017(online)].pdf | 2017-02-17 |
| 7 | REQUEST FOR CERTIFIED COPY [22-02-2017(online)].pdf | 2017-02-22 |
| 8 | Form 18 [17-02-2017(online)].pdf | 2017-02-17 |
| 9 | Form 18 [17-02-2017(online)].pdf_382.pdf | 2017-02-17 |
| 10 | Description(Complete) [17-02-2017(online)].pdf | 2017-02-17 |
| 11 | Description(Complete) [17-02-2017(online)].pdf_381.pdf | 2017-02-17 |
| 12 | Drawing [17-02-2017(online)].pdf | 2017-02-17 |
| 13 | searchstrategyE_20-05-2020.pdf | |