Abstract: The present disclosure relates to techniques for generating a video for matched users. Said techniques discuss extracting a plurality of information for users present on a multimedia platform, the plurality of information comprising at least one of demographic information, profile information, and shared experiences, creating one or more groups among said users based on one or more attributes of the shared experiences, and processing the at least one of the demographic information and the profile information for each user within their groups to determine an interest of each user. It further discusses monitoring a preference of each user with respect to different types of content available on said multimedia platform, matching at least one couple in at least one group based at least on the determined interest and the preferences of the users, and generating a video for the at least one matched couple.
[0001] The present disclosure generally relates to virtual reality. More specifically, the present
disclosure relates to generating a video/animation based on the various shared experiences
of the matched users or couple from the start.
BACKGROUND OF THE INVENTION:
[0002] In the digital communication era, messaging platforms have always played a pivotal role
in exchanging data/information among users. Thus, messaging platforms have evolved
enormously with the progression of technology. In particular, messaging platforms from
earlier days that only allowed information to be exchanged between users using text
evolved to exchanging multimedia messages with the advancement in technology.
In the present-day scenario, there exist messaging platforms that allow the users
to communicate with each other by various means. Some of said means may include
representing one's emotions/thoughts through an emoji or an emoticon, whereas other means
may include conveying a message or emotion through a sticker or an avatar. These platforms
allow the users to modify a picture/selfie into an avatar or sticker and share it with others
to convey their thoughts and feelings.
[0003] In addition, some of the messaging platforms have evolved to include games for
entertainment purposes. Such messaging platforms have a built-in option to play games with
other people across the messaging platform. Further, some of the messaging platforms have
evolved to an extent that they allow their users to come on such platforms and perform
activities of common interest together, with various limitations.
[0004] Further, some of the messaging platforms have also found their application in virtual
reality. These messaging platforms have evolved to an extent that they allow their users to
create a virtual world of their own and to experience their virtual world, wherein
the nature of interactions among users of the virtual world is often limited by the constraints
of the system implementing the virtual world. Moreover, said platforms may be equipped
to allow their users to host various virtual competitions, play virtual games, etc. together on
such platforms.
[0005] The virtual world shared experiences are generally focused on encouraging users who are
in different locations to come together, collaborate, and socialize over a common set of
objects. The users entering the virtual world participate in shared experiences, interact with
other users and then get connected or matched to become a couple. The matched users or
couple then do various activities together.
[0006] Hence, it would be advantageous if the matched users or couple are provided with a
video/animation of the various shared experiences of the couple from the start. This would
be advantageous in enabling the users to keep the relationship going online, thereby
enhancing the user experience. In view of the foregoing, there exists a need in the art to
provide a system and method that provide the above advantage.
[0007] SUMMARY OF THE INVENTION:
[0008] The present disclosure overcomes one or more shortcomings of the prior art and provides
additional advantages discussed throughout the present disclosure. Additional features and
advantages are realized through the techniques of the present disclosure. Other
embodiments and aspects of the disclosure are described in detail herein and are considered
a part of the claimed disclosure.
[0009] In one non-limiting embodiment of the present disclosure, the present application discloses
a method for generating a video for matched users. The method comprises extracting a
plurality of information for users present on a multimedia platform, the plurality of
information comprising at least one of: demographic information, profile information, and
shared experiences; creating one or more groups among said users based on one or more
attributes of the shared experiences; processing the at least one of the demographic
information and the profile information for each user within their groups to determine an
interest of each user; monitoring a preference of each user with respect to different types of
content available on said multimedia platform; matching at least one couple in at least one
group based at least on the determined interest and the preferences of the users; and
generating a video for the at least one matched couple.
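As a purely illustrative and non-limiting sketch, the plurality of information extracted for each user might be represented as a simple record; the field names below are assumptions for illustration and are not drawn from the claims:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Hypothetical record of the plurality of information extracted
    for one user present on the multimedia platform."""
    user_id: int
    demographics: dict                 # e.g. age, gender, marital status
    profile: dict                      # e.g. declared interests
    shared_experiences: list = field(default_factory=list)
    chat_history: list = field(default_factory=list)

# Example: a user whose profile lists movies as an interest and who
# has one recorded shared experience on the platform.
u = UserRecord(1, {"age": 25}, {"interest": "movies"})
u.shared_experiences.append({"type": "movie", "genre": "romance"})
```

In such a sketch, the grouping, interest-determination, and matching steps of the method would each operate over collections of these records.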
[0010] In another non-limiting embodiment of the present disclosure, the method further
comprises extracting chat history of the plurality of users and matching the at least one
couple based on the chat history of users.
[0011] In yet another non-limiting embodiment of the present disclosure, generating the video for
the at least one matched couple comprises generating a map of matching and one or more
shared experiences between the at least one matched couple, creating one or more snippets
of the map of matching and the one or more shared experiences, and combining said one or
more snippets to generate the video.
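As one non-limiting way to picture this generation step, the sketch below models the map of matching, the snippets, and their combination in plain Python; a caption string stands in for a short rendered clip, and all names are hypothetical:

```python
def build_map_of_matching(couple, shared_experiences):
    # The map records the couple and the shared experiences
    # (events, activities) that led to and followed the match.
    return {"couple": couple, "events": list(shared_experiences)}

def create_snippets(map_of_matching):
    # One snippet per recorded event; a caption stands in for a clip.
    who = " & ".join(map_of_matching["couple"])
    return [f"{who}: {event}" for event in map_of_matching["events"]]

def combine_snippets(snippets):
    # Combining snippets stands in for concatenating rendered clips
    # into the final video for the matched couple.
    return {"frames": snippets, "length": len(snippets)}

m = build_map_of_matching(("user 1", "user 4"),
                          ["watched a movie together", "played a game"])
video = combine_snippets(create_snippets(m))
```

A real implementation would render each snippet as audio-visual media before concatenation; the structure of map → snippets → combined video is the point of the sketch.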
[0012] In yet another non-limiting embodiment of the present disclosure, the generated video
comprises at least one of text data, audio data, emoji, stickers, avatars, images, and audio-visual media.
[0013] In yet another non-limiting embodiment of the present disclosure, the method further
comprises sharing the generated video with the at least one matched couple.
[0014] In yet another non-limiting embodiment of the present disclosure, the present application
discloses a system for generating a video for matched users. The system comprises a
memory unit, a transceiver, and a processing unit in communication with the memory unit.
The processing unit is configured to extract a plurality of information for users present on a
multimedia platform, the plurality of information comprising at least one of:
demographic information, profile information, and shared experiences. The processing unit
is further configured to create one or more groups among said users based on one or more
attributes of the shared experiences, process the at least one of the demographic information
and the profile information for each user within their groups to determine an interest of each
user, monitor a preference of each user with respect to different types of content available
on said multimedia platform, match at least one couple in at least one group based at least
on the determined interest and the preferences of the users, and generate a video for the at
least one matched couple.
[0015] In yet another non-limiting embodiment of the present disclosure, the processing unit is
configured to extract chat history of the plurality of users and match the at least one couple
based on the chat history of users.
[0016] In yet another non-limiting embodiment of the present disclosure, to generate the video for
the at least one matched couple, the processing unit is configured to generate a map of
matching and one or more shared experiences between the at least one matched couple,
create one or more snippets of the map of matching and one or more shared experiences,
and combine said one or more snippets to generate the video.
[0017] In yet another non-limiting embodiment of the present disclosure, the processing unit is
configured to share the generated video with the at least one matched couple.
[0018] The foregoing summary is illustrative only and is not intended to be in any way limiting.
In addition to the illustrative aspects, embodiments, and features described above, further
aspects, embodiments, and features will become apparent by reference to the drawings and
the following detailed description.
OBJECTS OF THE INVENTION:
[0019] The main object of the present invention is to match at least one couple from a plurality of
users present on a multimedia platform.
[0020] A further object of the present invention is to generate a video or animation for the at least
one matched couple on a multimedia platform.
[0021] A further object of the present invention is to increase the interaction of users on a
multimedia platform and to keep their relationship going online, thereby enhancing the
user experience.
BRIEF DESCRIPTION OF DRAWINGS:
[0022] The accompanying drawings, which are incorporated in and constitute a part of this
disclosure, illustrate exemplary embodiments and, together with the description, serve to
explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference
number identifies the figure in which the reference number first appears. The same
numbers are used throughout the figures to reference like features and components. Some
embodiments of system and/or methods in accordance with embodiments of the present
subject matter are now described, by way of example only, and with reference to the
accompanying figures, in which:
[0023] Fig. 1 illustrates an environment in which the functionalities of present application may be
implemented, in accordance with an embodiment of the present disclosure.
[0024] Fig. 2 illustrates an exemplary data flow for generating a video for matched users or
couples, in accordance with an embodiment of the present disclosure;
[0025] Fig. 3(a) illustrates a block diagram of a network for generating a video for matched users
or couples, in accordance with another embodiment of the present disclosure;
[0026] Fig. 3(b) illustrates a block diagram of a processing unit, in accordance with another
embodiment of the present disclosure;
[0027] Fig. 4 illustrates a flowchart of an exemplary method for generating a video for matched
users or couples, in accordance with an embodiment of the present disclosure.
[0028] It should be appreciated by those skilled in the art that any block diagrams herein represent
conceptual views of illustrative systems embodying the principles of the present subject
matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition
diagrams, pseudo code, and the like represent various processes which may be substantially
represented in computer readable medium and executed by a computer or processor,
whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF DRAWINGS:
[0029] In the present document, the word “exemplary” is used herein to mean “serving as an
example, instance, or illustration.” Any embodiment or implementation of the present
subject matter described herein as “exemplary” is not necessarily to be construed as
preferred or advantageous over other embodiments.
[0030] While the disclosure is susceptible to various modifications and alternative forms, specific
embodiments thereof have been shown by way of example in the drawings and will be
described in detail below. It should be understood, however, that it is not intended to limit
the disclosure to the particular forms disclosed; on the contrary, the disclosure is to
cover all modifications, equivalents, and alternatives falling within the scope of the
disclosure.
[0031] The terms “comprises”, “comprising”, “include(s)”, or any other variations thereof,
are intended to cover a non-exclusive inclusion, such that a setup, system or method
that comprises a list of components or steps does not include only those components
or steps but may include other components or steps not expressly listed or inherent to
such setup, system or method. In other words, one or more elements in a system or
apparatus preceded by “comprises… a” does not, without more constraints, preclude
the existence of other elements or additional elements in the system or apparatus.
[0032] In the following detailed description of the embodiments of the disclosure, reference
is made to the accompanying drawings that form a part hereof, and in which are shown
by way of illustration specific embodiments in which the disclosure may be practiced.
These embodiments are described in sufficient detail to enable those skilled in the art
to practice the disclosure, and it is to be understood that other embodiments may be
utilized and that changes may be made without departing from the scope of the present
disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0033] In the present document, some of the terms may be used repeatedly throughout the
disclosure. For clarity, said terms are defined below:
[0034] Emoji in the context of the present application may be defined as a set of graphical
symbols or a simple pictorial representation that represents an idea or concept,
independent of any language and specific words or phrases. In particular, an emoji may
be used to convey one's thoughts and emotions through a messaging platform without
any bar of language. Further, the terms emoji and emoticon mean more or less the same
in the context of the present application and may be used interchangeably throughout
the disclosure, without departing from the scope of the present application.
[0035] Sticker in the context of the present application may relate to an illustration which is
available or may be designed (using various applications) to be placed on or added to
a message. In simple words, a sticker is an elaborate emoticon, developed to allow more
depth and breadth of expression than what is possible by means of ‘emojis’ or
‘emoticons.’ Stickers are generally used, on digital media platforms, to quickly and
simply convey an emotion or thought. In some embodiments, the stickers may be
animated, derived from cartoon-like characters or real-life people, etc., and are often
intended to be witty, cute, irreverent or creative, but in a canned kind of way. In some
embodiments, stickers may also be designed to represent real-world events in a more
interactive and fascinating form to be shared between users on various multimedia
messaging platforms.
[0036] Avatar in the context of the present application relates to a graphical representation of a
user, the user's image/selfie, or the user's character. Thus, it may be said that an avatar
may be configured to represent an emotion/expression/feeling of the user by means of an
image converted into an avatar capturing such emotion/expression/feeling through various
facial expressions or added objects such as hearts, kisses, etc. Further, it is to be
appreciated that an avatar may take either a two-dimensional form, as an icon on
platforms such as messaging/chat platforms, or a three-dimensional form, such as in a
virtual environment. Further, the terms avatar, profile picture, and userpic mean the same
in the context of the present application and may be used interchangeably throughout the
disclosure without departing from the scope of the present application.
[0037] The term virtual world in the context of the present application may refer to a computer-simulated
environment, wherein said environment may represent a real or fictitious
world governed by rules of interaction. In other words, virtual world may refer to a
simulated environment where a user may be able to make changes in the virtual
environment as per his/her choice and is allowed to interact within such environment
via his/her avatar. In particular, users in the virtual world may appear on a platform
in the form of representations referred to as avatars. The degree of interaction between
the avatars and the simulated environment may be implemented by one or more
applications that govern such interactions as simulated physics, exchange of
information between users, and the like. In an exemplary embodiment, the terms virtual
world, virtual environment, and virtual reality may be used interchangeably without
departing from the scope of the present application.
[0038] The term shared platform in the context of the present application may refer to a
multimedia communication platform that is common for all the users and can be
accessed by one or more users simultaneously at any given point of time. Said shared
platform may be resident on the user device in the form of an application or widget
and may remain connected to a central server, wherein it is said server that allows
multiple users to gain access to the shared platform at any given point of time. It is to
be appreciated that the shared platform may remain connected to the central server via
web presence. In an exemplary embodiment, the shared platform discussed in the present
application allows multiple users, from the comfort of their places, to come on a single
platform to perform numerous activities together that may be of interest to each other.
Thus, a shared platform is one that allows multiple users to come on a single platform
and have a shared experience.
[0039] The term shared experience in the context of the present application may refer to an
experience that two or more users may go through when performing an activity of
common interest on the shared platform. In an illustrative example, the activities that
may be performed on the shared platform may include, but are not limited to, reading a
book together, watching a movie, playing a game, singing a song, chatting, gardening
together, cooking together, etc. Further, it is to be appreciated that the terms common
experience and shared experience may be used interchangeably throughout the present
disclosure.
[0040] Hike-land may be a virtual-reality platform that allows users to have an immersive
experience of their self-created virtual-world through their avatars. Hike-land may
also be referred to as a virtual-reality platform that allows users to gain access to
other users' virtual-worlds, with their consent. Further, Hike-land may be a virtual-reality
platform that allows different users to come on a single platform and have a collegial
experience while performing activities of common interest.
[0041] Offline location in the context of the present application may relate to a real location that
may facilitate a user to have an in-person, immersive experience of the virtual-world
created by him/her or by some other user. To make this possible, the offline location
may remain connected to the base virtual-world present at the server, via web presence.
Further, to make the user experience more realistic, the offline location may be
equipped with various sensors and hardware that create sensations, such as movements
of body parts, heat, water, snowfall, wind, temperature, etc., for the user experiencing
the virtual-world.
[0042] Fig. 1 illustrates an environment 100 in which the functionalities of present
application may be implemented, in accordance with an embodiment of the present
disclosure.
[0043] In an embodiment of the present disclosure, the environment 100 may comprise a
number of users present on a multimedia platform 110. The users may perform one or
more activities together on the multimedia platform 110. The one or more activities
may comprise the activities of the shared experiences discussed above. The
multimedia platform 110 may store all the information associated with the users. In
one non-limiting embodiment of the present disclosure, only the shared experiences
of the user within a predetermined time period are considered for processing.
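The predetermined-time-period restriction mentioned above might be implemented as a simple filter over timestamped experiences; the timestamp field and the 30-day window below are assumptions for illustration only:

```python
from datetime import datetime, timedelta

def recent_experiences(shared_experiences, now, window_days=30):
    # Keep only the shared experiences whose timestamp falls within
    # the predetermined time period (a trailing window of days).
    cutoff = now - timedelta(days=window_days)
    return [e for e in shared_experiences if e["timestamp"] >= cutoff]

now = datetime(2024, 6, 1)
experiences = [
    {"activity": "watched a movie", "timestamp": datetime(2024, 5, 20)},
    {"activity": "played a game",   "timestamp": datetime(2024, 1, 5)},
]
kept = recent_experiences(experiences, now)  # only the recent movie remains
```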
[0044] In an embodiment of the present disclosure, the information may comprise profile
data, demographic details of the user, details of various events attended by the users on
the multimedia platform 110, and peer-to-peer or group chat history. The users present on
the multimedia platform 110 may be divided into a number of groups based on one or
more attributes of the shared experiences. For example, group-1 may comprise user
1, user 2, user 3, and user 4, and group-2 may comprise user 5, user 6, user 7, and user
8, as shown in fig. 1.
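The division into groups might be sketched, in a non-limiting way, as grouping by an attribute of each user's shared experiences; the keys below are hypothetical:

```python
from collections import defaultdict

def create_groups(users):
    # Group user ids by (experience type, attribute) pairs, e.g. all
    # users who watch romance movies together land in one group.
    groups = defaultdict(list)
    for user_id, experiences in users.items():
        for exp in experiences:
            groups[(exp["type"], exp["attribute"])].append(user_id)
    return dict(groups)

users = {
    "user 1": [{"type": "movie", "attribute": "romance"}],
    "user 2": [{"type": "movie", "attribute": "romance"}],
    "user 5": [{"type": "game", "attribute": "sports"}],
    "user 6": [{"type": "game", "attribute": "sports"}],
}
groups = create_groups(users)
```

A user with several shared experiences would naturally appear in several groups under this scheme, which is consistent with grouping by experience attributes rather than by user.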
[0045] In an embodiment of the present disclosure, the information associated with the user
may be further processed to determine the interest of the user. The activities of the
user are continuously monitored on the multimedia platform 110 to determine the
preferences of the user with respect to different types of content. Then, the interest
and preference of each user are considered to form or match couples within each
group. For example, users 1 and 4 may match and form couple-1, and users 6 and
7 may match and form couple-2. In one non-limiting embodiment, the chat history
may also be used for forming or matching couples within the group.
[0046] In an embodiment of the present disclosure, a map of matching and shared experiences
may be created for each of the couples within the groups. The map of matching may
comprise the activities or interactions due to which the match happened. The shared
experiences may comprise the experiences or activities that the matched users have
done together before matching in the virtual world. In one non-limiting embodiment,
the shared experiences may also comprise the experiences or activities that the matched
users have done together after matching on the multimedia platform 110.
[0047] The map of matching and the shared experiences may be created in the form of
snippets of various events, activities, and shared experiences done together. The
snippets may then be compiled to generate a video or an animation. The video may be
shared with each of the matched couples. Thus, the generation of the video may facilitate
enhanced interaction of users on the multimedia platform 110 and keep their
relationship going online, thereby enhancing the user experience.
[0048] In an exemplary embodiment of the present disclosure, the multimedia platform 110
comprises a number of users. The users participate in various shared experiences
together during their lifecycle on the multimedia platform. The information of the users
is extracted from the multimedia platform. The extracted information comprises the
demographic information, profile information, shared experiences, and chat history of
the users.
[0049] The users are then classified into groups based on the attributes of the shared
experiences, such as watching a movie, playing a game, augmented reality, or virtual
reality. For example, users of group-1 usually watch Bollywood romantic movies, while
users of group-2 usually play soccer or sports-related games on the multimedia
platform 110.
[0050] After classification into groups, the demographic information and profile information
of the users are processed to determine the interests of the users with respect to the
shared experiences in which they participated. The activities and chat history of the
users are also monitored to determine the user-specific preference with respect to
various types of content. For example, in group-1, users 2 and 3 watch movies only
for 10-15 minutes while surfing the content on the multimedia platform, while users
1 and 4 watch movies quite often and have an interest in movies as per their profile
information. Also, users 1 and 4 are of the same age category.
[0051] Thus, users 1 and 4 are matched based on the user interest and user-specific preference
with respect to a particular type of content. Similarly, users 6 and 7 of group-2
spend most of their time on the multimedia platform playing games, and the profile
information of users 6 and 7 also indicates that the users have an interest in playing
games. Hence, users 6 and 7 are matched based on their interests and preferences.
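The matching illustrated by the example above (users 1 and 4 both watch movies often and list movies as an interest, unlike users 2 and 3 who only surf briefly) might be sketched as follows; the field names and the 30-minute engagement threshold are assumptions, not part of the disclosure:

```python
from itertools import combinations

def match_couples(group, min_watch_minutes=30):
    # Pair users whose declared interest agrees and whose monitored
    # viewing time indicates a genuine preference for that content.
    couples, taken = [], set()
    for a, b in combinations(group, 2):
        if a["id"] in taken or b["id"] in taken:
            continue
        same_interest = a["interest"] == b["interest"]
        both_engaged = (a["watch_minutes"] >= min_watch_minutes
                        and b["watch_minutes"] >= min_watch_minutes)
        if same_interest and both_engaged:
            couples.append((a["id"], b["id"]))
            taken.update((a["id"], b["id"]))
    return couples

group_1 = [
    {"id": 1, "interest": "movies", "watch_minutes": 120},
    {"id": 2, "interest": "movies", "watch_minutes": 12},   # surfs 10-15 min
    {"id": 3, "interest": "movies", "watch_minutes": 15},
    {"id": 4, "interest": "movies", "watch_minutes": 90},
]
couples = match_couples(group_1)  # users 1 and 4 form couple-1
```

A production system would weigh demographics, chat history, and many preference signals together; this sketch only shows how a shared interest plus a monitored engagement threshold can yield the pairing described in the example.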
[0052] A map of matching and shared experiences may be created for each of the couples
within the groups. A video or animation may then be created based on the map of
matching and shared experiences. The video may comprise a snippet of a movie they
watched together, and of various events and activities in which they participated together.
The video or the animation may be shared with the matched users to facilitate enhanced
interaction of users on the multimedia platform 110.
[0053] Fig. 2 illustrates an exemplary data flow for generating a video for matched users or
couples, in accordance with an embodiment of the present disclosure.
[0054] In an embodiment of the present disclosure, a system 210 may extract information
from the multimedia platform. The information of the users may comprise at least one
of the types of information discussed above. The system 210 may then create groups of two
or more users based on the attributes of the shared experiences. In one non-limiting
embodiment of the present disclosure, the system may only consider the shared
experiences of the user within a predetermined time period.
[0055] The system 210 may process the extracted information of the users within each group
to determine the interest of the users. The system 210 may also monitor activities of
the users on the multimedia platform. The system 210 may then match one or more
users within the group based on the interest of the users and the activities of the users
on the multimedia platform.
[0056] In an embodiment of the present disclosure, the system 210 may determine a map of
matching and shared experiences of each matched couple based on the extracted
information and feed it to a video generation unit 220 of the system 210. The video
generation unit 220 may be configured to generate a video or animation for each of the
matched couples based on the map of matching and the shared experiences.
[0057] In an embodiment of the present disclosure, the video or animation may comprise
snippets of various milestones or significant events of the couple's relationship, or
snippets of the first of every shared experience they have done together on the virtual
platform. The video or the animation may comprise one or more of the following, but is
not limited to: text data, audio data, emoji, stickers, avatars, images, audio-visual media, etc.
[0058] The system 210 may then share the generated video or animation with the respective
matched couple. In one non-limiting embodiment, the matched users or couple may
share the video or animation with their respective connections in the virtual world.
Thus, the generated video may facilitate enhanced interaction of users on the
multimedia platform and keep their relationship going online, thereby enhancing
the user experience.
[0059] Fig. 3(a) illustrates a block diagram of a network 300 for generating a video for
matched users or couples and fig. 3(b) illustrates a block diagram of a processing unit
305, in accordance with an embodiment of the present disclosure.
[0060] According to an embodiment of the present disclosure, the network 300 may include one
or more of, but not limited to, the internet, a local area network, a wide area
network, a peer-to-peer network, and/or other similar technologies for connecting
various entities as discussed below. In an aspect, various elements/entities, such as a
system 310 and user devices 320 and 330 of the network 300 as shown in fig. 3(a),
may communicate within the network 300 through web presence. In fig. 3(a), only
two user devices 320 and 330 are shown for the sake of ease; this should not be
construed as limiting the scope, and multiple user devices may be connected to the
system 310.
[0061] The user devices 320 and 330 may be operated by users for communications. In one
non-limiting embodiment, the user devices 320 and 330 may be operated by users to
interact or communicate in a virtual environment/platform or any other multimedia
platform. The system 310 may remain operatively coupled to one or more user devices
320, 330 to receive and process the communications or interactions received from the
user devices.
[0062] The user devices 320 and 330 may represent desktop computers, laptop computers,
mobile devices (e.g., smartphones or personal digital assistants), tablet devices, or
other types of computing devices which have computing, messaging, and networking
capabilities. The user devices 320 and 330 may be equipped with one or more
computer storage devices (e.g., RAM, ROM, PROM, SRAM, etc.), a communication
unit, and one or more processing devices (e.g., central processing units) that are
capable of executing computer program instructions.
[0063] According to an exemplary embodiment, the communication may be in the form of an
exchange of one or more of the following, but not limited to: text, audio, video, emoji,
stickers, animations, images, audio-visual media, etc.
[0064] In an embodiment of the present disclosure, the system 310 may comprise a memory
unit 301, a transceiver 303, a processing unit 305, and an input/output (I/O) interface 307.
The I/O interface 307 may include a variety of software and hardware interfaces, for
example, a web interface, a graphical user interface, an input device, an output device, and
the like. The I/O interface 307 may allow the system 310 to interact with the user
directly or through other devices. The memory unit 301 may be communicatively
coupled to the processing unit 305.
[0065] Further, the memory unit 301 may store data which comprises a plurality of information
associated with the users on a multimedia platform. The plurality of information may
include the following, but is not limited to: demographic information of each user, profile
information of each user, and shared experiences of each user. In one non-limiting
embodiment of the present disclosure, the plurality of information may also include
peer-to-peer chat history, group chat history, or the type of media shared between the
users within a group.
[0066] In an embodiment of the present disclosure, the memory unit 301 may include any
computer-readable medium known to a person skilled in the art including, for
example, volatile memory, such as static random access memory (SRAM) and
dynamic random access memory (DRAM), and/or non-volatile memory, such as read
only memory (ROM), erasable programmable ROM, flash memories, hard disks,
optical disks, and magnetic tapes.
[0067] In an embodiment of the present disclosure, the data may be stored within the memory
unit 301 in the form of various data structures. Additionally, the data may be organized
using data models, such as relational or hierarchical data models or lookup tables. The
other data may store data, including temporary data and temporary files, generated by
the processing unit 305 for performing the various functions of the system 310. The
processing unit 305 may comprise at least one processor 309, a video generation unit
311, and a memory 313 in communication with each other.
[0068] As used herein, the term ‘units’ refers to an application specific integrated circuit
(ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory
that execute one or more software or firmware programs, a combinational logic circuit,
and/or other suitable components that provide the described functionality. In an
embodiment, the other units may be used to perform various miscellaneous
functionalities of the system 310. It will be appreciated that such units (301, 305) may
be represented as a single unit or a combination of different units.
[0069] In an embodiment of the present disclosure, the processing unit 305 may be
configured to extract a plurality of information for users present on a multimedia
platform. The plurality of information comprises at least one of: demographic
information, profile information, and shared experiences.
[0070] The demographic information of the user may comprise at least one of the following,
but not limited to: age, race, ethnicity, gender, marital status, income, education, and
employment of each of the users. The profile information may comprise details
provided by the user in a question-and-answer format while creating the profile.
[0071] The shared experiences may include events attended or activities performed by the users during their life cycle on the multimedia platform, as discussed above. In one non-limiting embodiment of the present disclosure, the processing unit 305 may only extract the shared experiences of the user that happened within a predetermined time period. The shared experiences may comprise watching a movie together, reading a book together, gardening, playing a game, making a house, keeping a pet, etc.
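The extraction step above can be sketched as follows. This is a minimal, illustrative Python sketch; the record fields and the 30-day window are assumptions for illustration and are not mandated by the disclosure.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical record holding the "plurality of information" described
# above; field names are illustrative, not taken from the disclosure.
@dataclass
class UserRecord:
    user_id: str
    demographics: dict                  # e.g. {"age": 29, "marital_status": "single"}
    profile_answers: dict               # question -> answer pairs from profile creation
    shared_experiences: list = field(default_factory=list)  # (date, kind, attrs) tuples

def recent_experiences(record, window_days=30, today=None):
    """Keep only the shared experiences within a predetermined time period."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return [e for e in record.shared_experiences if e[0] >= cutoff]
```

A record filtered this way would feed only the user's recent shared experiences into the grouping step that follows.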
[0072] In an embodiment of the present disclosure, the processing unit 305 may be configured
to create one or more groups among said users based on one or more attributes of the
shared experiences. In an exemplary scenario, the shared experience may be a movie and the one or more attributes associated with the movie may be at least one of, but not limited to, a genre of the movie, a language of the movie, actors present in the movie, and a time duration of the movie.
[0073] In another exemplary embodiment, the shared experience may be a game played on the multimedia platform together and the attributes associated with the game may be at least one of, but not limited to, action games, action-adventure games, adventure games, role-playing games, simulation games, strategy games, sports games, puzzle games, idle games, etc.
[0074] The processing unit 305 may be configured to filter the users at group level based on
the types of shared experience and the attributes associated with the shared experience.
For example, the processing unit 305 may create a first group of users who like watching Bollywood romantic movies and a second group of users who like watching Hollywood action movies. A number of groups may be created based on the type of shared experience and the attributes associated with the shared experience.
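One possible reading of the group-creation step is keying users by the experience type together with its attributes; the sketch below assumes this keying scheme, which is an interpretation rather than the disclosed implementation.

```python
from collections import defaultdict

def create_groups(user_experiences):
    """Group users by (experience kind, attributes), e.g. Bollywood romance vs.
    Hollywood action. user_experiences: list of (user_id, kind, attrs) tuples."""
    groups = defaultdict(set)
    for user_id, kind, attrs in user_experiences:
        # Sort attribute items so that equivalent attribute dicts map to one key.
        key = (kind,) + tuple(sorted(attrs.items()))
        groups[key].add(user_id)
    return dict(groups)
```

Each resulting key identifies one group, so the number of groups grows with the variety of experience types and attributes, as the paragraph above describes.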
[0075] In an embodiment of the present disclosure, the processing unit 305 may be configured
to process the at least one of the demographic information and the profile information
for each user within their groups to determine the interest of each user. The processing of the demographic information and the profile information may facilitate analyzing the age group and present status of the user and the users' interest in different types of shared experiences.
[0076] The processing unit 305 may then be configured to monitor the preference of each user with respect to different types of content available on the multimedia platform. The system 310 may continuously monitor activities of the user devices 320 and 330 to determine the type of content preferred by the respective user of the user devices 320 and 330. The activities of the user may be used to determine the primary and secondary behavior of the user on the multimedia platform.
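The "primary and secondary behavior" mentioned above can be read as a two-level ranking of observed content types; the sketch below assumes that interpretation, with simple frequency counts standing in for whatever monitoring the platform actually performs.

```python
from collections import Counter

def primary_secondary(activities):
    """Infer a user's primary and secondary content preference from a stream
    of monitored activity labels (content-type strings)."""
    counts = Counter(activities)
    ranked = [kind for kind, _ in counts.most_common(2)]
    primary = ranked[0] if ranked else None
    secondary = ranked[1] if len(ranked) > 1 else None
    return primary, secondary
```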
[0077] The processing unit 305 may then be configured to match at least one couple within the created groups based at least on the determined interest and the preferences of the users. In one non-limiting embodiment of the present disclosure, the processing unit
305 may be configured to match the at least one couple further based on the chat
history of the users. The chat history may be peer-to-peer chat history or group chat
history. The chat history may also comprise media content shared between the users
during the chat.
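A minimal matching sketch under stated assumptions: pairs within one group are scored by overlap of determined interests and monitored preferences, with an optional boost for pairs that already have chat history. The scoring weights and the 0.5 boost are invented for illustration.

```python
from itertools import combinations

def match_couples(group, interests, preferences, chatted_pairs=frozenset()):
    """group: iterable of user ids; interests/preferences: id -> set of labels;
    chatted_pairs: set of frozenset({a, b}) pairs with existing chat history."""
    scored = []
    for a, b in combinations(sorted(group), 2):
        score = len(interests[a] & interests[b]) + len(preferences[a] & preferences[b])
        if frozenset((a, b)) in chatted_pairs:
            score += 0.5  # small boost for users who already chat
        scored.append((score, a, b))
    scored.sort(reverse=True)  # best-scoring pairs first
    return [(a, b) for score, a, b in scored if score > 0]
```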
[0078] In an embodiment of the present disclosure, the video generation unit 311 of the processing unit 305 may be configured to generate a video or an animation for the at least one matched couple. For the generation of the video for the at least one matched couple, the processing unit is configured to: generate a map of matching and one or more shared experiences between the at least one matched couple, create one or more snippets of the map of matching and one or more shared experiences, and combine said one or more snippets to generate the video.
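The three generation steps named above can be sketched as a small pipeline. Snippets are modeled as plain dicts standing in for rendered media clips; a real video generation unit would render and concatenate actual footage, so this is only a structural sketch.

```python
def generate_video(match_events, shared_experiences):
    """Build the map of matching (interactions that led to the match plus
    shared experiences), turn each entry into a snippet, and combine the
    snippets in order to stand in for the final video timeline."""
    map_of_matching = list(match_events) + list(shared_experiences)
    snippets = [{"caption": event, "order": i}
                for i, event in enumerate(map_of_matching)]
    return {"timeline": snippets, "duration": len(snippets)}
```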
[0079] In an embodiment of the present disclosure, the map of matching may comprise the activities or interactions due to which the match happened. The one or more shared experiences may comprise the shared experiences or activities that the matched users have done together before the matching on the multimedia platform. In one non-limiting embodiment, the generated video may comprise shared experiences or activities that the matched users have done after the matching.
[0080] The video or animation may comprise snippets of various events, activities, and
shared experiences done together. The video or the animation may comprise one or more of the following, but not limited to: text data, audio data, emoji, stickers, avatars, images, audio-visual media, etc. In one non-limiting embodiment, the video or
animation may comprise the snippets of various milestones or significant events of
their relationship or the snippets of first of every shared experience they have done
together on the multimedia platform.
[0081] In one non-limiting embodiment of the present disclosure, the processing unit 305
may share the video or animation with respective matched users or couples via the
transceiver 303. The processing unit 305 may also be configured to share the video or
animation with the connections of the matched users or couples on the multimedia
platform. In one non-limiting embodiment, the matched user or couple may share the
video or animation with their respective connections in the multimedia platform or
virtual world.
[0082] In an embodiment of the present disclosure, the matched users may customize or
update the video or animation as per their choice. In one non-limiting embodiment,
the processing unit 305 may receive a preference of the respective user and the
processing unit 305 may customize the video or the animation based on the preference
of the matched user. For example, the user may only want to have a video, animation, or story of various milestones of their relationship. The processing unit 305 may only generate the video or animation comprising the milestones in such a scenario.
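The customization step above can be sketched as a filter over the generated timeline. The "milestone" tag and the preference value are hypothetical names introduced here for illustration only.

```python
def customize(timeline, preference):
    """Filter the snippet timeline by a matched user's stated preference;
    e.g. keep only milestone snippets when the preference is 'milestones'."""
    if preference == "milestones":
        return [s for s in timeline if s.get("milestone")]
    return timeline
```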
[0083] Thus, the system 310 may facilitate enhanced interaction of users on the multimedia
platform and keeping their relationship going online, thereby enhancing the user
experience.
[0084] Fig. 4 illustrates a flowchart of an exemplary method 400 for generating a video for
matched users or couples, in accordance with an embodiment of the present disclosure.
[0085] At step 401, the method 400 discloses extracting plurality of information for users
present on a multimedia platform. The plurality of information comprises at least one
of: demographic information, profile information, and shared experiences. The
demographic information of the user may comprise at least one of, but not limited to, age, race, ethnicity, gender, marital status, income, education, and employment of each user. The profile information may comprise details provided by the user in question-and-answer format while creating the profile.
[0086] The shared experiences may include events attended or activities performed by the users during their life cycle on the multimedia platform, as discussed above. In one non-limiting embodiment of the present disclosure, the extracted shared experiences may only be recent shared experiences of the user which happened within a predetermined time period. The shared experiences may comprise watching a movie together, reading a book together, gardening, playing a game, making a house, keeping a pet, etc.
[0087] At step 403, the method 400 discloses creating one or more groups among said users
based on one or more attributes of the shared experiences. In an exemplary scenario, the shared experience may be a movie and the one or more attributes associated with the movie may be at least one of, but not limited to, a genre of the movie, a language of the movie, actors present in the movie, and a time duration of the movie.
[0088] In another exemplary embodiment, the shared experience may be a game played on the multimedia platform together and the attributes associated with the game may be at least one of, but not limited to, action games, action-adventure games, adventure games, role-playing games, simulation games, strategy games, sports games, puzzle games, idle games, etc.
[0089] The users present on the multimedia platform may be filtered at group level based on
the types of shared experience and the attributes associated with the shared experience.
For example, a first group of users may like watching Bollywood romantic movies and a second group of users may like watching Hollywood action movies. A number of such groups may be created based on the type of shared experience and the attributes associated with the shared experience.
[0090] At step 405, the method 400 discloses processing the at least one of the demographic
information and the profile information for each user within their groups to determine
the interest of each user. The processing of the demographic information and the profile information may facilitate analyzing the age group and present status of the user and the users' interest in different types of shared experiences.
[0091] At step 407, the method 400 discloses monitoring preference of each user with respect
to different types of content available on the multimedia platform. The user device
activities may be continuously monitored for determining the type of content preferred
by the respective user. The activities of the user may be used to determine the primary and secondary behavior of the user on the multimedia platform.
[0092] At step 409, the method 400 discloses matching at least one couple within the created
groups based at least on the determined interest and the preferences of the users. In one non-limiting embodiment of the present disclosure, the at least one couple may further be matched based on the chat history of the users. The chat history may be peer-to-peer chat history or group chat history. The chat history may also comprise media content shared between the users during the chat.
[0093] At step 411, the method 400 discloses generating a video or an animation for the at
least one matched couple. The step of generating comprises generating a map of
matching and one or more shared experiences between the at least one matched
couple, creating one or more snippets of the map of matching and one or more shared
experiences, and combining the one or more snippets to generate the video.
[0094] In an embodiment of the present disclosure, the map of matching may comprise the activities or interactions due to which the match happened. The one or more shared experiences may comprise the shared experiences or activities that the matched users have done together before the matching on the multimedia platform. In one non-limiting embodiment, the generated video may comprise shared experiences or activities that the matched users have done after the matching.
[0095] The video or animation may comprise snippets of various events, activities, and shared experiences done together. The video or the animation may comprise one or more of the following, but not limited to: text data, audio data, emoji, stickers, avatars, images, audio-visual media, etc.
[0096] In one non-limiting embodiment, the video or animation may comprise the snippets
of various milestones or significant events of their relationship or the snippets of first
of every shared experience they have done together on the multimedia platform.
[0097] In an embodiment of the present disclosure, the method 400 discloses sharing the
video or animation with respective matched users or couples. The video or animation
may be shared with the connections of the matched users or couples on the multimedia
platform based on user permission. In one non-limiting embodiment, the matched user
or couple may share the video or animation with their respective connections in the
multimedia platform or virtual world.
[0098] In an embodiment of the present disclosure, the matched users may customize or
update the video or animation as per their choice. In one non-limiting embodiment, a
preference of the respective user may be received and the video or the animation may
be customized based on the preference of the matched users. For example, the user may only want to have a video, animation, or story of various milestones of their relationship. The processing unit 305 may only generate the video or animation comprising the milestones in such a scenario.
[0099] Thus, the method 400 may facilitate enhanced interaction of users on the multimedia
platform and keeping their relationship going online, thereby enhancing the user
experience. In one non-limiting embodiment of the present disclosure, a server may perform all the steps of the method 400. In another non-limiting embodiment of the present disclosure, the steps of the method 400 may be performed in an order different from the order described above.
[00100] The illustrated steps are set out to explain the exemplary embodiments shown,
and it should be anticipated that ongoing technological development will change the
manner in which particular functions are performed. These examples are presented
herein for purposes of illustration, and not limitation. Further, the boundaries of the
functional building blocks have been arbitrarily defined herein for the convenience of
the description. Alternative boundaries can be defined so long as the specified
functions and relationships thereof are appropriately performed. Alternatives
(including equivalents, extensions, variations, deviations, etc., of those described
herein) will be apparent to persons skilled in the relevant art(s) based on the teachings
contained herein. Such alternatives fall within the scope and spirit of the disclosed
embodiments. Also, the words “comprising,” “having,” “containing,” and
“including,” and other similar forms are intended to be equivalent in meaning and be
open ended in that an item or items following any one of these words is not meant to
be an exhaustive listing of such item or items or meant to be limited to only the listed
item or items. It must also be noted that as used herein and in the appended claims,
the singular forms “a,” “an,” and “the” include plural references unless the context
clearly dictates otherwise.
[00101] Although the present invention has been described in considerable detail with
reference to figures and certain preferred embodiments thereof, other versions are
possible. Therefore, the spirit and scope of the present invention should not be limited
to the description of the preferred versions contained herein.
[00102] Reference Numerals
100 Environment
110 Multimedia Platform
200 Data flow diagram
210 System
220 Video generation unit
300 Network
310 System
301 Memory unit
303 Transceiver
305 Processing unit
307 Input/output interface
309 One or more processors
311 Video generation unit
313 Memory
320 User device
330 User device
400 Method
We Claim:
1. A method for generating a video for matched users, the method comprising:
extracting plurality of information for users present on a multimedia platform, wherein the
plurality of information comprises at least one of: demographic information, profile information,
and shared experiences;
creating one or more groups among said users based on one or more attributes of the shared
experiences;
processing the at least one of the demographic information and the profile information for
each user within their groups to determine interest of each user;
monitoring preference of each user with respect to different types of content available on
said multimedia platform;
matching at least one couple in at least one group based at least on the determined interest
and the preferences of the users; and
generating video for the at least one matched couple.
2. The method as claimed in claim 1, further comprising:
extracting chat history of the plurality of users; and
matching the at least one couple based on the chat history of users.
3. The method as claimed in claim 1, wherein generating the video for the at least one matched
couple comprises:
generating a map of matching and one or more shared experiences between the at least one
matched couple;
creating one or more snippets of the map of matching and one or more shared experiences;
and
combining said one or more snippets to generate the video.
4. The method as claimed in claim 1, wherein the generated video comprises text data, audio data, emoji, stickers, avatars, images, and audio-visual media.
5. The method as claimed in claim 1, further comprising:
sharing the generated video with the at least one matched couple.
6. A system for generating a video for matched users, the system comprising:
a memory unit;
a transceiver; and
a processing unit in communication with the memory unit, and configured to:
extract plurality of information for users present on a multimedia platform, wherein
the plurality of information comprises at least one of: demographic information, profile
information, and shared experiences;
create one or more groups among said users based on one or more attributes of the
shared experiences;
process the at least one of the demographic information and the profile information
for each user within their groups to determine interest of each user;
monitor preference of each user with respect to different types of content available
on said multimedia platform;
match at least one couple in at least one group based at least on the determined
interest and the preferences of the users; and
generate video for the at least one matched couple.
7. The system as claimed in claim 6, wherein the processing unit is configured to:
extract chat history of the plurality of users; and
match the at least one couple based on the chat history of users.
8. The system as claimed in claim 6, wherein to generate the video for the at least one matched
couple, the processing unit is configured to:
generate a map of matching and one or more shared experiences between the at least one
matched couple;
create one or more snippets of the map of matching and one or more shared experiences;
and
combine said one or more snippets to generate the video.
9. The system as claimed in claim 6, wherein the generated video comprises text data, audio data, emoji, stickers, avatars, images, and audio-visual media.
10. The system as claimed in claim 6, wherein the processing unit is configured to:
share the generated video with the at least one matched couple.
| # | Name | Date |
|---|---|---|
| 1 | 202011016066-STATEMENT OF UNDERTAKING (FORM 3) [14-04-2020(online)].pdf | 2020-04-14 |
| 2 | 202011016066-PROVISIONAL SPECIFICATION [14-04-2020(online)].pdf | 2020-04-14 |
| 3 | 202011016066-POWER OF AUTHORITY [14-04-2020(online)].pdf | 2020-04-14 |
| 4 | 202011016066-FORM 1 [14-04-2020(online)].pdf | 2020-04-14 |
| 5 | 202011016066-DRAWINGS [14-04-2020(online)].pdf | 2020-04-14 |
| 6 | 202011016066-DECLARATION OF INVENTORSHIP (FORM 5) [14-04-2020(online)].pdf | 2020-04-14 |
| 7 | 202011016066-Proof of Right [25-09-2020(online)].pdf | 2020-09-25 |
| 8 | 202011016066-DRAWING [14-04-2021(online)].pdf | 2021-04-14 |
| 9 | 202011016066-CORRESPONDENCE-OTHERS [14-04-2021(online)].pdf | 2021-04-14 |
| 10 | 202011016066-COMPLETE SPECIFICATION [14-04-2021(online)].pdf | 2021-04-14 |
| 11 | abstract.jpg | 2021-10-18 |
| 12 | 202011016066-FORM 18 [13-03-2024(online)].pdf | 2024-03-13 |
| 13 | 202011016066-FER.pdf | 2025-06-04 |
| 14 | 202011016066-FORM 3 [16-07-2025(online)].pdf | 2025-07-16 |
| 1 | 202011016066searchE_11-12-2024.pdf | |