Abstract: Disclosed herein is an apparatus and method for dynamically generating visual content on a communication device. The apparatus monitors one or more conversations between two or more users. It applies a Natural language processing (NLP) technique on the conversation data to determine a plurality of vectors. It generates one or more visual contents based on the plurality of vectors and ranks the generated visual contents based on one or more predefined conditions.
The present disclosure relates to generation of visual content on a mobile device, and more particularly to a method and apparatus for dynamically generating visual content on a communication device.
BACKGROUND
[0002] In general, visual content such as emojis (i.e. images, graphical symbols, or ideograms) are typically used in electronic messages and communications to convey emotions, thoughts, or ideas. Emoji are available for use through a variety of digital devices (e.g., mobile telecommunication devices and tablet computing devices) and are often used when drafting personal e-mails, posting messages on the Internet (e.g., on a social networking site or a web forum), and messaging between mobile devices.
[0003] The number of emoji a user can choose from has grown vastly in recent years. There are emoji available for almost every subject matter imaginable. Due to the expansion in number, usage, availability, and variety of emoji, it can be time consuming, and sometimes overwhelming, for users to browse through and select appropriate emoji for a given context when participating in emoji-applicable computing activities.
[0004] Also, users rely on the availability of emojis on the platform over which they are communicating. The platforms, in turn, depend on an external content generation server to obtain these emojis. In many conversation scenarios, the best-suited visual content may not be available on the platform at all, or only a few options, if any, may be available.
[0005] Hence, there is a need in the art for techniques which may generate visual contents on the device and may recommend the generated visual content to the user.
OBJECT OF THE DISCLOSURE
[0006] An object of the present disclosure is to generate visual content on a communication device.
[0007] Another object of the present disclosure is to recommend the visual content to a user.
SUMMARY
[0008] The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages discussed throughout the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
[0009] In an embodiment of the present disclosure, a method for dynamically generating visual content on a communication device is disclosed. The method comprises monitoring one or more conversations between two or more users and applying a Natural language processing (NLP) technique on conversation data to determine a plurality of vectors. The method further comprises generating one or more visual contents based on the plurality of vectors and ranking the generated visual contents based on one or more predefined conditions.
[0010] In yet another embodiment of the present disclosure, the method further comprises extracting the conversation data from the monitored one or more conversations, wherein the conversation data comprises messages and corresponding responses captured during the one or more conversations, and displaying the visual contents with their ranking to the user.
[0011] In still another embodiment of the present disclosure, the plurality of vectors are determined by: applying the Natural language processing (NLP) technique upon the conversation data to determine a context parameter; mapping the conversation data to an expression or emotion to determine an emotion parameter; determining an identity parameter of the user based on the conversation data; and converting the parameters into the plurality of vectors.
[0012] In yet another embodiment of the present disclosure, the one or more predefined conditions include at least one of past user preference, conversation context, linguistic domain and demographic information.
[0013] In still another embodiment of the present disclosure, the plurality of vectors include at least one of a context vector, an identity vector, an emotion vector, or a combination thereof, and the plurality of vectors are determined based on at least one of user data, conversation context, past generated vectors, or a combination thereof.
[0014] In yet another embodiment of the present disclosure, the method further comprises generating the visual content in real time, wherein the visual content is a combination of an image and text.
[0015] In another embodiment of the present disclosure, an apparatus for dynamically generating visual content on a communication device is disclosed. The apparatus comprises a visual content generator to monitor one or more conversations between two or more users and apply a Natural language processing (NLP) technique on conversation data to determine a plurality of vectors. The visual content generator also generates one or more visual contents based on the plurality of vectors. The apparatus also comprises a ranking unit to rank the generated visual contents based on one or more predefined conditions.
[0016] In yet another embodiment of the present disclosure, the visual content generator is configured to: extract the conversation data from the monitored one or more conversations, wherein the conversation data comprises messages and corresponding responses captured during the one or more conversations; and display the visual contents with their ranking to the user.
[0017] In still another embodiment of the present disclosure, the visual content generator is configured to: apply the Natural language processing (NLP) technique upon the conversation data to determine a context parameter; map the conversation data to an expression or emotion to determine an emotion parameter; determine an identity parameter of the user based on the conversation data; and convert the parameters into the plurality of vectors.
[0018] In yet another embodiment of the present disclosure, the visual content generator determines the plurality of vectors based on at least one of user data, conversation context, past generated vectors, or a combination thereof.
[0019] In still another embodiment of the present disclosure, the visual content generator generates the visual content in real time, and wherein the visual content is a combination of an image and text.
[0020] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of apparatus and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0022] Figure 1 shows an example of a chat between two users using visual content, in accordance with the prior art;
[0023] Figure 2 shows a block diagram 200 illustrating an apparatus for dynamically generating visual content on a communication device, in accordance with an embodiment of the present disclosure; and
[0024] Figure 3 shows a method 300 for dynamically generating visual content on a communication device, in accordance with an embodiment of the present disclosure.
[0025] The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
[0026] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0027] In the present document some of the terms may be used repeatedly throughout the disclosure. For clarity said terms are illustrated below:
[0028] Emoji, in the context of the present application, may be defined as a set of graphical symbols or a simple pictorial representation that represents an idea or concept, independent of any language and specific words or phrases. In particular, emoji may be used to convey one's thoughts and emotions through a messaging platform without any language barrier. Further, the terms emoji and emoticon mean more or less the same in the context of the present application and may be used interchangeably throughout the disclosure, without departing from the scope of the present application.
[0029] Sticker, in the context of the present application, may relate to an illustration which is available or may be designed (using various applications) to be placed on or added to a message. In simple words, a sticker is an elaborate emoticon, developed to allow more depth and breadth of expression than what is possible by means of 'emojis' or 'emoticons'. Stickers are generally used, on digital media platforms, to quickly and simply convey an emotion or thought. In some embodiments, the stickers may be animated or derived from cartoon-like characters, real-life people, etc. In other embodiments, stickers may also be designed to represent real-world events in a more interactive and fascinating form to be shared between users on various multimedia messaging platforms.
[0030] Avatar, in the context of the present application, relates to a graphical representation of a user, the user's image/selfie, or the user's character. Thus, it may be said that an avatar may be configured to represent an emotion/expression/feeling of the user by means of an image converted into an avatar capturing such emotion/expression/feeling through various facial expressions or added objects such as hearts, kisses, etc. Further, it is to be appreciated that an avatar may take either a two-dimensional form, as an icon on platforms such as messaging/chat platforms, or a three-dimensional form, such as in a virtual environment. Further, the terms avatar, profile picture, and userpic mean the same in the context of the present application and may be used interchangeably throughout the disclosure without departing from the scope of the present application.
[0031] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0032] Disclosed herein is an apparatus and method for dynamically generating visual content on a communication device. When people chat with each other, they use various visual contents such as emojis/stickers/avatars along with text to express their feelings, emotions, reactions, etc. Such emojis/stickers/avatars are usually provided to the user by a platform over which the users are interacting with each other. Such a platform may be a messaging platform, email platform, gaming platform, etc. In general, the emojis/stickers/avatars are provided to such a platform by an external server which stores a number of emojis. However, it can be time-consuming, and sometimes overwhelming, for users to browse through and select an appropriate emoji for a given context.
[0033] An example of such a conversation between two users is shown in Fig. 1. As shown in Fig. 1, John and Mary are having a conversation with each other over a messaging platform. As an example, it can be observed that John has informed Mary that "he has been promoted". In response, Mary has shared a "smile" emoji selected from a number of emojis available on the messaging platform. It can be noted that Mary has to browse through the number of emojis available on the messaging platform and choose one of them. It can be cumbersome and time-consuming for Mary to select the emoji. Also, John may share another text before Mary chooses and sends the emoji to John. Hence, the flow of the conversation may be interrupted. In an alternate scenario, Mary may not even share the emoji if she receives another text from John before sharing it.
[0034] Also, Mary has to select the emoji from the available emojis only. The available emojis may not always be the emojis Mary wanted to share. Mary might have wanted some other emoji which would convey her reaction more appropriately. For example, Mary may have wanted to "cheer" for John or may have wanted to "give a shout out" to John. However, the available emojis may not have an appropriate emoji to depict this reaction of Mary. Hence, Mary may not be able to show her reaction by way of an emoji and may have to settle for another emoji. Also, as can be seen from Fig. 1, the messaging platform interacts with an external server to receive the emojis, which may consume both time and bandwidth.
[0035] Hence, there is a need to generate visual content such as emojis/stickers/avatars in real time based on the context between the users and recommend them to the user. However, the extent of the relation of an emoji with a given context may vary from one person to another depending upon various relationship factors (for example, whether they share a professional relationship, a friendly relationship, or a family relationship), demographic information of the users, their chat history, etc. Therefore, to create and recommend such emoji, the apparatus needs to understand the above parameters. The benefit of generating and recommending such emoji may be to provide a better real-world experience for the users. Also, generating the emojis on the communication device may save time and bandwidth. How the above benefits are achieved is explained in the upcoming paragraphs of the specification.
[0036] Implementations of the apparatus and methods described herein can be used to suggest one or more emoji to users for insertion into electronic communications. Content can include text (e.g., words, phrases, abbreviations, characters, and/or symbols), emoji, images, audio, video, and combinations thereof. For example, content can be analyzed by the apparatus as a user types or enters the content and, based on the analysis, the apparatus can generate and recommend emoji suggestions to the user in real-time. A given emoji suggestion can include one or more emoji characters that, if selected, will be inserted into the content. The user may then select one of the emoji suggestions, and the emoji of the suggestion can be inserted into the content at the appropriate location (e.g., at or near a current input cursor position).
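By way of illustration only, the insertion of a selected emoji suggestion at the current cursor position described above could be sketched as follows; the function name and the string-based content representation are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: insert a selected emoji suggestion at the current
# cursor position in the content. The function name and string-based content
# representation are assumptions, not part of the disclosure.
def insert_emoji(content: str, emoji: str, cursor_pos: int) -> str:
    """Return the content with the emoji inserted at the cursor position."""
    return content[:cursor_pos] + emoji + content[cursor_pos:]

text = "Congratulations on the promotion"
print(insert_emoji(text, " 🎉", len(text)))  # emoji appended at the end of the input
```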
[0037] Fig. 2 shows a block diagram illustrating an apparatus 200 for dynamically generating visual content on a communication device, in accordance with an embodiment of the present disclosure. Although the present disclosure is explained considering that the apparatus 200 is implemented on a mobile device, it may be understood that the apparatus 200 may be implemented in a variety of user devices. Examples of the user devices may include, but are not limited to, an IoT device, an IoT gateway, a portable computer, a personal digital assistant, a handheld device, and a workstation.
[0038] In one implementation, the apparatus 200 may comprise an I/O interface 202, a memory 204, a visual content generator 206 and a ranking unit 208. The memory 204 may be communicatively coupled to the visual content generator 206 and the ranking unit 208. The visual content generator 206 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the visual content
generator 206 is configured to fetch and execute computer-readable instructions stored in the memory 204. The I/O interface 202 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 202 may enable the apparatus 200 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 202 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 202 may include one or more ports for connecting many devices to one another or to another server.
[0039] In one implementation, the visual content generator 206 may comprise a monitoring unit 210, an application unit 212 and a generation unit 214. According to embodiments of the present disclosure, these units 210-214 may comprise hardware components such as processors, microprocessors, microcontrollers, and application-specific integrated circuits for performing various operations of the apparatus 200. It must be understood by a person skilled in the art that the visual content generator 206 may also perform all the functions of the units 210-214 according to various embodiments of the present disclosure.
[0040] Now, the following description explains the embodiments of the invention referring to the interaction between John and Mary only. However, it may be understood by a skilled person that the above-mentioned scenario is merely an example, and there may be multiple other scenarios in which the present disclosure may be implemented. For example, the user may be interacting with multiple users in a group chat interface.
[0041] As shown in Figure 1, John is interacting with Mary. The user data related to the users John and Mary may be created and stored in the memory 204 of the apparatus 200. The user data may comprise the age, name, gender, demographic information, and profession associated with each user. The demographic data may comprise the location, address, marital status, and interests of the user, such as sports, movies, social media, vacation spots, etc. The generation unit 214 receives the user data from the memory 204.
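By way of illustration only, the user data held in the memory 204 could be organized as below; the field names and the example values for John and Mary are hypothetical assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of per-user data stored in the memory 204.
# Field names and example values are illustrative assumptions.
@dataclass
class UserData:
    name: str
    age: int
    gender: str
    profession: str
    location: str                      # part of the demographic data
    marital_status: str
    interests: List[str] = field(default_factory=list)  # e.g. sports, movies, vacation spots

john = UserData("John", 32, "male", "engineer", "Bengaluru", "single", ["sports", "movies"])
mary = UserData("Mary", 29, "female", "designer", "Delhi", "single", ["travel", "music"])
```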
[0042] According to an embodiment, the monitoring unit 210 may first receive the plurality of user data corresponding to the users (John and Mary). This helps the apparatus 200 to have a fair understanding of the background of each user. Then, the monitoring unit 210 may monitor a plurality of past conversation data based on past conversations that happened between the users during a predefined time interval. For example, the past conversation data may correspond to the past conversations that happened between John and Mary. The past conversation data may comprise textual data, audio data, video data, and graphical data. Each past conversation data indicates multiple instances of past conversation between the users during the predefined time interval. The past conversation data may be stored in the memory 204.
[0043] The monitoring unit 210 may monitor the past conversations that happened between John and Mary in the last one month, six months, or any predefined time interval. It may be understood by a skilled person that the predefined time interval may vary from a few hours to several days, weeks, months, or years.
[0044] In an exemplary embodiment, the monitoring unit 210 may monitor the current one or more conversations between John and Mary and may extract conversation data based on the monitoring. The conversation data may comprise messages and corresponding responses captured during the current conversation. The conversation data may be in text, audio, or video form.
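A minimal sketch of what text-form conversation data extracted by the monitoring unit 210 could look like is given below; the structure (message/response pairs as plain text) and the example exchange are assumptions.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of text-form conversation data extracted by the
# monitoring unit 210: each entry pairs a message with its corresponding response.
@dataclass
class Exchange:
    sender: str
    message: str
    response: str

conversation_data: List[Exchange] = [
    Exchange(sender="John", message="I have been promoted!", response="That is wonderful news!"),
]
```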
[0045] Then, the application unit 212 may apply a Natural language processing (NLP) technique on the conversation data to determine a plurality of vectors. In an embodiment, the plurality of vectors may be a context vector, an emotion vector, and an identity vector. In an embodiment, the application unit 212 may determine the plurality of vectors based on at least one of user data, conversation context, past generated vectors, or a combination thereof.
[0046] To determine a context parameter, the conversation data may first be tokenized and lemmatized. These are well-known NLP preprocessing techniques. In an embodiment, tokenization may refer to the identification of different words and sentences from text based on separator sequences, which in the case of the English language are the period (.) and the space. Lemmatization may refer to obtaining the base word after removing the endings from a word; for example, work, works, working, and worked all have the same base word, work. In the above example, John and Mary are talking about John having a promotion, and it may be determined that the context of the conversation is "promotion". The application unit 212 may also use the past conversation data to determine the context of the current conversation.
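As a hedged example, the tokenization and lemmatization steps could be performed with NLTK, one possible toolkit; the disclosure does not name a specific library, and the sample sentence is illustrative.

```python
# Minimal sketch of tokenization and lemmatization using NLTK
# (one possible choice; the disclosure does not prescribe a library).
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

nltk.download("punkt", quiet=True)     # tokenizer models
nltk.download("wordnet", quiet=True)   # lemmatizer dictionary

lemmatizer = WordNetLemmatizer()

text = "I have been promoted. We were working towards this for months."
tokens = word_tokenize(text)                                          # split into words and punctuation
lemmas = [lemmatizer.lemmatize(t.lower(), pos="v") for t in tokens]   # working/worked -> work
print(lemmas)
```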
[0047] The application unit 212 may also determine an emotion/expression parameter based on the conversation data. In an embodiment, the emojis and stickers may be mapped through a pretrained expression-text mapping neural network. The training of this neural network is an offline process which maps a given sticker or emoji to a textual representation of its expression/emotion. In an embodiment, this neural network obtains the textual equivalent of a conversation in which text, emojis, stickers, or other forms of emotion and expression have been used. So, to convert all the different emotion/expression modes into a normalized textual form, this kind of neural network can be used. This neural network may be applied on the conversation data to determine the emotion in the conversation.
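A hedged sketch of this step is shown below: a simple lookup table stands in for the pretrained expression-text mapping network, and a publicly available emotion classifier (an assumption, not the network described here) labels the normalized text.

```python
# Sketch only: a lookup table stands in for the pretrained expression-text
# mapping network, and an off-the-shelf emotion classifier labels the
# normalized text. The model name and the mapping are illustrative assumptions.
from transformers import pipeline

EMOJI_TO_TEXT = {"😊": "smiling face", "🎉": "celebration", "😢": "crying face"}

def normalize(message: str) -> str:
    """Replace emojis/stickers with their textual equivalents."""
    for emoji, text in EMOJI_TO_TEXT.items():
        message = message.replace(emoji, f" {text} ")
    return message

emotion_classifier = pipeline("text-classification",
                              model="j-hartmann/emotion-english-distilroberta-base")

print(emotion_classifier(normalize("I got promoted! 🎉")))  # e.g. [{'label': 'joy', ...}]
```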
[0048] The application unit 212 may also determine an identity parameter, which indicates the identity of the user, based on the conversation data. As explained above, the user data may be stored in the memory 204, and the application unit 212 may extract the user data from the memory 204.
[0049] The application unit 212 may then use a word-to-vector technique to determine vectors from the determined context, emotion, and identity of the user. In an embodiment, the word-to-vector technique means that each word is converted into a vector embedding. In an embodiment, this technique may use the principle that similar words keep similar neighbors and should have similar vector representations. In an embodiment, the application unit 212 may determine a context vector, an emotion vector, and an identity vector.
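A minimal sketch of the word-to-vector step using gensim's Word2Vec is given below; the choice of library, the tiny training corpus, and the averaging of word vectors into a context vector are assumptions made for illustration only.

```python
# Minimal word-to-vector sketch using gensim's Word2Vec (one possible choice;
# the tiny training corpus and the averaging step are illustrative assumptions).
import numpy as np
from gensim.models import Word2Vec

sentences = [
    ["john", "got", "a", "promotion"],
    ["congratulations", "on", "the", "promotion"],
    ["mary", "is", "happy", "for", "john"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# A simple context vector: average of the embeddings of the key words.
context_vector = np.mean([model.wv[w] for w in ("promotion", "congratulations")], axis=0)
print(context_vector.shape)  # (50,)
```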
[0050] Based on the determined vectors, the generation unit 214 may generate one or more visual contents such as emojis. In an embodiment, the generation unit 214 may use a generative adversarial network (GAN) to generate the one or more visual contents. In an embodiment, the generation unit 214 may generate the visual content in real time, as and when the conversation is happening between the users. In an embodiment, the visual content may be a combination of an image and text.
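The following is an illustrative sketch, in the spirit of a conditional GAN generator, of how the generation unit 214 might turn the concatenated context, emotion, and identity vectors into a candidate image; the architecture, dimensions, and the omission of the discriminator and training loop are all assumptions.

```python
# Illustrative sketch of a conditional GAN-style generator: the concatenated
# context, emotion and identity vectors condition the generated image.
# Architecture, sizes and training loop are assumptions and omitted for brevity.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim: int = 64, cond_dim: int = 150, img_pixels: int = 32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([noise, condition], dim=1)).view(-1, 3, 32, 32)

generator = ConditionalGenerator()
noise = torch.randn(1, 64)
condition = torch.randn(1, 150)                # stand-in for three concatenated 50-d vectors
candidate_emoji = generator(noise, condition)  # (1, 3, 32, 32) candidate visual content
```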
[0051] In an embodiment, the generation unit 214 may consider user data such as user preference, linguistic domain, and demographic information while generating the visual content. For example, if the user data shows that a user prefers the Tamil language, then the generation unit 214 may generate a visual content which is a combination of an image and text in the Tamil language.
[0052] The generated visual content is then provided to the ranking unit 208, which ranks the generated one or more visual contents. In an embodiment, the ranking unit 208 ranks the one or more visual contents based on one or more predefined conditions. The ranking unit 208 may rank the generated visual contents in many ways. For example, the ranking unit 208 may give a rank such as 1, 2, 3, ... to the generated visual contents and may highlight the preferred content, i.e., the content with rank 1, with greater intensity while suggesting the visual content to the user. The intensity of the visual content may decrease with the ranking, i.e., the content with rank 2 may be highlighted with greater intensity than the content with rank 3. In another embodiment, the ranking unit 208 may suggest the generated visual contents with rank numbers 1, 2, 3, and so on. It should be noted that there could be other ways to suggest the ranked visual content to the user, and such ways fall within the scope of the present disclosure.
[0053] In an embodiment, the one or more predefined conditions include at least one of past user preference, conversation context, linguistic domain, and demographic information. For example, let us consider that Mary is an Indian girl whose preferred language is "Hindi". The ranking unit 208 may then rank a visual content in Hindi as the most preferred visual content.
[0054] In an embodiment, the ranking can be done by sorting on the basis of a distance measure such as the cosine similarity between the context vector and the generated content's vector. The cosine similarity metric may measure how similar two vectors are in a given space by measuring the cosine of the angle between the two vectors.
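A minimal sketch of this cosine-similarity ranking is shown below; the candidate names and vector values are illustrative assumptions.

```python
# Minimal sketch of cosine-similarity ranking: candidates are sorted by how
# close their vectors are to the conversation's context vector.
# Candidate names and vector values are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

context_vector = np.array([0.9, 0.1, 0.3])
candidates = {
    "celebration_sticker": np.array([0.8, 0.2, 0.4]),
    "sad_face_emoji":      np.array([-0.5, 0.9, 0.1]),
}

ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(context_vector, kv[1]),
                reverse=True)
for rank, (name, _) in enumerate(ranked, start=1):
    print(rank, name)  # rank 1 = most preferred visual content
```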
[0055] Figure 3 depicts a method 300 for dynamically generating visual content on a communication device, in accordance with an embodiment of the present disclosure. As illustrated in Figure 3, the method 300 includes one or more blocks illustrating a method for dynamically generating visual content on a communication device. The method 300 may be described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
[0056] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described.
[0057] At block 302, the method 300 may include monitoring one or more conversations between two or more users.
[0058] At block 304, the method 300 may include applying a Natural language processing (NLP) technique on conversation data to determine a plurality of vectors. The plurality of vectors may include at least one of a context vector, an identity vector, an emotion vector, or a combination thereof.
[0059] At block 306, the method 300 may include generating one or more visual content based on the plurality of vectors.
[0060] At block 308, the method 300 may include ranking the generated visual contents based on one or more predefined conditions. The one or more predefined conditions may include at least one of past user preference, conversation context, linguistic domain and demographic information.
[0061] At block 310, the method may include displaying the one or more visual contents with their rankings to the user.
[0062] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
[0063] When a single device or article is described herein, it will be clear that more than one device/article (whether or not they cooperate) may be used in place of the single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be clear that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may alternatively be embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0064] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
[0065] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
[0066] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
[0067] Advantages of the embodiment of the present disclosure are illustrated herein:
1. A variety of visual content can be generated.
2. The generated content is highly context-oriented to the chat conversation.
3. The approach is cost-effective, reducing the manual effort in sticker generation.
4. Being on-device, it is very fast and seamless.
[0068] Referral Numerals:
Reference Numeral Description
100 An example of prior art
200 Block diagram of the apparatus
202 I/O Interface
204 Memory
206 Visual Content Generator
208 Ranking unit
210 Monitoring Unit
212 Application unit
214 Generation Unit
218 Identification Unit
300 Method for dynamically generating visual content on a communication device
We Claim:
1. A method (300) for dynamically generating visual content on a
communication device, the method comprising:
monitoring (302) one or more conversations between two or more users;
applying (304) a Natural language processing (NLP) technique on conversation data to determine a plurality of vectors;
generating (306) one or more visual contents based on the plurality of vectors; and
ranking (308) the generated visual contents based on one or more predefined conditions.
2. The method (300) as claimed in claim 1, further comprising:
extracting the conversation data from the monitored one or more conversations, wherein the conversation data comprises messages and corresponding responses captured during the one or more conversations; and
displaying the one or more visual contents with their ranking to the user.
3. The method (300) as claimed in claim 1, wherein the plurality of vectors are
determined by:
applying the Natural language processing (NLP) technique upon the conversation data to determine a context parameter;
mapping the conversation data to an expression or emotion to determine an emotion parameter;
determining an identity parameter of the user based on the conversation data; and
converting the parameters into the plurality of vectors.
4. The method (300) as claimed in claim 1, wherein the one or more predefined
conditions include at least one of past user preference, conversation context,
linguistic domain and demographic information.
5. The method (300) as claimed in claim 1, wherein the plurality of vectors include at least one of a context vector, an identity vector, an emotion vector, or a combination thereof, and the plurality of vectors are determined based on at least one of user data, conversation context, past generated vectors, or a combination thereof.
6. The method (300) as claimed in claim 1, further comprising generating the visual content in real time, wherein the visual content is a combination of an image and text.
7. An apparatus (200) for dynamically generating visual content on a communication device, the apparatus (200) comprising:
a visual content generator (206) configured to:
monitor one or more conversations between two or more users;
apply a Natural language processing (NLP) technique on conversation data to determine a plurality of vectors; and
generate one or more visual contents based on the plurality of vectors; and
a ranking unit (208) coupled to the visual content generator and configured to:
rank the generated visual contents based on one or more predefined conditions.
8. The apparatus (200) as claimed in claim 7, wherein the visual content
generator (206) is configured to:
extract the conversation data from the monitored one or more conversations, wherein the conversation data comprises messages and corresponding responses captured during the one or more conversations; and
display the one or more visual contents with their ranking to the user.
9. The apparatus (200) as claimed in claim 7, wherein the visual content
generator (206) is configured to:
apply the Natural language processing (NLP) technique upon the conversation data to determine a context parameter;
map the conversation data to an expression or emotion to determine an emotion parameter; determine an identity parameter of the user based on the conversation data; and convert the parameters into the plurality of vectors.
10. The apparatus (200) as claimed in claim 7, wherein the one or more predefined conditions include at least one of past user preference, conversation context, linguistic domain and demographic information.
11. The apparatus (200) as claimed in claim 7, wherein the visual content generator (206) determines the plurality of vectors based on at least one of user data, conversation context, past generated vectors, or a combination thereof, and the plurality of vectors include at least one of a context vector, an identity vector, an emotion vector, or a combination thereof.
12. The apparatus (200) as claimed in claim 7, wherein the visual content generator (206) generates the visual content in real time, and wherein the visual content is a combination of an image and text.