Abstract: The present invention relates to a method for real time rendering of a metaverse master trainer of an expert on a user device comprising one or more processors, and memory storing one or more programs for execution by the one or more processors. The method may include generating the metaverse master trainer for rendering using a generative adversarial network. The method may include receiving at least one metaverse information data sample of the expert. The method may include generating at least one facial expression for speech and emotions from the received metaverse information data sample. The method may include assigning a voice to the metaverse master trainer and adding it to a metaverse master trainer library. The method may include engaging the user in an interactive metaverse session using the generated metaverse master trainer. The method may include dynamically adapting at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session. The method may include evaluating the user activity in the interactive metaverse session using an AI technique. The method may include providing feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation.
Claims: We claim:
1. A method for real time rendering of a metaverse master trainer (112) of an expert on a user device comprising one or more processors, and memory storing one or more programs for execution by the one or more processors, the method comprising:
- generating the metaverse master trainer (112) for rendering using a generative adversarial network by:
- receiving at least one metaverse information data sample of the expert;
- generating at least one facial expression for speech and emotions from the received metaverse information data sample;
- assigning a voice to the metaverse master trainer; and
- adding the metaverse master trainer to a metaverse master trainer library;
- engaging the user in an interactive metaverse session using the generated metaverse master trainer (112);
- dynamically adapting at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session;
- evaluating the user activity in the interactive metaverse session using an AI technique; and
- providing feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation.
2. The method as claimed in claim 1, further comprising:
- selecting at least one of a trainer, voices, and accents of the digitally rendered metaverse master trainer from the metaverse master trainer library based on user input; and
- dynamically adapting the selected metaverse master trainer for the interactive metaverse session to use at least one of resources, pictures, animation, drawing tool, audio, video clip, and AR/VR.
3. The method as claimed in claim 1, wherein the expert can create the metaverse master trainer (112) for real time rendering, wherein the metaverse master trainer is trained to emphasize/enunciate keywords while speaking, give pauses for better intelligibility, blink, and smile.
4. The method as claimed in claim 1, wherein the digitally rendered metaverse master trainer (112) is resynthesized using GANs and custom vocoders for:
- generating essential facial positions and expressions of the expert;
- generating AI based speech/audio based upon a customized voice setting; and
- synchronizing a lip movement with the audio based on the contents and the generated AI speech/audio.
5. The method as claimed in claim 1, wherein the interactive metaverse session includes content delivery, job training, skill training, job interview, practice session, or other similar training or assessment session.
6. The method as claimed in claim 5, wherein delivering content through the metaverse master trainer (112) is performed by:
- customizing and organizing the content by identifying at least one of objective, essential keywords, definitions, basic concepts, and example applications;
- presenting the customized content to the user; and
- engaging the user to at least one activity related to the delivered content to reinforce the learning.
7. The method as claimed in claim 1, wherein engaging the user includes providing an engagement plan by:
- performing real time analytics on the user performance with a query and response history; and
- forming a suitable index to watch specific events in the session.
8. The method as claimed in claim 1, wherein the user activity is evaluated by:
- evaluating the user response based on a set of criteria decided by the human expert;
- analyzing the user response through NLP techniques, wherein the analyzing includes identifying the keywords or named entities, speech-based analysis, and AI similarity models from the user's response;
- analyzing the user behavior using AI-based real time gaze tracking and emotion detection;
- providing a follow-up query based on the analyzed user response and the user behavior;
- monitoring the user progress; and
- providing quantitative performance scores for evaluated user activity based on monitored user progress.
9. The method as claimed in claim 1, wherein providing feedback includes suggesting personalized content to the user based on both individual and collective engagements of users with the content as well as the individual's goals digitally mapped in an appropriate database.
10. The method as claimed in claim 1, wherein dynamically responding comprises:
- receiving a query from the user related to the content or activity in the interactive session;
- fetching a suitable matching response from a response database, wherein the response database anticipates the user query and stores the response for the anticipated query;
- detecting that a suitable matching response for the received query is not found in the response database; and
- sending the detected received query to the expert for a response, wherein the expert response is stored in the response database.
11. A system for real time rendering of a metaverse master trainer (112) of an expert on a user device comprising one or more processors, and memory storing one or more programs for execution by the one or more processors, the system comprising:
- a metaverse master trainer (112) generated for rendering using a generative adversarial network by:
- receiving at least one metaverse information data sample of the expert;
- generating at least one facial expression for speech and emotions from the received metaverse information data sample;
- assigning a voice to the metaverse master trainer; and
- adding the metaverse master trainer to a metaverse master trainer library;
- wherein the generated metaverse master trainer (112) is configured to engage the user in an interactive metaverse session, and to dynamically adapt at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session;
- an evaluating module (114) configured to evaluate the user activity in the interactive metaverse session using an AI technique; and
- a feedback module (124) configured to provide feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation.
12. The system as claimed in claim 11, further comprising:
- selecting at least one of a trainer, voices, and accents of the digitally rendered metaverse master trainer from the metaverse master trainer library based on user input; and
- dynamically adapting the selected metaverse master trainer for the interactive metaverse session to use at least one of resources, pictures, animation, drawing tool, audio, video clip, and AR/VR.
13. The system as claimed in claim 11, wherein the expert can create the metaverse master trainer (112) for real time rendering, wherein the metaverse master trainer (112) is trained to emphasize/enunciate keywords while speaking, give pauses for better intelligibility, blink, and smile.
14. The system as claimed in claim 11, wherein the digitally rendered metaverse master trainer (112) is resynthesized using GANs and custom vocoders for:
- generating essential facial positions and expressions of the expert;
- generating AI based speech/audio based upon a customized voice setting; and
- synchronizing a lip movement with the audio based on the contents and generated AI speech/audio.
15. The system as claimed in claim 11, wherein the interactive metaverse session includes content delivery, job training, skill training, job interview, practice session or other similar training or assessment session.
16. The system as claimed in claim 15, wherein a content management module (116) is configured to deliver content through the metaverse master trainer (112) by:
- customizing and organizing the content by identifying at least one of objective, essential keywords, definitions, basic concepts, and example applications;
- presenting the customized content to the user; and
- engaging the user in at least one activity related to the delivered content to reinforce the learning.
17. The system as claimed in claim 11, wherein engaging the user includes providing an engagement plan by:
- performing real time analytics on the user performance with a query and response history; and
- forming a suitable index to watch specific events in the session.
18. The system as claimed in claim 11, wherein the evaluating module (114) is configured to evaluate the user activity by:
- evaluating the user response based on a set of criteria decided by the human expert;
- analyzing the user response through NLP techniques, wherein the analyzing includes identifying the keywords or named entities, speech-based analysis, and AI similarity models from the user's response;
- analyzing the user behavior using AI-based real time gaze tracking and emotion detection;
- providing a follow-up query based on the analyzed user response and the user behavior;
- monitoring the user progress; and
- providing quantitative performance scores for the evaluated user activity based on the monitored user progress.
19. The system as claimed in claim 11, wherein the feedback module (124) is configured to suggest personalized content to the user based on both individual and collective engagements of users with the content as well as the individual's goals digitally mapped in an appropriate database.
20. The system as claimed in claim 11, wherein a smart query module (120) is configured to provide a dynamic response by:
- receiving a query from the user related to the content or activity in the interactive session;
- fetching a suitable matching response from the response database, wherein the response database anticipates the user query and stores the response for the anticipated query;
- detecting that a suitable matching response for the received query is not found in the response database; and
- sending the detected received query to the expert for a response, wherein the expert response is stored in the response database.
Description: A METHOD AND SYSTEM FOR REAL TIME RENDERING OF A METAVERSE MASTER TRAINER OF AN EXPERT ON A USER DEVICE
FIELD OF THE INVENTION
The present invention relates to a method and system for real time rendering of a metaverse master trainer of an expert on a user device.
BACKGROUND
There is a severe shortage of high-quality trainers, especially in modern employable skills, profession-specific communication skills, STEM (science, technology, engineering, and mathematics) subjects, and job interview skills. Only a small percentage of students have access to the best schools/colleges and trainers. Moreover, a human expert cannot pay equal attention to more than a few students or trainees for a long time. There is a need to produce a large number of qualified, well-trained graduates to meet the needs of the 21st-century industry, and often to re-train or update employees in new skills too. However, there are only a few domain expert trainers. From the trainers' perspective, teaching live or recording video needs many hours of preparation, practice, recording, and editing. The students and trainees may need to be individually coached by an expert who is available anytime and can be reached from anywhere. The metaverse environment may be used to address the above problems. The metaverse brings presence, avatars, home space, teleporting (hypothetical transfer of matter or energy from one point to another without traversing the physical space between them), interoperability, virtual goods, natural interfaces, etc. The metaverse environment uses augmented reality (AR), virtual reality (VR), or mixed reality (MR) technologies. There is a need for a method for using a metaverse-based trainer for the purpose of specialized skills training and conducting face-to-face interviews on a large scale. There lies a need for a mechanism for real time rendering of a metaverse master trainer of an expert on a user device.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified format that are further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the invention, nor is it intended for determining the scope of the invention.
In an implementation, the present invention relates to a method for real time rendering of a metaverse master trainer of an expert on a user device comprising one or more processors, and memory storing one or more programs for execution by the one or more processors. The method may include generating the metaverse master trainer for rendering using a generative adversarial network. The method may include receiving at least one metaverse information data sample of the expert. The method may include generating at least one facial expression for speech and emotions from the received metaverse information data sample. The method may include assigning a voice to the metaverse master trainer and adding it to a metaverse master trainer library. The method may include engaging the user in an interactive metaverse session using the generated metaverse master trainer. The method may include dynamically adapting at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session. The method may include evaluating the user activity in the interactive metaverse session using an AI technique. The method may include providing feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation. To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a block diagram of a system for real time rendering of a metaverse master trainer of an expert on a user device, according to an embodiment of the present subject matter;
Figure 2 illustrates a flow diagram depicting a method for real time rendering of a metaverse master trainer of an expert on a user device, according to an embodiment of the present subject matter;
Figure 3 illustrates a flow diagram depicting an exemplary embodiment of a method for generating a metaverse master trainer for real time rendering, according to an embodiment of the present subject matter;
Figure 4 illustrates a flow diagram depicting an exemplary embodiment of a method for generating and evaluating one or more user activities, according to an embodiment of the present subject matter;
Figure 5 illustrates a flow diagram depicting another exemplary embodiment of a method for generating and evaluating one or more user activities, according to an embodiment of the present subject matter; and
Figure 6 illustrates a flow diagram depicting an exemplary embodiment of a method for performing a job interview or a skill training, according to an embodiment of the present subject matter.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
DETAILED DESCRIPTION
It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”
The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
Whether or not a certain feature or element was limited to being used only once, either way it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings. The present disclosure relates to a system for real time rendering of a metaverse master trainer of an expert on a user device. Figure 1 illustrates a block diagram 100 of a system 102 for real time rendering of a metaverse master trainer of an expert on a user device, according to an embodiment of the present subject matter. In an embodiment, the system 102 may be incorporated in a User Equipment (UE). Examples of the UE may include, but are not limited to, a television, a laptop, a tablet, a smartphone, and a Personal Computer (PC). Details of the above aspects performed by the system 102 shall be explained below.
The system 102 includes a processor 104, a memory 106, data 108, a metaverse master trainer 112, an evaluation module 114, a content management module 116, an interactive metaverse session module 118, a smart query module 120, a response database 122, and a feedback module 124. In an embodiment, the processor 104, the memory 106, the data 108, the metaverse master trainer 112, the evaluation module 114, the content management module 116, the interactive metaverse session module 118, the smart query module 120, the response database 122, and the feedback module 124 may be communicatively coupled to one another. At least one of the plurality of modules 110 may be implemented through an AI model. A function associated with AI may be performed through the non-volatile memory or the volatile memory, and/or the processor.
The processor 104 may include one or a plurality of processors. The one or a plurality of processors may be a general-purpose processor such as a central processing unit (CPU) or an application processor (AP), a graphics-only processing unit such as a graphics processing unit (GPU) or a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory or the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning. Here, being provided through learning means that a predefined operating rule or AI model of a desired characteristic is made by applying a learning technique to a plurality of learning data. The learning may be performed on the device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system. The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation by applying those weights to the output of the previous layer. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
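By way of illustration only (and not as part of the claimed subject matter), the layer-by-layer weight computation described above may be sketched in Python/PyTorch as follows; the layer sizes and the choice of framework are assumptions for the example:

```python
# Minimal sketch (illustrative only): a stack of neural network layers where
# each layer computes its output from the previous layer's output and its own
# weight values, as described above. Layer sizes are arbitrary assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),  # layer 1: weights applied to the input features
    nn.ReLU(),
    nn.Linear(64, 32),   # layer 2: operates on the previous layer's output
    nn.ReLU(),
    nn.Linear(32, 8),    # output layer
)

x = torch.randn(1, 128)  # a single input sample
y = model(x)             # forward pass: layer-by-layer weight operations
print(y.shape)           # torch.Size([1, 8])
```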
The learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
According to the present subject matter, a method of an electronic device may extract sentiments or a mood associated with one or more users with respect to one or more images. The artificial intelligence model may be obtained by training. Here, "obtained by training" means that a predefined operation rule or artificial intelligence model configured to perform a desired feature (or purpose) is obtained by training a basic artificial intelligence model with multiple pieces of training data by a training technique. The artificial intelligence model may include a plurality of neural network layers. Each of the plurality of neural network layers includes a plurality of weight values and performs neural network computation by computation between a result of computation by a previous layer and the plurality of weight values.
Visual understanding is a technique for recognizing and processing things as does human vision and includes, e.g., object recognition, object tracking, image retrieval, human recognition, scene recognition, 3D reconstruction/localization, or image enhancement.
As would be appreciated, the system 102 may be understood as one or more of a hardware, a software, a logic-based program, a configurable hardware, and the like. In an example, the processor 104 may be a single processing unit or a number of units, all of which could include multiple computing units. The processor 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, processor cores, multi-core processors, multiprocessors, state machines, logic circuitries, application-specific integrated circuits, field-programmable gate arrays and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 104 may be configured to fetch and/or execute computer-readable instructions and/or data stored in the memory 106.
In an example, the memory 106 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and/or dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM (EPROM), flash memory, hard disks, optical disks, and/or magnetic tapes. The memory 106 may include the data 108. The data 108 serves, amongst other things, as a repository for storing data processed, received, and generated by one or more of the processor 104, the memory 106, the data 108, the metaverse master trainer 112, the evaluation module 114, the content management module 116, the interactive metaverse session module 118, the smart query module 120, the response database 122, and the feedback module 124.
The module(s) 110, amongst other things, may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement data types. The module(s) 110 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions.
Further, the module(s) 110 may be implemented in hardware, as instructions executed by at least one processing unit, e.g., the processor 104, or by a combination thereof. The processing unit may be a general-purpose processor that executes instructions to cause the general-purpose processor to perform operations, or the processing unit may be dedicated to performing the required functions. In another aspect of the present disclosure, the module(s) 110 may be machine-readable instructions (software) which, when executed by a processor/processing unit, may perform any of the described functionalities. In an embodiment, the metaverse master trainer 112 may be generated for rendering using a generative adversarial network. The metaverse master trainer 112 may be configured to receive at least one metaverse information data sample of the expert. The metaverse master trainer 112 may be configured to generate at least one facial expression for speech and emotions from the received metaverse information data sample. The metaverse master trainer 112 may be configured to assign a voice to the metaverse master trainer and add it to a metaverse master trainer library. The user may select at least one of a trainer, voices, and accents of the digitally rendered metaverse master trainer from the metaverse master trainer library based on user input. The metaverse master trainer 112 may be configured to dynamically adapt the selected metaverse master trainer for the interactive metaverse session to use at least one of resources, pictures, animation, drawing tool, audio, video clip, and AR/VR. The expert may create the metaverse master trainer 112 for real time rendering. The metaverse master trainer 112 may be trained to emphasize/enunciate keywords while speaking, give pauses for better intelligibility, blink, and smile. The digitally rendered metaverse master trainer 112 may be resynthesized using GANs and custom vocoders for generating essential facial positions and expressions of the expert, generating AI based speech/audio based upon a customized voice setting, and synchronizing a lip movement with the audio based on the contents and the generated AI speech/audio. The metaverse master trainer 112 may be created from the domain expert's digital photograph or a short video clip using artificial intelligence generative algorithms to generate all essential facial positions and expressions, which are used to re-synthesize the expert's digital version on the user's device. The metaverse master trainer may be configured to be perfectly lip-synchronized with the audio based on the contents, to appear as if the real human expert is training and interacting. The expert's picture may be substituted by any real person's face or an AI-generated realistic human face, so the user may have a choice of trainers, voices, and accents. The audio itself may be synthesized using artificial intelligence generative algorithms matching a real or imaginary person or character. The specific AI technology utilized may include GANs and custom vocoders for face, expression, and voice generation. Further, the metaverse master trainer 112 may be configured to dynamically adapt at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session.
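By way of illustration only, a minimal sketch of the kind of GAN-style generation described above is given below; the network dimensions, the expression identifiers, and the landmark representation are hypothetical assumptions for the example, not the claimed implementation:

```python
# Minimal sketch (illustrative only, not the claimed implementation): a
# GAN-style generator that maps a latent vector plus an expression label to a
# frame of facial landmark coordinates, which a renderer could use to
# re-synthesize the expert on the user device. All dimensions and names
# (LATENT_DIM, NUM_LANDMARKS, the "smile" id) are hypothetical placeholders.
import torch
import torch.nn as nn

LATENT_DIM, NUM_EXPRESSIONS, NUM_LANDMARKS = 64, 8, 68

class FaceFrameGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EXPRESSIONS, 16)    # expression conditioning
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 16, 256), nn.ReLU(),
            nn.Linear(256, NUM_LANDMARKS * 2), nn.Tanh(),  # (x, y) per landmark
        )

    def forward(self, z, expression_id):
        cond = self.embed(expression_id)
        return self.net(torch.cat([z, cond], dim=-1)).view(-1, NUM_LANDMARKS, 2)

gen = FaceFrameGenerator()
z = torch.randn(1, LATENT_DIM)
smile = torch.tensor([3])    # hypothetical "smile" expression id
landmarks = gen(z, smile)    # one frame of facial positions for the renderer
print(landmarks.shape)       # torch.Size([1, 68, 2])
```

In a full pipeline of the kind described, a separate vocoder model would synthesize the speech audio, and the landmark frames would be timed against it for lip synchronization.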
The interactive metaverse session module 118 may be configured to engage the user in an interactive metaverse session using the generated metaverse master trainer. The interactive metaverse session module 118 may be configured to provide an engagement plan by performing real time analytics on the user performance with a query and response history. The interactive metaverse session module 118 may be configured to form a suitable index to watch specific events in the session. The interactive metaverse session module 118 may be configured to scale up to hundreds and thousands of people, while catering to the needy at appropriate times based on real time analytics. For example, in a virtual interview, the analytics may indicate real time performance with a question and response history that is much quicker to assess than the long recorded video that usual online interviews create. Further, the recorded video may still be available, but the analytics may form a suitable index to watch specific events in the session, as illustrated in the sketch below. The interactive metaverse session module 118 may be configured to deliver the content or perform job training, skill training, job interview, practice session, or other similar training or assessment session. For example, the expert teaches certain essential keywords or terminologies, basic concepts, and applications and immediately asks the students to do certain activities to reinforce the learning. The interactive metaverse session module 118 may be configured to engage the user in the activities, which may be aimed towards developing listening, speaking, reading, writing, comprehension, logical thinking, creative applications, group interaction, presentations, etc. The expert trainer may use various resources, pictures, animations, drawing tools, audio, video clips, references, AR/VR, etc., and the metaverse master trainer does the same, however dynamically adapted to the individual need to maintain a realistic training-learning experience. The content management module 116 may be configured to deliver content through the metaverse master trainer 112. The content management module 116 may be configured to customize and organize the content by identifying at least one of objective, essential keywords, definitions, basic concepts, and example applications. The content management module 116 may be configured to present the customized content to the user. The content management module 116 may be configured to engage the user in at least one activity related to the delivered content to reinforce the learning. The experts may create the contents in a text form following standard formats and organization and present them through their metaverse master trainer instead of creating videos. The users may choose to be taught at a pace suitable for their ability to learn and can even select an alternative trainer and voice that they feel comfortable with. The contents may be organized topic-wise, starting with objective, essential keywords (e.g., technical terms), definitions, basic concepts, and example applications. Each content item may separately identify the parts to be delivered by the digital expert and those to be completed by the individual. The evaluation module 114 may be configured to evaluate the user activity in the interactive metaverse session using an AI technique. The evaluation module 114 may be configured to evaluate the user response based on a set of criteria decided by the human expert.
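By way of illustration only, the session event index mentioned above may be sketched as follows; the event fields and event kinds are assumptions for the example:

```python
# Minimal sketch (illustrative only): building the "suitable index to watch
# specific events in the session" from a query/response history, so a reviewer
# can jump to specific moments instead of scanning a long recording.
from dataclasses import dataclass, field

@dataclass
class SessionEvent:
    timestamp_s: float   # position in the session recording
    kind: str            # e.g. "query", "response", "activity", "low_score"
    detail: str

@dataclass
class SessionIndex:
    events: list = field(default_factory=list)

    def log(self, timestamp_s: float, kind: str, detail: str) -> None:
        self.events.append(SessionEvent(timestamp_s, kind, detail))

    def lookup(self, kind: str) -> list:
        """Return timestamps of all events of a given kind."""
        return [e.timestamp_s for e in self.events if e.kind == kind]

index = SessionIndex()
index.log(12.5, "query", "What is polymorphism?")
index.log(30.0, "low_score", "pronunciation score below threshold")
print(index.lookup("low_score"))   # [30.0] -> jump straight to this moment
```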
The evaluation module 114 may be configured to analyze the user response through NLP techniques. The analyzing includes identifying the keywords or named entities, speech-based analysis, and AI similarity models from the user's response. The evaluation module 114 may be configured to analyze the user behavior using AI-based real time gaze tracking and emotion detection. The evaluation module 114 may be configured to provide a follow-up query based on the analyzed user response and the user behavior. The evaluation module 114 may be configured to monitor the user progress. The evaluation module 114 may be configured to provide quantitative performance scores for the evaluated user activity based on the monitored user progress. For example, in the case of communication skills training, students' pronunciation, accent, clarity, and fluency may be evaluated by the evaluation module 114. The smart query module 120 may be configured to provide a dynamic response. The smart query module 120 may be configured to receive a query from the user related to the content or activity in the interactive session. The smart query module 120 may be configured to fetch a suitable matching response from the response database 122. The response database 122 anticipates the user query and stores the response for the anticipated query. The smart query module 120 may detect that a suitable matching response for the received query is not found in the response database. The smart query module 120 may be configured to send the detected received query to the expert for a response, wherein the expert response is stored in the response database. For example, the job interviewer/trainer may respond dynamically by analyzing the user's answer through NLP techniques and/or the user's behavior through vision techniques. For example, the evaluator may sense the keywords or named entities from the user's response and then provide a related question to either go deeper into a topic or a follow-up question to verify the truth or consistency of the answers. In the training mode, if the user's answer to the next question is not satisfactory, it may offer to practice the correct answer. The scores are analyzed, compared with the average of all students, and presented to the students, course administrator, and the domain expert. In an embodiment, the feedback module 124 may be configured to provide feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation. The feedback module 124 may be configured to suggest personalized content to the user based on both individual and collective engagements of users with the content as well as the individual's goals digitally mapped in an appropriate database.
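By way of illustration only, the smart query flow described above (fetch a matching anticipated response, otherwise escalate to the expert and store the expert's answer) may be sketched as follows; the token-overlap matching rule and the threshold are assumptions for the example, not the claimed technique:

```python
# Minimal sketch (illustrative only) of the smart query flow: fetch a matching
# response from the database of anticipated queries; if none is found, route
# the query to the human expert and store the expert's answer for future use.
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def answer_query(query: str, response_db: dict, ask_expert, threshold: float = 0.5) -> str:
    # 1. Try to fetch a suitable matching response for an anticipated query.
    best = max(response_db, key=lambda q: token_overlap(query, q), default=None)
    if best is not None and token_overlap(query, best) >= threshold:
        return response_db[best]
    # 2. No suitable match detected: send the query to the expert and store
    #    the expert's response in the response database.
    response = ask_expert(query)
    response_db[query] = response
    return response

db = {"what is a vocoder": "A vocoder synthesizes speech audio from features."}
print(answer_query("what is a vocoder exactly", db, ask_expert=lambda q: "escalated"))
```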
Figure 2 illustrates a flow diagram 200 depicting a method for real time rendering of a metaverse master trainer of an expert on a user device, according to an embodiment of the present subject matter. The method 200 may include generating, at a step 201, the metaverse master trainer for rendering using a generative adversarial network. The method may include receiving at least one metaverse information data sample of the expert. The method may include generating at least one facial expression for speech and emotions from the received metaverse information data sample. The method may include assigning a voice to the metaverse master trainer. The method may include adding the metaverse master trainer to a metaverse master trainer library. The user may select at least one of a metaverse master trainer, voices, and accents of the digitally rendered metaverse master trainer from the metaverse master trainer library based on user input. The metaverse master trainer 112 may be configured to dynamically adapt the selected metaverse master trainer for the interactive metaverse session to use at least one of resources, pictures, animation, drawing tool, audio, video clip, and AR/VR. The expert may create the metaverse master trainer 112 for real time rendering. Further, the metaverse master trainer 112 may be trained to emphasize/enunciate keywords while speaking, give pauses for better intelligibility, blink, and smile. The digitally rendered metaverse master trainer 112 is resynthesized using GANs and custom vocoders. The metaverse master trainer 112 may be configured to generate essential facial positions and expressions of the expert. The metaverse master trainer 112 may be configured to generate AI based speech/audio based upon a customized voice setting and to synchronize the lip movement with the audio based on the contents and the generated AI speech/audio. Moving forward, the method 200 may include engaging, at a step 203, the user in an interactive metaverse session using the generated metaverse master trainer. The interactive metaverse session may include content delivery, job training, skill training, job interview, practice session, or other similar training or assessment session. The metaverse master trainer 112 may be configured to deliver the content. The content management module 116 may be configured to customize and organize the content by identifying at least one of objective, essential keywords, definitions, basic concepts, and example applications. The content management module 116 may be configured to present the customized content to the user. The interactive metaverse session module 118 may be configured to engage the user in at least one activity related to the delivered content to reinforce the learning. The interactive metaverse session module 118 may be configured to engage the user by providing an engagement plan, by performing real time analytics on the user performance with a query and response history and forming a suitable index to watch specific events in the session. The method 200 may include dynamically adapting, at a step 205, at least one of voice, facial expression, emotion, and content of the rendered metaverse master trainer based on the user activity in the interactive metaverse session.
Moving forward, the method 200 may include evaluating, at a step 207, the user activity in the interactive metaverse session using an AI technique. The evaluation module 114 may be configured to evaluate the user activity. The evaluation module 114 may be configured to evaluate the user response based on a set of criteria decided by the human expert. The evaluation module 114 may be configured to analyze the user response through NLP techniques. Further, the analyzing may include identifying the keywords or named entities, speech-based analysis, and AI similarity models from the user's response. The evaluation module 114 may be configured to analyze the user behavior using AI-based real time gaze tracking and emotion detection. The evaluation module 114 may be configured to provide a follow-up query based on the analyzed user response and the user behavior. The evaluation module 114 may be configured to monitor the user progress. The evaluation module 114 may be configured to provide quantitative performance scores for the evaluated user activity based on the monitored user progress. The smart query module 120 may be configured to dynamically respond to the user query. The smart query module 120 may be configured to receive a query from the user related to the content or activity in the interactive session. The smart query module 120 may be configured to fetch a suitable matching response from the response database. Further, the response database anticipates the user query and stores the response for the anticipated query. The smart query module may be configured to detect that a suitable matching response for the received query is not found in the response database. The smart query module 120 may be configured to send the detected received query to the expert for a response. The expert response may be stored in the response database. Subsequently, the method 200 may include providing, at a step 209, feedback for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation. The feedback module may be configured to provide feedback, including suggesting personalized content to the user based on both individual and collective engagements of users with the content as well as the individual's goals digitally mapped in an appropriate database.
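By way of illustration only, a keyword-based scoring rule of the kind the evaluation module 114 may apply is sketched below; the scoring formula and the keyword lists are assumptions for the example:

```python
# Minimal sketch (illustrative only): keyword-based scoring of a user's answer
# against criteria decided by the human expert, producing the kind of
# quantitative performance score mentioned above.
def score_response(user_answer: str, expected_keywords: list) -> float:
    """Fraction (as a percentage) of expert-chosen keywords found in the answer."""
    answer_tokens = set(user_answer.lower().split())
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_tokens)
    return round(100.0 * hits / max(len(expected_keywords), 1), 1)

keywords = ["inheritance", "polymorphism", "encapsulation"]
answer = "OOP relies on encapsulation and inheritance of behaviour."
print(score_response(answer, keywords))   # 66.7 -> two of three keywords found
```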
Figure 3 illustrates a flow diagram 300 depicting an exemplary embodiment of a method for generating the metaverse master trainer for real time rendering, according to an embodiment of the present subject matter. The method 300 may include receiving, at a step 301, at least one metaverse information data sample of the expert. This step corresponds with the step 201. The method 300 may include generating, at a step 303, at least one facial expression for speech and emotions from the received metaverse information data sample. The method may include assigning, at a step 305, a voice to the metaverse master trainer. The method may include adding, at a step 307, the metaverse master trainer to a metaverse master trainer library. The method 300 may include generating, at a step 309, the metaverse master trainer 112 of the human expert using a generative adversarial network. The metaverse master trainer 112 may be reproduced on the user's device using a small amount of data. This eliminates the need for video creation and transmission. It also eliminates the need for re-recording videos for every change. This is a tremendous advantage over online video courses, video call interviews, etc. The user may select at least one of a metaverse master trainer, voices, and accents of the digitally rendered metaverse master trainer from the metaverse master trainer library based on user input. The metaverse master trainer 112 may be configured to dynamically adapt the selected metaverse master trainer for the interactive metaverse session to use at least one of resources, pictures, animation, drawing tool, audio, video clip, and AR/VR. Further, the preparation of training and interview materials, and the modern techniques of AI, GAN, etc., are used to assist the trainer to manifest himself/herself in a digital version to multiple users performing different tasks simultaneously, as per each individual user's need. Further, the metaverse master trainer 112 may be configured to use GAN to improve the skills.
Figure 4 illustrates a flow diagram 400 depicting an exemplary embodiment of a method for generating and evaluating one or more user activities, according to an embodiment of the present subject matter. The method 400 may include generating, at a step 401, the content to be delivered through the metaverse master trainer. The step 401 may include customizing and organizing the content by identifying at least one of objective, essential keywords, definitions, basic concepts, and example applications. The right contents may be suggested depending on the level of the user, to improve on their weakest areas. However, the user can always select from all lessons what he or she wants to learn. The suggestion follows both individual and collective engagements of users with the content as well as the individual's goals digitally mapped in an appropriate database. The expert may create courses and train in any specialized skills using text, drawings, video, audio, and other modern tools and media such as AR/VR. The method 400 may include adding, at a step 403, the generated content in the database. The method may include presenting, at a step 405, the customized content to the user. The metaverse master trainer 112 may be configured to engage the user in at least one activity related to the delivered content to reinforce the learning. The method 400 may include evaluating the user activity in the interactive metaverse session using an AI technique. The evaluation module 114 may be configured to use AI based tools for text generation, translation, and speech-to-text based upon the evaluation to offer interactive feedback for the user's attempt. This is just as a real trainer would offer animated feedback upon the attempt made by a student or trainee in a face-to-face class. For example, in a communication skills course the feedback can be "Well done, you need to add more confidence in your speech; for example, try to follow me as I say". The method 400 may include providing feedback, at a step 407, for the user activity associated with at least one interactive metaverse session for at least one user based on the evaluation.
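By way of illustration only, the topic-wise content organization described above may be represented as follows; the field names and sample values are assumptions for the example:

```python
# Minimal sketch (illustrative only): organizing a content item topic-wise with
# the elements named above (objective, essential keywords, definitions, basic
# concepts, example applications), and marking which parts the digital trainer
# delivers versus which the user completes.
from dataclasses import dataclass, field

@dataclass
class ContentTopic:
    objective: str
    keywords: list = field(default_factory=list)
    definitions: dict = field(default_factory=dict)
    basic_concepts: list = field(default_factory=list)
    example_applications: list = field(default_factory=list)
    trainer_delivers: list = field(default_factory=list)  # parts presented by the trainer
    user_completes: list = field(default_factory=list)    # activities done by the user

topic = ContentTopic(
    objective="Explain closures",
    keywords=["scope", "free variable"],
    definitions={"closure": "a function bundled with its lexical environment"},
    basic_concepts=["lexical scoping"],
    example_applications=["callbacks", "memoization"],
    trainer_delivers=["definitions", "basic_concepts"],
    user_completes=["write a memoized counter"],
)
print(topic.objective)
```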
Figure 5 illustrates a flow diagram 500 depicting another exemplary embodiment of a method for generating and evaluating one or more user activities, according to an embodiment of the present subject matter. In an embodiment, the experts may create the contents in a text form following standard formats and organization and present them through their metaverse master trainer 112. The user may choose to be taught at a pace suitable for their ability to learn and can even select an alternative trainer and voice that they feel comfortable with. The contents are organized topic-wise, starting with objective, essential keywords (e.g., technical terms), definitions, basic concepts, and example applications. Each content item may separately identify the parts to be delivered by the metaverse master trainer 112 and those to be completed by the user. The expert teaches certain essential keywords or terminologies, basic concepts, and applications and immediately asks the user to do certain activities to reinforce the learning. Activities may aim towards developing listening, speaking, reading, writing, comprehension, logical thinking, creative applications, group interaction, presentations, etc. Further, artificial intelligence techniques may be used to evaluate the user's work, and scores and feedback are given by the simulated expert. For example, in the case of communication skills training, students' pronunciation, accent, clarity, and fluency are evaluated using machine learning algorithms.
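By way of illustration only, one simple machine-measurable proxy for the fluency evaluation mentioned above is sketched below; real systems would use trained speech models, and the filler-word list and metrics are assumptions for the example:

```python
# Minimal sketch (illustrative only): crude fluency proxies computed from a
# speech transcript with timing (speaking rate and filler-word rate).
FILLERS = {"um", "uh", "like", "you know"}

def fluency_metrics(transcript: str, duration_s: float) -> dict:
    words = transcript.lower().split()
    filler_count = sum(1 for w in words if w in FILLERS)
    return {
        "words_per_minute": round(60.0 * len(words) / max(duration_s, 1e-6), 1),
        "filler_rate": round(filler_count / max(len(words), 1), 3),
    }

print(fluency_metrics("um I think closures capture um the enclosing scope", 6.0))
# {'words_per_minute': 90.0, 'filler_rate': 0.222}
```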
Figure 6 illustrates a flow diagram 600 depicting an exemplary embodiment of a method for performing a job interview or a skill training, according to an embodiment of the present subject matter.
In an embodiment, the metaverse master trainer 112 may be configured to perform the job interview practice training. The expert from the industry may create technical and HR questions and sample correct/good or incorrect/bad answers relevant to the specific job and the requirements, and conduct the interview using his/her or the HR personnel's digital version for many applicants simultaneously. The evaluation of the candidate's answer is done by an artificial intelligence technique and scores are stored. Further, questions may also be automatically selected from a pre-created large question bank, based upon dynamic evaluation of the individual's response, making the experience personalized. The question topics may be selected by means of keywords determined by parts-of-speech tagging or named entity recognition and the context similarity of those keywords to identified domains relevant for the interviewing agency or partner. The job interview on a large scale would have company- and position-specific questions and expected answers created by the expert recruiters and hiring managers. The artificial intelligence powered evaluation module 114 may be configured to evaluate the answers based on a set of criteria decided by the company. This step corresponds to the step 207. The evaluation module 114 may be configured to detect any anomalies in behavior, honesty, authenticity, language abilities, etc. instantly and consistently and raise flags if necessary. Further, the metaverse master trainer 112 may be configured to ask predefined questions; the student gets to practice with a sample answer; next, the trainer 112 asks a similar question, and the student gives his/her own answer and gets a score. This practice not only builds confidence in facing a job interview but also helps students prepare and practice good answers.
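By way of illustration only, the keyword-driven selection of the next question from a pre-created question bank may be sketched as follows; the question bank contents and the overlap rule are assumptions for the example:

```python
# Minimal sketch (illustrative only): selecting the next interview question
# from a question bank by keyword overlap with the candidate's previous
# answer, approximating the keyword/named-entity driven selection above.
def next_question(answer: str, question_bank: list) -> str:
    """Pick the bank question sharing the most keywords with the answer."""
    answer_tokens = set(answer.lower().split())
    def overlap(item):
        return len(answer_tokens & set(item["keywords"]))
    best = max(question_bank, key=overlap)
    return best["question"]

bank = [
    {"question": "How does garbage collection work?", "keywords": ["memory", "heap"]},
    {"question": "Explain thread synchronization.", "keywords": ["thread", "lock"]},
]
answer = "I optimized heap memory usage in our service."
print(next_question(answer, bank))   # "How does garbage collection work?"
```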
Further, the training materials and contents may be easily updated or upgraded, and many subjects can be added as the industry needs change over time, without the expert having to record videos. Even new techniques of training using AR/VR, digital twins, etc. can be plugged in. A category of lessons may be designed to help students improve their short-term memory and comprehension of what they are listening to, for a length of up to 2 to 3 sentences, which with practice significantly improves the patience to 'Listen', 'Comprehend', and 'Playback'. This leads to sustained improvement in spoken communication skills: listening, remembering the essence, and communicating it back effectively. When the digital version of the experts is applied to learning language skills, as in ACSESS software, it is like a 'Language Lab' with you anywhere to practice and rapidly improve your communication skills. Using STEM terms and their usage to teach English makes the STEM students and professionals quickly learn and apply English skills, which is crucial to get ahead in their career, business, society, and professional circles. Institutions, colleges, and universities can use the specially designed, easy-to-use, and versatile learning management system features, such as dashboards, to monitor and perform the necessary interventions efficiently for thousands of students. The current practice of trying to achieve the same thing through conventional instructor-led face-to-face or online classes would have required at least one instructor and several teaching assistants dedicated to a class size of 30 to have a similar effect. The digital versions of the experts and digital interviewers help students develop better eye contact skills while communicating, which the existing text-plus-audio versions do not. For the real job interviews conducted by the employers, the digital versions of the industry experts, HR personnel, or hiring agencies using position-specific contents (questions and correct or reasonably good answers) have the advantage of simultaneous interviews of a large number of applicants in an efficient, objective, uniform, and unbiased manner, saving much time and money.
In view of the aforesaid, there are provided various advantageous features relating to the present disclosure:
• The metaverse master trainer may handle many users, perform various tasks of a trainer as a simulation without the expert doing it himself/herself and pay individual attention to each user/student/trainee.
• The metaverse master trainer may use the AI-driven evaluation module 114 to instantly evaluate and give the score, feedback, and suggestions to each individual student.
• The metaverse master trainer may provide quantitative performance scores, and the comparison with peers' averages provides limited gamification that challenges and motivates the student to retry and get better results, leading to further practice of the communication skills. The student experiences measurable progress, which keeps the motivation high for the student to continue learning at his/her pace.
• Being a cloud-hosted web application, the metaverse master trainers may be available 24/7, for any duration, and from anywhere on many platforms, thus giving the learners the flexibility of learning.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.
Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.