
Systems And Methods For Managing A Responsive Virtual Meeting Room

Abstract: A method (600) for managing a responsive virtual meeting room for a plurality of participants. The method includes receiving (602) a set of inputs from the participants. The method also includes providing (608) the virtual meeting room based on the received set of inputs from the plurality of participants. The method further includes monitoring (612) the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Further, the method includes dynamically updating (616) the virtual meeting room based on the designated active participant. Dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting.


Patent Information

Filing Date
02 January 2023
Publication Number
04/2023
Publication Type
INA
Invention Field
ELECTRONICS
Email
mail@lexorbis.com

Applicants

Comviva Technologies Limited
A-26, Info City, Sector 34, Gurgaon-122001, Haryana, India

Inventors

1. JAIN, Manish
43, Vasudha Enclave, Pitampura, Delhi – 110034, India
2. GOYAL, Gaurav
T8-001, CHD Avenue 71, Sector-71, Gurgaon – 122001, Haryana, India

Specification

FIELD OF THE INVENTION
[0001] The present invention generally relates to virtual meetings, and more particularly relates to systems and methods for managing a responsive virtual meeting room.
BACKGROUND
[0002] Virtual conferencing provides a convenient way for participants to meet or hold conversations without being physically together. A conventional virtual conference system typically allows multiple participants to communicate in a collaborative and real-time conference over a network. In this manner, various participants may interact and communicate information in a virtual environment despite being separated geographically.
[0003] Despite the various advantages of virtual conference systems, such systems fail to provide a real sense of connection and/or the real feel of in-person conversations. Specifically, such systems generally display generic screens, such as names, images, and/or videos of the participants during the virtual conference, which fails to engage the participants in the virtual environment. Some virtual conference systems try to provide a real-world experience to the participants. However, such systems require extensive external devices, such as Virtual Reality (VR) devices, Augmented Reality (AR) devices, and various sensors, which makes the overall system complex and expensive. For these reasons, many people still find virtual conferencing to be a poor substitute for face-to-face meetings.
[0004] Therefore, there exists a need to find a solution for the above-mentioned technical problems.
SUMMARY
[0005] This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention nor is it intended for determining the scope of the invention.
[0006] According to one embodiment of the present disclosure, a method for managing a responsive virtual meeting room for a plurality of participants is disclosed. The method includes receiving a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting. The method also includes providing the virtual meeting room based on the received set of inputs from the plurality of participants. The method further includes monitoring the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Furthermore, the method includes dynamically updating the virtual meeting room based on the designated active participant. Also, dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting.
[0007] According to another embodiment of the present disclosure, a method for managing a responsive virtual meeting room for a plurality of participants is disclosed. The method includes receiving a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of a virtual representation of the participants and a selection of participant position in a virtual meeting room corresponding to a virtual meeting. The method also includes generating a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants. The perspective view corresponding to a participant includes a virtual representation of all the participants other than the corresponding participant. The method further includes monitoring the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Furthermore, the method includes dynamically updating each of the perspective views of the virtual meeting room based on the designated active participant. Moreover, the method includes providing each of the dynamically updated perspective views corresponding to each of the plurality of participants as a different data stream.
[0008] According to another embodiment of the present disclosure, a system for managing a responsive virtual meeting room for a plurality of participants is disclosed. The system includes a memory and at least one processor communicably coupled to the memory. The at least one processor is configured to receive a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting. The at least one processor is also configured to provide the virtual meeting room based on the received set of inputs from the plurality of participants. Further, the at least one processor is configured to monitor the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Moreover, the at least one processor is configured to dynamically update the virtual meeting room based on the designated active participant, wherein dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting.
[0009] According to another embodiment of the present disclosure, a system for managing a responsive virtual meeting room for a plurality of participants is disclosed. The system includes a memory and at least one processor communicably coupled to the memory. The at least one processor is configured to receive a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of a virtual representation of the participants and a selection of participant position in a virtual meeting room corresponding to a virtual meeting. The at least one processor is also configured to generate a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants. The perspective view corresponding to a participant includes a virtual representation of all the participants other than the corresponding participant. Further, the at least one processor is configured to monitor the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Moreover, the at least one processor is configured to dynamically update each of the perspective views of the virtual meeting room based on the designated active participant. Also, the at least one processor is configured to provide each of the dynamically updated perspective views corresponding to each of the plurality of participants as a different data stream.
[0010] To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0012] Figure 1 illustrates a schematic block diagram depicting an environment for the implementation of a system for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure;
[0013] Figure 2 illustrates an exemplary block diagram of the system for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure;
[0014] Figure 3 illustrates an exemplary block diagram of various modules of the system for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure;
[0015] Figure 4 illustrates an exemplary process flow for managing a responsive virtual meeting room for a plurality of participants using various modules of the system, according to an embodiment of the present disclosure;
[0016] Figure 5 illustrates an exemplary process flow for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure;
[0017] Figure 6 illustrates a flow chart of a method for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure;
[0018] Figure 7 illustrates a flow chart of a method for managing a responsive virtual meeting room for a plurality of participants, according to another embodiment of the present disclosure;
[0019] Figures 8A-8C illustrate generation of a virtual meeting room with selected positions of the plurality of participants, according to an embodiment of the present disclosure;
[0020] Figures 9A-9C illustrate different perspective views of the virtual meeting room corresponding to the plurality of participants, according to an embodiment of the present disclosure;
[0021] Figures 10A-10C illustrate generation of recorded data streams from different perspective views of the virtual meeting room, according to an embodiment of the present disclosure; and
[0022] Figures 11A-11B illustrate dynamic updating of the virtual meeting room based on a presentation mode, according to an embodiment of the present disclosure.
[0023] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0024] For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the various embodiments and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
[0025] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the invention and are not intended to be restrictive thereof.
[0026] Reference throughout this specification to “an aspect,” “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0027] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
[0028] The terms “user” and “participant” may be used interchangeably throughout the specification.
[0029] The present disclosure aims to provide a responsive virtual meeting room that provides a real-world feel of a discussion between a plurality of participants. Specifically, the present disclosure provides the virtual meeting room which includes a responsive and interactive virtual representation of all the participants involved in the virtual meeting.
[0030] Figure 1 illustrates a schematic block diagram depicting an environment for the implementation of a system 100 for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure.
[0031] The plurality of participants may be represented by respective user devices 102a-102n (also referred to as “the user device 102”). The user device 102 may include, but not limited to, a tablet PC, a Personal Digital Assistant (PDA), a smartphone, a palmtop computer, a laptop computer, a desktop computer, a server, a cloud server, a remote server, a communications device, a wireless telephone, or any other machine controllable through a wireless network and capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. The user devices 102 may enable the participants to interact and communicate in a virtual environment by means of virtual conferences and/or meetings implemented by the system 100. In an embodiment, each of the user devices 102 may include an application configured to act as an interface between the user devices 102 and the system 100. The application may be installed on an Operating System (OS) of the user device 102. The OS typically presents or displays the application through a graphical user interface (“GUI”) of the OS. The application may be configured to receive inputs from the respective participant to utilize the system 100 for a desired virtual meeting. In an exemplary embodiment, at least one of the participants may act as a host. The host may initialize the virtual meeting with a set of inputs. The set of inputs by the host may define one or more characteristics of the virtual meeting. The characteristics of the virtual meeting may include, but not limited to, a date of meeting, a time of meeting, a number of participants, a selection of the virtual meeting room, a selection of one or more virtual objects in the virtual meeting room, and a selection of one or more virtual services. The application installed at the user devices 102 of other participants may also be configured to receive a set of inputs from the participants.
The set of inputs received from other participants may include a selection of an avatar or virtual representation of the participant, and a selection of participant's position in the virtual meeting room. The application may also be configured to allow the participants to join the virtual meeting and experience the virtual meeting room. In an embodiment, each of the set of inputs received by the respective application from the different participants may be passed to the system 100. The user devices 102 may be communicably coupled to the system 100 via a network 104 to enable the application to communicate with the system 100.
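The per-participant inputs described above (avatar selection and seat selection) can be sketched as a simple data structure. This is a minimal illustration only; the class and field names below are assumptions, not taken from the specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-participant inputs; all names are illustrative.
@dataclass
class ParticipantInput:
    participant_id: str
    avatar_id: str       # chosen from pre-stored avatars or uploaded images
    seat_position: int   # chosen position in the virtual meeting room

@dataclass
class MeetingRoomState:
    inputs: dict = field(default_factory=dict)

    def register(self, p: ParticipantInput) -> None:
        # Reject a seat that another participant has already selected.
        taken = {i.seat_position for i in self.inputs.values()}
        if p.seat_position in taken:
            raise ValueError(f"seat {p.seat_position} already taken")
        self.inputs[p.participant_id] = p

room = MeetingRoomState()
room.register(ParticipantInput("alice", "avatar_3", seat_position=0))
room.register(ParticipantInput("bob", "avatar_7", seat_position=1))
```

In this sketch the system collects one such input record per participant before rendering the room, mirroring the flow in which each application passes its participant's selections to the system 100.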
[0032] The network 104 may be configured to establish a connection path between the different user devices 102 and the system 100. In an exemplary embodiment, the network 104 may be any suitable communication network, which may include, but not limited to, wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a ZigBee network, a cellular telephone network such as a 4G or 5G network, an 802.11, 802.16, 802.20, 802.1Q, or a WiMAX network. Further, the network 104 may be a public network, such as the internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.
[0033] The system 100 may be configured to act as a unified server to interact with each of the participants via the respective user devices 102 to create and manage a responsive virtual meeting room. The system 100 may be communicably coupled to each of the user devices 102 via the network 104. The system 100 may establish one or more data connection paths with each of the user devices 102 to receive and/or transmit information. The system 100 may be configured to communicate with the application installed at each of the user devices 102 to receive various inputs from the participants and transmit the desired information/data. In an exemplary embodiment, the system 100 may be configured to initialize the virtual meeting by creating a virtual meeting room based on the set of inputs received from the user. Further, the system 100 may be configured to update the visual representation of the virtual meeting room with the establishment of data communication paths with the participants and the set of inputs received from each of the participants. For example, the virtual meeting room may display the avatars and/or virtual representations of the participants, when the participants join the virtual meeting.
[0034] Further, the system 100 may also be configured to monitor the virtual meeting to identify active participants during the virtual meeting. The system 100 may be configured to dynamically update the virtual meeting room, and specifically, the avatar of the participants based on the identified active participants. The system 100 may be configured to update the characteristics of the avatars corresponding to the participants based on the identified active participants. The characteristics of the avatar which may be updated by the system 100 may include, but not limited to, facial expressions, head position, body position, and viewing angle. For example, if a participant is speaking, the system 100 may identify such a participant as an active participant and update the viewing angle of each of the avatars of the other participants towards such active participant. In an exemplary embodiment, the system 100 may be configured to monitor different data communication paths established with the participants to identify the active participant. In an exemplary embodiment, the system 100 may utilize data transmission over the data connection paths established with the user devices 102 for the identification of active participants. For example, if a participant is speaking, the system 100 may receive data packets at a data connection path which corresponds to audio signals. Thus, when the system 100 receives data packets at such data connection path, the system 100 may identify that the corresponding participant is speaking.
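The packet-based identification of an active participant described above can be sketched as follows. This is a hedged illustration, not the specification's implementation: the silence window, method names, and timestamp handling are assumptions.

```python
import time

# Illustrative sketch: a participant whose audio path has carried packets
# within the last `silence_window` seconds is treated as active.
class ActivityMonitor:
    def __init__(self, silence_window=1.0):
        self.silence_window = silence_window
        self.last_packet = {}  # participant_id -> timestamp of last audio packet

    def on_audio_packet(self, participant_id, now=None):
        # Invoked whenever a data packet arrives on a participant's audio path.
        self.last_packet[participant_id] = now if now is not None else time.time()

    def active_participants(self, now=None):
        now = now if now is not None else time.time()
        return [p for p, t in self.last_packet.items()
                if now - t <= self.silence_window]

mon = ActivityMonitor(silence_window=1.0)
mon.on_audio_packet("alice", now=10.0)
mon.on_audio_packet("bob", now=10.8)
print(mon.active_participants(now=11.5))  # → ['bob']
```

The same pattern would extend to other data connection paths (typing, screen sharing) by keeping one timestamp map per path type.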
[0035] In some embodiments, the system 100 may also be configured to generate a distinct perspective view of the virtual meeting room for each participant based on the position of the participant in the virtual meeting room. The generated perspective views of the virtual meeting room provide an immersive feel to the participants. The perspective view of the virtual meeting room makes the participant feel as if he/she is present in the room. Thus, the system 100 enhances the overall feel and connectedness of the virtual meeting. Further, the system 100 obviates the need for extensive additional software and hardware for providing and managing the interactive and responsive virtual meeting room.
[0036] Further, the system 100 may include the modules/engines/units implemented with an Artificial Intelligence (AI) module that may include a plurality of neural network layers. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), and Restricted Boltzmann Machine (RBM). The learning technique is a method for training a predetermined target device (for example, a robot, or a unified server) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. At least one of a plurality of CNN, DNN, RNN, RBM models and the like may be implemented to thereby achieve execution of the present subject matter’s mechanism through an AI model. A function associated with the AI model may be performed through the non-volatile memory, the volatile memory, and the processor. The processor may include one or a plurality of processors. At this time, the one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU). The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or AI model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
[0037] Figure 2 illustrates an exemplary block diagram of the system 100 for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure. In an exemplary embodiment, the system 100 may be configured to operate as a standalone device or a system based in a server/cloud architecture communicably coupled to the user devices 102.
[0038] The system 100 may be configured to receive and process inputs from the user devices 102 to create and manage the virtual meeting room for a virtual meeting. The system 100 may include a processor/controller 202, an Input/Output (I/O) interface 204, one or more modules 206, a transceiver 208, and a memory 210.
[0039] In an exemplary embodiment, the processor/controller 202 may be operatively coupled to each of the I/O interface 204, the modules 206, the transceiver 208, and the memory 210. In one embodiment, the processor/controller 202 may include at least one data processor for executing processes in Virtual Storage Area Network. The processor/controller 202 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, Digital Signal Processing (DSP) units, etc. In one embodiment, the processor/controller 202 may include a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor/controller 202 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor/controller 202 may execute a software program, such as code generated manually (i.e., programmed) to perform the desired operation.
[0040] The processor/controller 202 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204. The I/O interface 204 may employ communication code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, or the like, etc.
[0041] Using the I/O interface 204, the system 100 may communicate with one or more I/O devices, specifically, to the user devices 102 associated with the plurality of participants. Other examples of the input device may be an antenna, microphone, touch screen, touchpad, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc. In an embodiment, the I/O interface 204 may enable input and output to and from the system 100 using suitable devices such as, but not limited to, a display, a keyboard, a mouse, a touch screen, a microphone, a speaker, and so forth.
[0042] The processor/controller 202 may be disposed in communication with a communication network via a network interface. In an embodiment, the network interface may be the I/O interface 204. The network interface may connect to the communication network to enable connection of the system 100 with the outside environment and/or user device/system. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface and the communication network, the system 100 may communicate with other devices.
[0043] In an exemplary embodiment, the processor/controller 202 may be configured to receive a set of inputs from each of the plurality of participants via the user devices 102. In an embodiment, the set of inputs received from a participant acting as a host may include one or more characteristics of the virtual meeting. Further, the characteristics of the virtual meeting may include, but not limited to, a date of meeting, a time of meeting, a number of participants, a selection of the virtual meeting room, a selection of one or more virtual objects, and a selection of one or more virtual services. In an embodiment, the processor/controller 202 may provide a plurality of virtual rooms having different characteristics such as, but not limited to, a theme of the room, space of a room, amenities in the room, and so forth. Further, the selection of the virtual meeting room may indicate a selection of a meeting room from the plurality of virtual meeting rooms provided by the processor/controller 202. Moreover, the virtual services may include, but not limited to, tea/coffee services, water dispensers, projectors, cookies, sound system, name cards, note pads, and so forth. The processor/controller 202 may generate the virtual meeting room based on the received set of inputs from the host participant or another participant in the virtual meeting. In another embodiment, the set of inputs from the host and other participants may include, but not limited to, a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting. In an embodiment, the participant may provide the selection of the avatar from a plurality of pre-stored avatars which may be provided by the processor/controller 202 via the application’s interface at each of the user devices 102a-102n. In another embodiment, the participant may provide the selection of the avatar from a plurality of images created and/or uploaded by the participant.
Further, the selection of participant position may indicate a relative position of the participant with respect to the other participants, such as which participants will be sitting adjacent to the participant, opposite to the participant, and so forth.
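The host's meeting characteristics listed above might be captured in a configuration object along the following lines. This is purely a sketch; every field name and value here is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical configuration mirroring the host's set of inputs:
# date, time, number of participants, room selection, virtual objects,
# and virtual services. Names are illustrative assumptions.
@dataclass
class MeetingConfig:
    date: str
    time: str
    num_participants: int
    room_theme: str                          # one of the rooms offered by the system
    virtual_objects: list = field(default_factory=list)
    virtual_services: list = field(default_factory=list)

cfg = MeetingConfig(
    date="2023-01-02",
    time="10:00",
    num_participants=4,
    room_theme="boardroom",
    virtual_objects=["projector", "whiteboard"],
    virtual_services=["tea/coffee", "name cards"],
)
```

A configuration of this shape would be received once from the host when the virtual meeting is initialized, and used to generate the virtual meeting room.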
[0044] The processor/controller 202 may be configured to provide the virtual meeting room based on the received set of inputs from the plurality of participants. For example, the processor/controller 202 may be configured to generate a visual representation of a meeting room which is based on the inputs received from the host participant and the other participants. The virtual meeting room may resemble a real meeting room including representation of a table, chairs, and other amenities as selected by the participants. In an embodiment, the processor/controller 202 may be configured to generate a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants. For example, based on the selection of participant position in the virtual meeting room, the processor/controller 202 may generate the perspective view. The perspective view corresponding to a participant includes a virtual representation/avatar of all the participants other than the corresponding participant. In an alternative embodiment, the perspective view corresponding to a participant may include either a front or a side view of the virtual representations/avatars of all the participants other than the corresponding participant and a backside of the virtual representation/avatar of the corresponding participant. Thus, a participant may be able to visualize the virtual meeting room as if he/she were present in said meeting room.
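One minimal way to derive a per-participant perspective is to list the other avatars in seating order relative to the viewer's own seat. The clockwise-indexed seat layout below is an assumption made for illustration:

```python
# Sketch of per-participant perspective views: each participant's view
# contains every avatar except their own, ordered around the table
# starting from the seat next to the viewer. Purely illustrative.
def perspective_view(seats, viewer):
    """seats: mapping of seat index -> participant id; viewer: participant id.
    Returns the other participants in seating order from the viewer's seat."""
    order = sorted(seats)
    n = len(order)
    viewer_seat = next(s for s, p in seats.items() if p == viewer)
    start = order.index(viewer_seat)
    return [seats[order[(start + k) % n]] for k in range(1, n)]

seats = {0: "alice", 1: "bob", 2: "carol", 3: "dan"}
print(perspective_view(seats, "bob"))  # → ['carol', 'dan', 'alice']
```

Each such view would then be rendered and streamed to its participant separately, consistent with providing each perspective view as a different data stream.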
[0045] In some embodiments, the processor/controller 202 may be configured to monitor the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Specifically, the system 100 may establish one or more data connection paths with each of the plurality of participants/user devices 102. The processor/controller 202 may designate the at least one participant as the active participant based on real-time data transmission in the one or more data connection paths established with the plurality of participants/user devices 102. In an embodiment, the real-time data transmission may be caused by any suitable action of the participant, such as, when the participant speaks, when the participant types, when the participant shares some data, and so forth. For example, if a participant speaks, the corresponding user device may transmit data packets on the data connection path established for audio signals. Similarly, if a participant shares data, the user device may transmit data packets on the data connection path established for the data transmission. Thus, the processor/controller 202 may monitor each of the data connection paths established with the user devices 102 to identify actions of the participants. Each of said actions may cause a flow of data packets in the data paths which may be used by the processor/controller 202 to identify the active participant.
[0046] The processor/controller 202 may be configured to dynamically update the virtual meeting room based on the designated active participant. Specifically, the processor/controller 202 may dynamically update the virtual meeting room by updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting. The characteristics of the avatar may include, but not limited to, facial expressions, head position, body position, and viewing angle. For example, when the processor/controller 202 identifies an active participant, the processor/controller 202 may change the viewing angle of all the other participants toward the active participant. Similarly, upon identifying the active participant, the processor/controller 202 may change the facial expressions of the avatars of other participants to gaze at the active participant, move the head position of the avatars towards the active participant, or completely move the body of the avatars to face the active participant. In an embodiment, such characteristics and associated actions of the avatars may be prestored in a database and may be implemented based on the relevance and context of the virtual meeting. For example, when joining the virtual meeting, the avatars may be associated with actions such as, but not limited to, entering the virtual meeting room, sitting on a designated chair, switching on a personal computing device, opening a notepad, and so forth. Similarly, during the virtual meeting, the avatars may be associated with actions such as, but not limited to, taking notes, listening, and/or speaking. Further, during the exit from the virtual meeting, the avatars may be associated with actions such as, but not limited to, shutting down the personal computing device, closing the notepad, leaving the chair, or leaving the virtual meeting room.
Thus, the processor/controller 202 may prestore the avatars and associated actions in view of the corresponding context of the virtual meeting. In some embodiments, the processor/controller 202 may generate action(s) of the avatars during the initial establishment and/or during the release of the data connection paths. Examples of the actions of the avatars during the initial establishment of the data connection paths may include, but not limited to, the avatar entering the virtual meeting room, the avatar sitting in a selected position, and the avatar introducing the corresponding participant. Further, examples of the actions of the avatars during the release of the data connection paths may include, but not limited to, the avatar leaving the selected position, the avatar leaving the virtual meeting room, and the avatar providing signing-off comments and/or gestures.
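The context-to-action lookup described in paragraphs [0046] and above can be sketched as a simple prestored mapping. The table contents and the function name `actions_for_context` are illustrative assumptions; the disclosure only states that such characteristics and actions may be prestored in a database.

```python
# Hypothetical prestored mapping of meeting context to avatar actions,
# mirroring the joining / in-meeting / exiting phases described above.
PRESTORED_ACTIONS = {
    "joining": ["enter_room", "sit_on_designated_chair",
                "switch_on_computing_device", "open_notepad"],
    "in_meeting": ["take_notes", "listen", "speak"],
    "exiting": ["shut_down_computing_device", "close_notepad",
                "leave_chair", "leave_room"],
}

def actions_for_context(context):
    """Return the prestored avatar actions for a given meeting context,
    or an empty list when the context is unknown."""
    return PRESTORED_ACTIONS.get(context, [])
```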
[0047] Further, the processor/controller 202 may be configured to dynamically update the virtual meeting room to display virtual service(s) requested by the participants. For example, if a participant requests tea, the processor/controller 202 may display a person and/or an avatar of a person bringing the tea for the participant. In an embodiment, the person bringing the tea may be a support staff member who is not a part of the discussion in the virtual meeting. Similarly, if a participant provides inputs to have a cookie, the processor/controller 202 may display the corresponding avatar picking up a cookie from the table and eating it. Similarly, the processor/controller 202 may be configured to display other services such as water services, switching ON of a projector, switching ON of an audio system, and so forth. In an embodiment, the participants may provide inputs for the virtual services via a separate interface provided by the system 100.
[0048] In an exemplary embodiment, the processor/controller 202 may be configured to generate a data stream corresponding to the different perspective views of the participants. The data stream may include a video component, an audio component, and/or a combination thereof. Further, the processor/controller 202 may be configured to generate an overall perspective view of the virtual meeting room for a non-participant user. The overall perspective view of the virtual meeting room includes a virtual representation of each of the plurality of participants. Moreover, the processor/controller 202 may be configured to store the overall perspective view as a data stream.
[0049] The processor/controller 202 may also be configured to process the different data streams to identify one or more events corresponding to the plurality of participants. Examples of the events may include, but not limited to, speaking action(s) by the participants, presentation by the participants, typing action by the participants, and so forth. The processor/controller 202 may be configured to store timestamps and/or data at each of the one or more events. Further, the processor/controller 202 may provide access to the different data streams based on the one or more identified events. For example, if a participant searches for a slide of the presentation that he/she discussed during the meeting, the processor/controller 202 may allow the participant to directly view the slide without analyzing the complete data stream. Specifically, the processor/controller 202 may allow the participant to filter the stored data streams based on the events and/or timestamps. Examples of filtering the stored data streams based on timestamp may include allowing a participant to view the data stream exactly at a requested timestamp. In another embodiment, a summarized version of each data stream may be prepared by the processor/controller 202 which only includes brief durations around one or more timestamps associated with activities of the one or more avatars. The summarized version of the data stream may define various events of the virtual meeting with their corresponding timestamps. Examples of events of the virtual meeting may include, but not limited to, when a particular participant spoke, when a presentation was initiated, when a recording of the meeting was initiated, when a participant joined the virtual meeting, and so forth.
In some embodiments, the summarized version of the virtual meeting may be stored in a transcript format, and the processor/controller 202 may utilize techniques such as, but not limited to, speech-to-text conversion, natural language processing, and so forth, to generate such summarized versions of the data streams.
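The event-indexed access and summarization described in paragraph [0049] can be sketched as below. The event record shape (dicts with `ts`, `participant`, and `kind` keys) and the five-second clip margin are assumptions made for illustration, not details from the disclosure.

```python
def filter_events(events, participant=None, kind=None):
    """Filter stored event records (dicts with 'ts', 'participant', 'kind')
    by participant and/or event kind, mirroring event-based stream access."""
    out = []
    for ev in events:
        if participant is not None and ev["participant"] != participant:
            continue
        if kind is not None and ev["kind"] != kind:
            continue
        out.append(ev)
    return out

def summarize(events, margin=5.0):
    """Return (start, end) clip windows of `margin` seconds around each
    event timestamp, forming a summarized version of a data stream."""
    return [(ev["ts"] - margin, ev["ts"] + margin) for ev in events]
```

A participant searching for their own presentation could then jump directly to the matching clip windows instead of scanning the full recording.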
[0050] In an embodiment, the processor/controller 202 may also determine activation of a presentation mode by at least one participant in the virtual meeting room. The processor/controller 202 may be configured to determine the activation of the presentation mode based on command(s) received from the application installed on the user device 102. Further, upon determining the activation of the presentation mode, the processor/controller 202 may dynamically update each of the perspective views of the virtual meeting room. Specifically, the processor/controller 202 may display the documents/display screen projected by the participant in each of the perspective views. Further, the processor/controller 202 may also display the avatar of the participant who has activated the presentation mode. Moreover, the processor/controller 202 may enable a user to toggle between the presentation mode with the projected display screen and the discussion mode with the virtual representation of all the participants. In an embodiment, the toggle between the presentation mode and the discussion mode may be performed automatically by the processor/controller 202 based on predefined rules. Examples of predefined rules may include, but not limited to, a lapse of a predetermined timer, identification of an active participant, identification of another action by the participants, and so forth. Alternatively, the participants may toggle between the discussion mode and the presentation mode manually as per the requirement.
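The rule-based automatic toggle described in paragraph [0050] can be sketched as a small decision function. The rule set here (a lapsed timer or a newly active speaker returns the room to discussion mode) is one illustrative interpretation of the predefined rules; the function name and parameters are assumptions.

```python
def next_mode(current_mode, elapsed, timer_limit, active_participant_speaking):
    """Decide the display mode under two illustrative predefined rules:
    while in presentation mode, a lapsed timer or an active (speaking)
    participant toggles back to discussion mode; otherwise keep the mode."""
    if current_mode == "presentation" and (
            elapsed >= timer_limit or active_participant_speaking):
        return "discussion"
    return current_mode
```

A manual toggle, as also contemplated in the paragraph, would simply bypass this function and set the mode directly.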
[0051] In an embodiment, the processor/controller 202 may be configured to store the data streams in a database 212 of the memory 210. In some embodiments, the memory 210 may be communicatively coupled to the processor/controller 202. The memory 210 may be configured to store data and instructions executable by the processor/controller 202. In one embodiment, the memory 210 may communicate via a bus within the system 100. The memory 210 may include, but not limited to, a non-transitory computer-readable storage media, such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, and the like. In one example, the memory 210 may include a cache or random-access memory for the processor/controller 202. In alternative examples, the memory 210 is separate from the processor/controller 202, such as a cache memory of a processor, the system memory, or other memory. The memory 210 may be an external storage device or database for storing data. The memory 210 may be operable to store instructions executable by the processor/controller 202. The functions, acts, or tasks illustrated in the figures or described may be performed by the programmed processor/controller 202 executing the instructions stored in the memory 210. The functions, acts, or tasks are independent of the particular type of instruction set, storage media, processor, or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code, and the like, operating alone or in combination.
Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like.
[0052] In some embodiments, the one or more modules 206 may be included within the memory 210. The one or more modules 206 may include a set of instructions that may be executed to cause the system 100 to perform any one or more of the methods/processes disclosed herein. In some embodiments, the one or more modules 206 may be configured to perform one or more operations of the processor/controller 202. The one or more modules 206 may be configured to perform various desired operations of the present disclosure using the data stored in the database 212/memory 210 to generate and/or manage the virtual meeting room for the virtual meeting as discussed herein. In an embodiment, each of the one or more modules 206 may be a hardware unit that may be outside the memory 210. Further, the memory 210 may include an operating system 214 for performing one or more tasks of the system 100, as performed by a generic operating system in the communications domain. The transceiver 208 may be configured to receive and/or transmit signals to and from the user devices 102 associated with the participants. In one embodiment, the database 212 may be configured to store the information as required by the one or more modules 206 and the processor/controller 202 to perform one or more functions for generating and managing the virtual meeting rooms for the virtual meetings.
[0053] Further, the present disclosure contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal. Further, the instructions may be transmitted or received over the network via a communication port or interface or using a bus (not shown). The communication port or interface may be a part of the processor/controller 202 or may be a separate component. The communication port may be created in software or may be a physical connection in hardware. The communication port may be configured to connect with a network, external media, the display, or any other components in the system, or combinations thereof. The connection with the network may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly. Likewise, the additional connections with other components of the system 100 may be physical or may be established wirelessly. The network may alternatively be directly connected to the bus. For the sake of brevity, the architecture, and standard operations of the operating system 214, the memory 210, the database 212, the processor/controller 202, the transceiver 208, and the I/O interface 204 are not discussed in detail.
[0054] Figure 3 illustrates an exemplary block diagram of various modules of the system 100 for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure. Figure 4 illustrates an exemplary process flow for managing a responsive virtual meeting room for a plurality of participants using various modules of the system 100, according to an embodiment of the present disclosure. Figures 3 and 4 are explained in conjunction with each other for the sake of better explanation.
[0055] The system 100 may communicate with each of the user devices 102 via a suitable client interface such as, but not limited to, a desktop application 302, a web browser 304, a mobile application 306, and other suitable modules/components. The desktop application 302 may be installed on a computing device of the participant of the virtual meeting. The desktop application 302 may be a software package including instructions for a processing system of the computing device to implement the desired functionality of the system 100. The mobile application 306 may be installed on a mobile device of the participant. The mobile application 306 may enable the associated participant to participate in the virtual meeting via a personal mobile device. The mobile application 306 may also be a software package including instructions for a processing system of the mobile device to implement the desired functionality of the system 100. Further, the web browser 304 may be an application software which may allow the participant to participate in the virtual meeting via any suitable user device 102 using a weblink. The web browser 304 may be accessed by the computing device and/or the mobile device associated with each participant. Further, the system 100 may implement the desired functionality via the weblink to be accessed using the web browser 304. The user device 102 may also include any additional module/component to participate in the virtual meeting implemented using the system 100.
[0056] The system 100 may include various modules such as, a profile management module 312, a meeting management module 314, a controller module 316, an AI module 318, a room generation module 320, an avatar and gesture management module 322, a call management module 324, a chat management module 326, a notification module 328, and a display management module 330. The modules 312-330 may be a part of the one or more modules 206 of the system 100. Further, the various operations of the modules 312-330 may be explained in view of the process flow illustrated in Fig. 4.
[0057] The profile management module 312 may be configured to manage profiles of the participants. For example, the profile management module 312 may be configured to manage a demographic profile of each of the participants of the virtual meeting. The demographic profile of the participant may include information such as, but not limited to, name, age, sex, location, email, department, designation, and so forth. Further, the profile of participants may include information such as, but not limited to, username, password, display name, display image, pre-selected avatar, audio/video configuration settings. The profile management module 312 may be communicably coupled with the memory 210 to store the profiles of the participants. The profile management module 312 may also be communicably coupled with the meeting management module 314 to communicate data/information during the virtual meeting. For example, whenever a participant joins the virtual meeting, the profile of the participant may be fetched from the memory 210 and the virtual meeting may be generated based on the information included in the profile of the participant. Further, the participant may modify/update the information corresponding to the profile of the participant based on the requirements. In an embodiment, the profile management module 312 may receive a request from the meeting management module 314 to manage the profile of the meeting and/or the participants. Further, the profile management module 312 may be configured to transmit the profiles of the meeting and/or the participants when requested by the meeting management module 314.
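The participant profile described in paragraph [0057] can be sketched as a simple record type. The field names and defaults below are an assumed shape based on the information listed above; the disclosure does not prescribe a data structure.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantProfile:
    """Illustrative profile record with the fields named in paragraph [0057]."""
    username: str
    display_name: str
    email: str = ""
    department: str = ""
    designation: str = ""
    preselected_avatar: str = "default"
    av_settings: dict = field(default_factory=dict)  # audio/video configuration
```

A profile management module could then fetch such a record from storage when a participant joins and generate the meeting view from its fields.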
[0058] The meeting management module 314 may be configured to manage the virtual meetings. Specifically, the meeting management module 314 may be configured to schedule the virtual meetings, generate calendar invites for the virtual meeting, identify time zones for the virtual meeting, and so forth. The meeting management module 314 may be configured to receive inputs from the participants via the client interface. In an exemplary embodiment, the participant(s) may request the meeting management module 314 to create, join, and/or record the meeting. The meeting management module 314 may include an interface to communicate with the client interface and receive the request(s) of the participants. The meeting management module 314 may be communicably coupled to the controller module 316 to manage the virtual meeting based on the received inputs from the participant(s).
[0059] The controller module 316 may be communicably coupled to various modules including, the meeting management module 314, an admin interface 402, the chat management module 326, the call management module 324, and the notification module 328. The controller module 316 may be configured to collect various data for the meeting management module 314 for effective and efficient management of the virtual meeting. The controller module 316 may also be configured to control meeting actions, such as, but not limited to, joining the meeting, leaving the meeting, mute/unmute options, video, and audio control, presentation action, recording, sharing, and so forth. Further, the admin interface 402 may also be a part of the system 100 and/or the modules 206. The admin interface 402 may be configured to upload various virtual rooms and transmit the uploaded virtual rooms to the controller module 316. In an embodiment, the admin interface 402 may be configured to enable an admin user to interact with the system 100 to provide user-defined inputs/data.
[0060] The chat management module 326 may be configured to manage chat messages between the participants and the system 100. The chat management module 326 may be configured to receive data packets corresponding to the chat messages and share the data packets and/or information with the controller module 316.
[0061] The call management module 324 may be configured to manage calls such as, but not limited to, audio/video Session Initiation Protocol (SIP) calls, Internet Protocol (IP) calls, Public Switched Telephone Network (PSTN) calls, and so forth. Specifically, the call management module 324 may be configured to manage calls between the user devices 102 and the system 100. In an exemplary embodiment, the call management module 324 may be configured to establish a data communication path for calls between each of the user devices 102 and the system 100. The call management module 324 may establish a persistent connection with each participant. Whenever a user speaks, the call management module 324 may receive information in the form of packets. The call management module 324 may be configured to transmit the received packets to the controller module 316.
[0062] The controller module 316 may be configured to receive packets/information/data from the admin interface 402, the meeting management module 314, the call management module 324, and the chat management module 326 and share such received packets/information/data with the AI module 318. The AI module 318 may be configured to process the packets/information/data received from the controller module 316 to generate and update the virtual meeting room. The AI module 318 may be implemented using techniques such as, but not limited to, deep learning and machine learning. The AI module 318 may be configured to generate various displays such as, the different perspective views of the virtual meeting room, the overview of the virtual meeting room, the avatars, and corresponding gestures. The AI module 318 may be configured to generate the various displays based on the real-time data received from the various modules and/or the controller module 316. For example, when a participant speaks, the AI module 318 may be configured to generate a view of the virtual meeting room where every other participant is looking at the speaking participant. The AI module 318 may also be configured to generate and store the data streams corresponding to different views of the virtual meeting room.
[0063] In an embodiment, the AI module 318 may be communicably coupled to the room generation module 320 to generate the virtual meeting room. The room generation module 320 may be configured to generate, store, and manage meeting room information/data. For example, the room generation module 320 may include various rooms with different themes, names, capacities, and availability. In an exemplary embodiment, the room generation module 320 may be configured to generate a plurality of room selections for a host participant. Also, the room generation module 320 may be configured to generate and/or present the virtual meeting room based on the received inputs from the participants. In an embodiment, the room generation module 320 may receive inputs from the participants via the AI module 318 to generate the virtual meeting room. Specifically, the AI module 318 may be configured to share the processed input data to the room generation module 320 for the generation of different views of the virtual meeting room. The room generation module 320 may be communicably coupled to the memory 210 to store the generated virtual rooms and/or associated information.
[0064] Further, the AI module 318 may be communicably coupled to the avatar and gesture management module 322. The AI module 318 may be configured to share the processed packets/information/data pertaining to avatars and corresponding gestures to the avatar and gesture management module 322. The avatar and gesture management module 322 may be configured to generate avatars corresponding to the participants based on received information from the AI module 318. The avatar and gesture management module 322 may also be configured to generate human-like actions/gestures of the avatar based on received inputs from the AI module 318. The avatar and gesture management module 322 may be configured to share the generated information/data corresponding to the avatars and corresponding gestures to the AI module 318 during the virtual meeting.
[0065] The AI module 318 may be communicably coupled to the display management module 330 to generate various displays of the virtual meeting rooms, the avatars, and corresponding gestures based on data/information received from the controller module 316, the room generation module 320, and the avatar and gesture management module 322. The display management module 330 may be configured to assist the AI module 318 to generate different displays of the virtual meeting room based on different modes of the virtual meeting. The different modes of the virtual meeting may include a discussion mode where the participants speak and discuss, or a presentation mode where at least one participant projects a screen. The display management module 330 may also be coupled to the memory 210 to store and retrieve information required to generate the desired displays.
[0066] The controller module 316 may also be coupled to the notification module 328 and share the information/data received from various other modules. The notification module 328 may be configured to generate various notifications for the virtual meeting based on the received information/data from the controller module 316. Such notifications may include, but not limited to, calendar invites, email invites, message notifications, incoming call invites, timeslots booking, and so forth. The notification module 328 may be configured to generate and share the notifications with the participants as per the requirements.
[0067] Embodiments as explained above are exemplary in nature and the system 100 may include and/or omit any module/component based on the requirement. Further, the system 100 may follow any suitable sequence of operations required to achieve the desired objective of the present disclosure.
[0068] Figure 5 illustrates an exemplary process flow 500 for managing a responsive virtual meeting room for a plurality of participants, according to another embodiment of the present disclosure. According to the process flow 500, a host participant H may open an interface to schedule a virtual meeting. The interface may be provided by the application installed on the user device corresponding to the host participant H. The host participant H may provide a set of inputs to the system 100 to schedule the virtual meeting. The set of inputs may include, but not limited to, a selection of a room, a selection of avatar, a number of seats/participants, and other essential details required to schedule the virtual meeting.
[0069] Next, the system 100 may receive the set of inputs from the host participant. The system 100 may validate the set of inputs. Specifically, the system 100 may determine whether the received set of inputs corresponds to a valid virtual meeting request or not. In an embodiment, the set of inputs may include a selection of a meeting invite/notification, and the system 100 may validate the participant upon correct selection of the meeting invite/notification. Moreover, the set of inputs may also include additional information such as, but not limited to, user profile, access request, meeting credentials, and so forth. The system 100 may validate the participant based on said additional information received as the set of inputs. Upon successful validation, the system 100 may send a confirmation acknowledging receipt of the set of inputs and/or generation of the virtual meeting. The system 100 may analyze the details of the virtual meeting received as the set of inputs from the host participant to identify the other participants A-C of the virtual meeting. The system 100 may send an invitation to each of the participants A-C. Next, when the host participant H joins the meeting, the system 100 may validate the joining of the host participant H and establish a persistent connection with the user device corresponding to the host participant H. The system 100 may also generate a view for the host participant H. The view for the host participant H may illustrate a selected virtual meeting room, a number of vacant chairs corresponding to the invited participants, and one or more other virtual services which may be selected by the host participant H. Further, the system 100 may determine whether the host participant H has initiated a recording of the virtual meeting.
Upon determining that the recording of the virtual meeting has been requested, the system 100 may start recording the virtual meeting from the perspective of the host participant H, the other participants A-C, and for a non-participant.
[0070] Further, the system 100 may update the perspective view of the virtual meeting room for the host participant H to illustrate the human-like actions of the avatar corresponding to the host participant H. For example, the system 100 may display the avatar corresponding to the host participant H entering the virtual meeting room.
[0071] Next, the system 100 may receive an indication from each of the plurality of participants A-C indicating the joining of the meeting by the participants. Upon receiving said indications, the system 100 may validate the joining of the participants A-C and establish a persistent connection with the user device corresponding to each of the participants A-C to exchange information. Further, to validate a participant, the system 100 may check whether the participant is an intended participant, or whether the participant has joined in response to a valid meeting invitation. Thereafter, the system 100 may generate different perspective views of the virtual meeting room for each of the plurality of participants A-C. The perspective view corresponding to a participant may include an avatar of all the participants other than the corresponding participant. For example, the perspective view generated for the participant A may display all the participants B-C and the host participant H. Similarly, the perspective views may be generated for each of the participants B-C. The system 100 may also record the different perspective views corresponding to each of the participants H and A-C and generate different data streams corresponding to each of the different perspective views. Further, the system 100 may also record and/or store each of the generated perspective views as data streams corresponding to each of the participants A-C, host H, and the non-participant.
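The per-participant perspective views described in paragraph [0071], each containing avatars of everyone except the viewer, can be sketched as below. The function names are assumptions made for illustration.

```python
def perspective_view(viewer, participants):
    """A participant's perspective view contains the avatars of all
    participants other than the viewer, as in paragraph [0071]."""
    return [p for p in participants if p != viewer]

def all_perspective_views(participants):
    """Generate the perspective view for every participant in the meeting."""
    return {p: perspective_view(p, participants) for p in participants}
```

For the example in the text, participant A's view would contain the host H and participants B and C, but not A's own avatar.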
[0072] Next, when the host participant H speaks, the system 100 may designate the host participant H as the active participant. Specifically, when the host participant H speaks, the system 100 may receive information/data over the established persistent connection. Therefore, the system 100 may identify that the participant H is speaking based on the received information/data over the established persistent connection. Further, the system 100 may update the virtual meeting room to make the avatar corresponding to the host participant H perform a speaking action, and the avatars corresponding to the other participants A-C look toward the avatar of the host participant H. Therefore, in the perspective view of the host participant H, the system 100 may display all the participants A-C looking at the host participant H. Similarly, in the perspective view of the participant A, the system 100 may display the avatar of the host participant H speaking and the avatars of the participant B and the participant C looking toward the avatar of the host participant H. Similarly, the system 100 may update the perspective views of the participant B and the participant C. The system 100 may also update the data streams corresponding to the recordings of the different perspective views of the virtual meeting room based on the dynamic update of the virtual meeting room.
[0073] Next, when the participant B starts speaking, the system 100 may receive information/data over the established persistent connection with the participant B. Thereafter, the system 100 may update the perspective views of each of the participants. For example, for the host participant H, the system 100 may generate the perspective view displaying the participant B as speaking and the other participants A and C looking toward the participant B. Similarly, for the participant A, the system 100 may generate the perspective view displaying the participant B as speaking and the other participants, including the host participant H and the participant C, looking toward the participant B. Similarly, the system 100 may generate the perspective views corresponding to the participants B and C. Thus, the system 100 may intelligently monitor the virtual meeting and the data transmission to dynamically update the virtual meeting room and the gestures of the avatars. The system 100 may also update the data streams corresponding to a recording of different perspective views of the virtual meeting room based on the dynamic update of the virtual meeting room.
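The gaze updates walked through in paragraphs [0072]-[0073], where the designated speaker's avatar performs a speaking action while every other avatar turns toward it, can be sketched as follows. The action labels (`"speak"`, `"look_at:…"`) are illustrative assumptions.

```python
def update_gaze(avatars, speaker):
    """Map each avatar to its action when `speaker` is designated active:
    the speaker's avatar speaks, every other avatar looks toward the speaker,
    as in the H-then-B example above."""
    return {a: ("speak" if a == speaker else f"look_at:{speaker}")
            for a in avatars}
```

When the active participant changes from H to B, rerunning the function with the new speaker regenerates every avatar's gesture in one pass.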
[0074] Figure 6 illustrates a flow chart of a method 600 for managing a responsive virtual meeting room for a plurality of participants, according to an embodiment of the present disclosure.
[0075] At step 602, the method 600 may include receiving a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting.
[0076] At step 604, the method 600 may include establishing one or more data connection paths with each of the plurality of participants.
[0077] At step 606, the method 600 may include generating the virtual meeting room based on the received set of inputs. The received set of inputs comprises one or more characteristics of the virtual meeting. Further, the one or more characteristics of the virtual meeting comprise at least one of a date of meeting, a time of meeting, a number of participants, a selection of virtual meeting room, a selection of one or more virtual objects, and a selection of one or more virtual services. At step 608, the method 600 may include providing the virtual meeting room based on the received set of inputs from the plurality of participants.
[0078] Next, at step 610, the method 600 may include generating one or more actions of avatars corresponding to the participants during an initial establishment of the one or more data connection paths with the plurality of participants. The one or more actions comprise at least one of the one or more avatars entering the virtual meeting room, the avatar sitting in a selected position, and the avatar introducing the corresponding participant.
[0079] At step 612, the method 600 may include monitoring the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting. Further, at step 614, the method 600 may include designating the at least one participant as the active participant based on real-time data transmission in the one or more data connection paths established with the plurality of participants.
[0080] Next, at step 616, the method 600 may include dynamically updating the virtual meeting room based on the designated active participant. In an embodiment, dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting. The one or more characteristics of the avatars comprise at least one of the facial expressions, head position, body position, and viewing angle.
[0081] At step 618, the method 600 may include receiving, from at least one participant, a request for at least one of the one or more virtual services in the virtual meeting room. Next, at step 620, the method 600 may include dynamically updating the virtual meeting room to display the requested at least one virtual service in the virtual meeting room.
[0082] Next, at step 622, the method 600 may include generating one or more actions of avatars corresponding to the participants during a release of the one or more data connection paths with the plurality of participants. The one or more actions during the release of data connection paths comprise at least one of the one or more avatars leaving the selected position, the avatar leaving the virtual meeting room, and the avatar providing signing off comments.
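The avatar actions of steps 610 and 622 can be sketched as connection lifecycle hooks. All names here (`AvatarLifecycle`, `on_connect`, `on_release`, the action strings) are hypothetical illustrations of the entering/sitting/introducing and sign-off/leaving actions described, not the source's API.

```python
class AvatarLifecycle:
    """Records avatar actions triggered by data connection events."""

    def __init__(self):
        self.actions = []

    def on_connect(self, name, position):
        # Step 610: avatar enters, sits at the selected position, introduces itself.
        self.actions += [(name, "enters room"),
                         (name, f"sits at {position}"),
                         (name, "introduces participant")]

    def on_release(self, name):
        # Step 622: sign-off comments, then leaving the position and the room.
        self.actions += [(name, "gives sign-off comments"),
                         (name, "leaves position"),
                         (name, "leaves room")]

meeting = AvatarLifecycle()
meeting.on_connect("A", "seat-2")
meeting.on_release("A")
```

Driving these hooks from the actual connection-path establishment and release events would produce the animated entries and exits the method describes.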
[0083] Embodiments, as discussed above, are exemplary in nature and the method 600 may include any additional step or omit any of the above-mentioned steps to perform the desired objective of the present disclosure. Further, the steps of the method 600 may be performed in any suitable order and/or by any suitable component of the system 100 in order to achieve the desired advantages.
[0084] Figure 7 illustrates a flow chart of a method 700 for managing a responsive virtual meeting room for a plurality of participants, according to another embodiment of the present disclosure.
[0085] At step 702, the method 700 may include receiving a set of inputs from each of the plurality of participants. The set of inputs comprises at least one of a selection of a virtual representation of the participants and a selection of participant position in a virtual meeting room corresponding to a virtual meeting.
[0086] At step 704, the method 700 may include generating a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants. The perspective view corresponding to a participant includes a virtual representation of all the participants other than the corresponding participant.
[0087] At step 706, the method 700 may include monitoring the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting.
[0088] At step 708, the method 700 may include dynamically updating each of the perspective views of the virtual meeting room based on the designated active participant.
[0089] Next, at step 710, the method 700 may include providing each of the dynamically updated perspective views corresponding to each of the plurality of participants as a different data stream.
[0090] At step 712, the method 700 may include generating an overview perspective view of the virtual meeting room for a non-participant user. The overview perspective view of the virtual meeting room includes a virtual representation of each of the plurality of participants.
[0091] Next, at step 714, the method 700 may include determining an activation of presentation mode by at least one participant in the virtual meeting. At step 716, the method 700 may include dynamically updating each of the perspective views of the virtual meeting room based on the determined activation of the presentation mode. Further, upon determining the activation of presentation mode, each of the perspective views displays at least one of a document or a display screen projected by at least one participant of the plurality of participants.
[0092] Next, at step 718, the method 700 may include processing the different data streams to identify one or more events corresponding to the plurality of participants. The one or more events correspond to at least one of speaking action by the participants, a presentation by the participants, and typing action by the participants.
[0093] Lastly, at step 720, the method 700 may include providing access to the different data streams based on the one or more identified events. In an embodiment, the method may also include transmitting the data streams to participants and/or storing the data streams in a database.
[0094] Embodiments, as discussed above, are exemplary in nature and the method 700 may include any additional step or omit any of the above-mentioned steps to perform the desired objective of the present disclosure. Further, the steps of the method 700 may be performed in any suitable order and/or by any suitable component of the system 100 in order to achieve the desired advantages.
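The per-participant perspective views of steps 704 and 710, and the overview view of step 712, can be sketched compactly. This is a minimal illustration under assumed names (`build_streams`, the `"overview"` key): each participant's stream contains every representation except their own, while a separate overview stream for non-participant users contains everyone.

```python
def build_streams(participants):
    """Return one view (list of visible avatars) per participant as a separate
    data stream, plus an 'overview' stream for a non-participant user."""
    streams = {p: [q for q in participants if q != p] for p in participants}
    streams["overview"] = list(participants)  # overview shows all participants
    return streams

streams = build_streams(["H", "A", "B", "C"])
```

Each entry of `streams` would then be rendered and transmitted as its own data stream, as step 710 provides.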
[0095] Figures 8A-8C illustrate generation of a virtual meeting room with the selected positions of the plurality of participants, according to an embodiment of the present disclosure. Specifically, Fig. 8A illustrates a perspective view generated for the participant A, which indicates that the participant C may sit opposite the participant A, the participant B may sit on the left side of the participant A, and the host participant may sit on the right side of the participant A. Similarly, Figs. 8B and 8C illustrate perspective views generated for the participant B and the participant C, respectively.
[0096] Figures 9A-9C illustrate different perspective views of the virtual meeting room corresponding to the plurality of participants, according to an embodiment of the present disclosure. Figures 9A-9C illustrate the virtual representation and/or the avatars of the plurality of participants. Specifically, Fig. 9A illustrates a perspective view generated for the host participant, where the avatars of all the other participants A, B, and C are visible to the host participant. In illustrated Fig. 9A, the host participant is designated as the active participant, and the avatars of other participants A, B, and C are looking at the host participant. Similarly, Fig. 9B illustrates a perspective view generated for the participant A, when the participant A is designated as the active participant and the other participants Host, B, and C are looking at the participant A. Fig. 9C illustrates the perspective view generated for the host participant when the participant B is designated as the active participant and the other participants C and A are looking at the participant B.
[0097] Figures 10A-10C illustrate generation of a recorded data stream from different perspective views of the virtual meeting room, according to an embodiment of the present disclosure. Fig. 10A illustrates a recorded data stream generated from the perspective of the host participant. Fig. 10B illustrates a recorded data stream generated from the perspective of the participant A. Further, Fig. 10C illustrates a recorded data stream generated from the perspective of a user who has not participated in the virtual meeting. Accordingly, Fig. 10C illustrates all the participants of the virtual meeting during the discussion. The system 100 may also allow the participant to filter the stored data streams based on the events and/or timestamps. For example, filtering the stored data streams based on a timestamp may allow a participant to view the data stream exactly at a requested timestamp. In another embodiment, a summarized version of each data stream may be prepared by the processor/controller 202 which only includes brief durations around one or more timestamps associated with activities of the one or more avatars. The summarized version of the data stream may define various events of the virtual meeting with their corresponding timestamps. Examples of events of the virtual meeting may include, but are not limited to, when a particular participant spoke, when a presentation was initiated, when a recording of the meeting was initiated, when a participant joined the virtual meeting, and so forth. In some embodiments, the summarized version of the virtual meeting may be stored in a transcript format, and the processor/controller 202 may utilize techniques such as, but not limited to, speech-to-text conversion, natural language processing, and so forth, to generate such a summarized version of the data streams.
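The event-based filtering described above can be sketched as a query over a timestamped event index. The function name, event dictionary shape, and event-type strings are assumptions for illustration; the idea from the source is that each recorded stream carries (timestamp, event) entries so playback can be filtered by event type or jumped to a requested timestamp.

```python
def filter_events(events, event_type=None, from_timestamp=None):
    """Filter recorded-stream events by type and/or by a starting timestamp."""
    result = events
    if event_type is not None:
        result = [e for e in result if e["type"] == event_type]
    if from_timestamp is not None:
        result = [e for e in result if e["t"] >= from_timestamp]
    return result

# Hypothetical event index for one recorded meeting (timestamps in seconds).
events = [
    {"t": 0,   "type": "joined",       "who": "H"},
    {"t": 12,  "type": "spoke",        "who": "H"},
    {"t": 95,  "type": "presentation", "who": "B"},
    {"t": 130, "type": "spoke",        "who": "A"},
]
spoken = filter_events(events, event_type="spoke")
```

A summarized version of the stream could then be built from short durations around each matching event's timestamp.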
[0098] Figures 11A-11B illustrate the dynamic updating of the virtual meeting room based on a presentation mode, according to an embodiment of the present disclosure. In an embodiment, during the presentation mode, the system 100 may update the perspective view corresponding to each of the plurality of participants with a projected display. In an embodiment, the system 100 may also display the avatar of the participant who is projecting the display, as illustrated in Fig. 11B. Further, the system 100 may allow a user to toggle between the discussion mode, as shown in Fig. 11A, and the presentation mode, as shown in Fig. 11B.
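The discussion/presentation toggle can be sketched as follows. This is a small illustrative assumption, not the source's implementation: in presentation mode every perspective view shows the projected screen plus the presenter's avatar, while in discussion mode it shows the other participants' avatars.

```python
def render_view(viewer, participants, mode, presenter=None):
    """Build one participant's perspective view for the given mode."""
    if mode == "presentation":
        # Projected display plus the presenting participant's avatar (Fig. 11B).
        return {"screen": True, "avatars": [presenter]}
    # Discussion mode: the other participants' avatars, no screen (Fig. 11A).
    return {"screen": False,
            "avatars": [p for p in participants if p != viewer]}

participants = ["H", "A", "B", "C"]
presenting = render_view("A", participants, "presentation", presenter="B")
discussing = render_view("A", participants, "discussion")
```

Toggling between the two modes is then just a matter of re-rendering each view with the other `mode` value.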
[0099] Therefore, the present disclosure provides a simple, effective, and efficient technique for implementing virtual meetings which provides a real environment and feel to the participants. Further, the present disclosure provides an interactive and intuitive virtual reality environment during the virtual meeting without the need for extensive, complex, and costly software and hardware components.
[0100] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0101] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.

WE CLAIM:

1. A method (600) for managing a responsive virtual meeting room for a plurality of participants, the method (600) comprising:
receiving (602) a set of inputs from each of the plurality of participants, wherein the set of inputs comprises at least one of a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting;
providing (608) the virtual meeting room based on the received set of inputs from the plurality of participants;
monitoring (612) the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting; and
dynamically updating (616) the virtual meeting room based on the designated active participant, wherein dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting.
2. The method (600) as claimed in claim 1, comprising:
generating (606) the virtual meeting room based on the received set of inputs, wherein the received set of inputs comprises one or more characteristics of the virtual meeting, wherein the one or more characteristics of the virtual meeting comprises at least one of a date of meeting, a time of meeting, a number of participants, a selection of virtual meeting room, a selection of one or more virtual objects, and a selection of one or more virtual services.
3. The method (600) as claimed in claim 1, wherein the one or more characteristics of the avatars comprises at least one of facial expression, head position, body position, and viewing angle.
4. The method (600) as claimed in claim 1, comprising:
establishing (604) one or more data connection paths with each of the plurality of participants; and
designating (614) the at least one participant as the active participant based on real-time data transmission in the one or more data connection paths established with the plurality of participants.
5. The method (600) as claimed in claim 4, comprising:
generating (610) one or more actions of avatars corresponding to the participants during an initial establishment of the one or more data connection paths with the plurality of participants,
wherein the one or more actions comprise at least one of the one or more avatars entering the virtual meeting room, the avatar sitting in a selected position, and the avatar introducing the corresponding participant.
6. The method (600) as claimed in claim 5, comprising:
generating (622) one or more actions of avatars corresponding to the participants during release of the one or more data connection paths with the plurality of participants,
wherein the one or more actions comprise at least one of the one or more avatars leaving the selected position, the avatar leaving the virtual meeting room, and the avatar providing signing off comments.
7. The method (600) as claimed in claim 2, comprising:
receiving (618), from at least one participant, a request for at least one of the one or more virtual services in the virtual meeting room; and
dynamically updating (620) the virtual meeting room to display the requested at least one virtual service in the virtual meeting room.
8. A method (700) for managing a responsive virtual meeting room for a plurality of participants, the method (700) comprising:
receiving (702) a set of inputs from each of the plurality of participants, wherein the set of inputs comprises at least one of a selection of a virtual representation of the participants and a selection of participant position in a virtual meeting room corresponding to a virtual meeting;
generating (704) a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants, wherein the perspective view corresponding to a participant includes a virtual representation of all the participants other than the corresponding participant;
monitoring (706) the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting;
dynamically updating (708) each of the perspective views of the virtual meeting room based on the designated active participant; and
providing (710) each of the dynamically updated perspective views corresponding to each of the plurality of participants as a different data stream.
9. The method (700) as claimed in claim 8, comprising:
generating (712) an overview perspective view of the virtual meeting room for a non-participant user, wherein the overview perspective view of the virtual meeting room includes a virtual representation of each of the plurality of participants.
10. The method (700) as claimed in claim 9, comprising:
determining (714) an activation of presentation mode by at least one participant in the virtual meeting; and
dynamically updating (716) each of the perspective views of the virtual meeting room based on the determined activation of presentation mode.
11. The method (700) as claimed in claim 10, wherein upon determining the activation of presentation mode, each of the perspective views displays at least one of a document or a display screen projected by at least one participant of the plurality of participants.
12. The method (700) as claimed in claim 8, comprising:
processing (718) the different data streams to identify one or more events corresponding to the plurality of participants,
wherein the one or more events correspond to at least one of speaking action by the participants, presentation by the participants, and typing action by the participants.
13. The method (700) as claimed in claim 12, comprising:
providing (720) access to the different data streams based on the one or more identified events.
14. A system (100) for managing a responsive virtual meeting room for a plurality of participants, the system (100) comprising:
a memory (210); and
at least one processor (202) communicably coupled to the memory (210), wherein the at least one processor (202) is configured to:
receive a set of inputs from each of the plurality of participants, wherein the set of inputs comprises at least one of a selection of an avatar and a selection of participant position in a virtual meeting room corresponding to a virtual meeting;
provide the virtual meeting room based on the received set of inputs from the plurality of participants;
monitor the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting; and
dynamically update the virtual meeting room based on the designated active participant, wherein dynamically updating the virtual meeting room comprises at least one of updating one or more characteristics of the avatars corresponding to the plurality of participants or one or more characteristics corresponding to an environment of the virtual meeting.
15. The system (100) as claimed in claim 14, wherein the at least one processor (202) is configured to:
generate the virtual meeting room based on the received set of inputs, wherein the received set of inputs comprises one or more characteristics of the virtual meeting, wherein the one or more characteristics of the virtual meeting comprises at least one of a date of meeting, a time of meeting, a number of participants, a selection of virtual meeting room, a selection of one or more virtual objects, and a selection of one or more virtual services.
16. The system (100) as claimed in claim 14, wherein the one or more characteristics of the avatars comprises at least one of facial expression, head position, body position, and viewing angle.
17. The system (100) as claimed in claim 14, wherein the at least one processor (202) is configured to:
establish one or more data connection paths with each of the plurality of participants; and
designate the at least one participant as the active participant based on real-time data transmission in the one or more data connection paths established with the plurality of participants.
18. The system (100) as claimed in claim 17, wherein the at least one processor (202) is configured to:
generate one or more actions of avatars corresponding to the participants during an initial establishment of the one or more data connection paths with the plurality of participants,
wherein the one or more actions comprise at least one of the one or more avatars entering the virtual meeting room, the avatar sitting in a selected position, and the avatar introducing the corresponding participant.
19. The system (100) as claimed in claim 18, wherein the at least one processor (202) is configured to:
generate one or more actions of avatars corresponding to the participants during release of the one or more data connection paths with the plurality of participants,
wherein the one or more actions comprise at least one of the one or more avatars leaving the selected position, the avatar leaving the virtual meeting room, and the avatar providing signing off comments.
20. The system (100) as claimed in claim 15, wherein the at least one processor (202) is configured to:
receive, from at least one participant, a request for at least one of the one or more virtual services in the virtual meeting room; and
dynamically update the virtual meeting room to display the requested at least one virtual service in the virtual meeting room.
21. A system (100) for managing a responsive virtual meeting room for a plurality of participants, the system (100) comprising:
a memory (210); and
at least one processor (202) communicably coupled to the memory (210), wherein the at least one processor (202) is configured to:
receive a set of inputs from each of the plurality of participants, wherein the set of inputs comprises at least one of a selection of a virtual representation of the participants and a selection of participant position in a virtual meeting room corresponding to a virtual meeting;
generate a perspective view of the virtual meeting room corresponding to each participant based on at least one of the input from the set of inputs of each of the plurality of participants, wherein the perspective view corresponding to a participant includes a virtual representation of all the participants other than the corresponding participant;
monitor the virtual meeting to designate at least one participant as an active participant based at least on real-time data communication during the virtual meeting;
dynamically update each of the perspective views of the virtual meeting room based on the designated active participant; and
provide each of the dynamically updated perspective views corresponding to each of the plurality of participants as a different data stream.
22. The system (100) as claimed in claim 21, wherein the at least one processor (202) is configured to:
generate an overview perspective view of the virtual meeting room for a non-participant user, wherein the overview perspective view of the virtual meeting room includes a virtual representation of each of the plurality of participants.
23. The system (100) as claimed in claim 22, wherein the at least one processor (202) is configured to:
determine an activation of presentation mode by at least one participant in the virtual meeting; and
dynamically update each of the perspective views of the virtual meeting room based on the determined activation of presentation mode.
24. The system (100) as claimed in claim 23, wherein upon determining the activation of presentation mode, each of the perspective views displays at least one of a document or a display screen projected by at least one participant of the plurality of participants.
25. The system (100) as claimed in claim 21, wherein the at least one processor (202) is configured to:
process the different data streams to identify one or more events corresponding to the plurality of participants,
wherein the one or more events corresponds to at least one of speaking action by the participants, presentation by the participants, and typing action by the participants.
26. The system (100) as claimed in claim 25, wherein the at least one processor (202) is configured to:
provide access to the different data streams based on the one or more identified events.

Documents

Application Documents

# Name Date
1 202311000184-CLAIMS [04-07-2023(online)].pdf 2023-07-04
2 202311000184-TRANSLATIOIN OF PRIOIRTY DOCUMENTS ETC. [02-01-2023(online)].pdf 2023-01-02
3 202311000184-STATEMENT OF UNDERTAKING (FORM 3) [02-01-2023(online)].pdf 2023-01-02
4 202311000184-FER_SER_REPLY [04-07-2023(online)].pdf 2023-07-04
5 202311000184-POWER OF AUTHORITY [02-01-2023(online)].pdf 2023-01-02
6 202311000184-AMENDED DOCUMENTS [27-06-2023(online)].pdf 2023-06-27
7 202311000184-FORM 13 [27-06-2023(online)].pdf 2023-06-27
8 202311000184-FORM 1 [02-01-2023(online)].pdf 2023-01-02
9 202311000184-POA [27-06-2023(online)].pdf 2023-06-27
10 202311000184-DRAWINGS [02-01-2023(online)].pdf 2023-01-02
11 202311000184-Proof of Right [27-06-2023(online)].pdf 2023-06-27
12 202311000184-DECLARATION OF INVENTORSHIP (FORM 5) [02-01-2023(online)].pdf 2023-01-02
13 202311000184-RELEVANT DOCUMENTS [27-06-2023(online)].pdf 2023-06-27
14 202311000184-COMPLETE SPECIFICATION [02-01-2023(online)].pdf 2023-01-02
15 202311000184-FORM-9 [18-01-2023(online)].pdf 2023-01-18
16 202311000184-FER.pdf 2023-02-27
17 202311000184-FORM-8 [18-01-2023(online)].pdf 2023-01-18
18 202311000184-FORM 18 [18-01-2023(online)].pdf 2023-01-18
19 202311000184-Response to office action [02-05-2025(online)].pdf 2025-05-02
20 202311000184-US(14)-HearingNotice-(HearingDate-18-11-2025).pdf 2025-10-16
21 202311000184-Correspondence to notify the Controller [14-11-2025(online)].pdf 2025-11-14
22 202311000184-FORM-26 [17-11-2025(online)].pdf 2025-11-17
23 202311000184-US(14)-ExtendedHearingNotice-(HearingDate-27-11-2025)-1100.pdf 2025-11-20
24 202311000184-Correspondence to notify the Controller [21-11-2025(online)].pdf 2025-11-21

Search Strategy

1 Search316E_23-02-2023.pdf
2 202311000184_D4AE_02-02-2024.pdf