
A Computer Implemented System And Method For Audio Visual Collaboration And Interaction

Abstract: A computer implemented system for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants is envisaged. The system includes communication devices associated with the remotely located participants; these communication devices communicate with each other using a communication medium. The system accepts a collaboration request between a set of remotely located participants and organizes an audio visual session between the set of participants. A controller module present in the system initiates the session and controls communication between the communication devices during the session. A monitoring and feedback module monitors the communication and provides feedback to the controller module to enable seamless and self-corrective interaction between the plurality of participants. A polling module then polls the participants and accepts answers to a plurality of questions to evaluate the effectiveness of the interaction. Fig. 1


Patent Information

Filing Date: 29 September 2015
Publication Number: 13/2017
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Email: dewan@rkdewanmail.com
Grant Date: 2024-03-26

Applicants

TATA CONSULTANCY SERVICES LIMITED
Nirmal Building, 9th Floor, Nariman Point, Mumbai – 400 021, Maharashtra, India.

Inventors

1. JANI, Udeep Harivadan
TCS Compound, Banyan Park, Suren Road, Andheri (East), Mumbai – 400093, Maharashtra
2. MAVANI, Tushar
312, Welfare Chambers, Plot No. 73, Sector 17, Vashi, Navi Mumbai - 400703, Maharashtra
3. UPADHYA, Giriprasad Keshav
Unit-VI, No.78, 79& 83, L-Centre, EPIP Industrial Estate, Whitefield, Bangalore - 560066
4. KOLLIPARA, Sowjanya
Plot No 1, Survey No. 64/2, Software Units Layout, Serilingampally Mandal, Madhapur, Hyderabad – 500034, Telangana
5. NARAYANA, Magesh Babu
Plot no. 83, Road no. 3, Mallikarjun Nagar, Old Bowenpally, Hyderabad – 500011, Telangana
6. PADATHUNATTIL, Sreerudran
Peepul Park, Technopark Campus, Kariyavattom P.O, Trivandrum – 695581, Kerala
7. JAIN, Amit
Vidyasagar Building, 1st Floor, Raheja Township, Near Sai Baba Mandir, Malad(E), Mumbai - 400 097, Maharashtra
8. KUMAR, Alok
Vidyasagar Building, 1st Floor, Raheja Township, Near Sai Baba Mandir, Malad(E), Mumbai - 400 097, Maharashtra

Specification

Claims:

1. A system for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants, said system comprising:
a memory configured to store a set of rules;
a processor configured to cooperate with the memory to receive the set of rules and generate a set of commands based on said rules;
a server operatively coupled to a communication medium and accessible to a plurality of communication devices, wherein said communication medium is configured to cooperate with the plurality of communication devices to provide communication paths between said communication devices;
said plurality of communication devices associated with said plurality of participants, wherein each communication device comprises:
at least one video camera having participant video capture capabilities, wherein said video camera is configured to focus and capture videos based on participants' activities,
at least one microphone having participant audio capture capabilities,
at least one loudspeaker having audio reproduction capabilities, and
at least one display screen to display captured videos;
wherein said server comprises:
an input module configured to accept a collaboration request between a set of remotely located participants;
an organizer module configured to cooperate with the input module to receive the collaboration request and organize an audio visual session between said set of participants;
a controller module configured to cooperate with the organizer module to initiate said session and further configured to control communication, between said communication devices associated with said set of participants, through said communication medium during said session;
a monitoring and feedback module configured to monitor said communication and provide a feedback to said controller module to enable seamless and self-corrective interaction between said plurality of participants; and
a polling module configured to poll said plurality of participants and accept answers to a plurality of questions to evaluate effectiveness of said interaction.

2. The system as claimed in claim 1, wherein said at least one video camera is configured to focus on participants based on participants' activities selected from a group of activities comprising speaking, emoting, moving, gesturing and the like.

3. The system as claimed in claim 1, wherein said system includes an annotation module configured to accept and display annotations on the display screen during an on-going session.

4. The system as claimed in claim 1, wherein said captured videos are displayed on said display screen in real-time.

5. The system as claimed in claim 1, wherein said video camera is further configured to capture videos and store the captured videos in a repository.

6. The system as claimed in claim 1, wherein said system includes a record and store module configured to record and store on-going sessions.

7. The system as claimed in claim 1, wherein said polling module is further configured to graphically display evaluated results on the display screen.

8. The system as claimed in claim 1, wherein said system is equipped with a push-button switch that causes the video camera to swing and focus on a participant pressing the push-button switch.

9. The system as claimed in claim 1, wherein said at least one video camera is placed on a mounting such that it is controllably angularly displaceable.

10. The system as claimed in claim 1, wherein said at least one video camera is fitted with at least one controller that is configured to enable said video camera to swing in response to a pressed push-button switch.

11. The system as claimed in claim 1, wherein said system includes at least one controller configured to link and control all remotely located video cameras associated with each group of participants.

12. The system as claimed in claim 1, wherein said system provides audio visual and video conferencing facilities to the remotely located participants.

13. The system as claimed in claim 1, wherein said system includes monitoring consoles and troubleshooting techniques for providing seamless and self-corrective interaction.

14. The system as claimed in claim 1, wherein said polling module pre-stores the plurality of questions.

15. The system as claimed in claim 1, wherein said plurality of questions are created by participants for polling.

16. The system as claimed in claim 1, wherein said system includes a plurality of all-in-one devices configured to provide controlling functions during the collaboration and interaction.

17. The system as claimed in claim 3, wherein said annotation module is configured to store said annotations.

18. A method for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants, said method comprising the following:
associating a plurality of communication devices with said plurality of participants, wherein each communication device performs steps of:
capturing participants' videos based on participants' activities using at least one video camera,
capturing participants' audios using at least one microphone,
reproducing captured audios using at least one loudspeaker, and
displaying captured videos using at least one display screen;
providing communication paths between said communication devices to transmit and receive participants audios and videos;
accepting a collaboration request between a set of remotely located participants;
organizing an audio visual session between said set of participants based on the accepted collaboration request;
initiating said session and controlling, during said session, communication between said communication devices associated with said set of participants, through said communication paths;
monitoring said communication and providing a feedback for enabling seamless and self-corrective interaction between said plurality of participants; and
polling said plurality of participants and accepting answers to a plurality of questions for evaluating effectiveness of said interaction.

19. The method as claimed in claim 18, wherein said method includes a step of focusing said at least one video camera on participants based on participants' activities selected from a group of activities comprising speaking, emoting, moving, gesturing and the like.

20. The method as claimed in claim 18, wherein said method includes steps of accepting and displaying annotations on the display screen during an on-going session, said method further including a step of storing said annotations.

21. The method as claimed in claim 18, wherein said method includes a step of capturing videos and displaying the captured videos on said display screen in real-time.

22. The method as claimed in claim 18, wherein said method includes a step of storing the captured videos in a repository.

23. The method as claimed in claim 18, wherein said method further includes steps of recording and storing on-going sessions.

24. The method as claimed in claim 18, wherein said method further includes a step of graphically displaying evaluated results on the display screen.

25. The method as claimed in claim 18, wherein said method includes steps for causing the video camera to swing and focus on a participant.

26. The method as claimed in claim 18, wherein said method includes steps of linking and controlling all remotely located video cameras associated with each group of participants with help of at least one controller.

27. The method as claimed in claim 18, wherein said method facilitates audio visual and video conferencing between the remotely located participants.

28. The method as claimed in claim 18, wherein said method includes a step of pre-storing the plurality of questions for polling.

29. The method as claimed in claim 18, wherein said method includes a step of creating, by participants, a plurality of questions for polling.

30. The method as claimed in claim 18, wherein said method includes a step of providing controlling functions during the collaboration and interaction.

Description:

TECHNICAL FIELD
The present disclosure relates to the field of collaboration between remotely located participants.
BACKGROUND
To obtain efficient workflows and to enable effective communication between participants located locally or remotely, high emphasis is usually placed on collaboration techniques. There are multiple state-of-the-art techniques that provide voice, data and video communications, which are used to complement in-person meetings; nowadays, such techniques are increasingly replacing in-person meetings. Currently, various collaboration techniques exist for corporate, social and educational environments. Typically, in all these environments, video communication techniques are preferred over text and audio communication. However, many services, such as distance learning and audio visual conferencing, require a system that combines audio, video and textual information. It is necessary that such a system provide effective interaction between remotely located participants. Considering this, there is a need for a system that provides distance learning to participants in the form of real-time and recorded videos, allows audio visual collaboration, and also polls the participants to evaluate the success of the sessions.
To achieve the aforementioned requirements, there is a need for a system that seamlessly integrates audio, video and textual information effectively.
OBJECTS
Some of the objects of the present disclosure, which are aimed at ameliorating one or more problems of the prior art or at least providing a useful alternative, are described herein below:
An object of the present disclosure is to provide a system for audio visual collaboration and interaction between a plurality of remotely located participants.
Another object of the present disclosure is to provide a system that is seamless, self-corrective and lossless.
A further object of the present disclosure is to provide a system for real-time audio visual collaboration.
Yet another object of the system of the present disclosure is to provide a single collaboration platform for multiple applications including audio visual meetings, virtual learning, online conferences and the like.
Still another object of the present disclosure is to provide a system that facilitates live streaming, recording and playback in a training environment.
One more object of the present disclosure is to provide a system that facilitates live monitoring of sessions and real-time assistance to participants.
Other objects and advantages of the present disclosure will be more apparent from the following description when read in conjunction with the accompanying figures, which are not intended to limit the scope of the present disclosure.

SUMMARY
The present disclosure relates to a computer implemented system for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants. In an embodiment, the system comprises a memory configured to store a set of rules, a processor configured to cooperate with the memory to receive the set of rules and generate a set of commands based on the rules, and a server operatively coupled to a communication medium and accessible to a plurality of communication devices, wherein the communication medium is configured to cooperate with the plurality of communication devices to provide communication paths between the communication devices. The plurality of communication devices are associated with the plurality of participants, wherein each communication device comprises at least one video camera having participant video capture capabilities, at least one microphone having participant audio capture capabilities, at least one loudspeaker having audio reproduction capabilities, and at least one display screen to display captured videos. The server comprises an input module which is configured to accept a collaboration request between a set of remotely located participants. The server also includes an organizer module which is configured to cooperate with the input module to receive the collaboration request and organize an audio visual session between the set of participants. A controller module present in the server is configured to cooperate with the organizer module to initiate the session and further to control communication, between the communication devices associated with the set of participants, through the communication medium during the session. The server further comprises a monitoring and feedback module which is configured to monitor the communication and provide a feedback to the controller module to enable seamless and self-corrective interaction between the plurality of participants. 
A polling module is also present in the server to poll the plurality of participants and accept answers to a plurality of questions to evaluate effectiveness of the interaction.
This summary is provided to introduce concepts related to real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants, which is further described below in the detailed description. This summary is neither intended to identify all the essential features of the present disclosure nor is it intended for use in determining or limiting the scope of the present disclosure.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
A computer implemented system and method of the present disclosure for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants, will now be described with the help of accompanying drawings, in which:
Figure 1 illustrates a schematic of an embodiment of the system for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction;
Figure 2 illustrates an exemplary implementation of the system of Figure 1; and
Figure 3 illustrates a method for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction.

DETAILED DESCRIPTION
A preferred embodiment of the present disclosure will now be described in detail with reference to the accompanying drawings. The preferred embodiment does not limit the scope and ambit of the disclosure. The description provided is purely by way of example and illustration.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
According to an implementation, the present subject matter discloses a computer implemented system for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants. The system includes a server that is coupled to a communication medium and accessible to the remotely located participants and a repository. The repository is coupled to the server and is configured to store captured audios, videos and sessions.

Figure 1 illustrates a schematic of a system 100 for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants. The system 100 can be implemented as a variety of communication devices, such as laptops, computers, notebooks, workstations, mainframe computers, servers and the like. In one embodiment, the communication devices wirelessly share data with each other and/or with the system 100. The system 100 described herein, can also be implemented in any network environment comprising a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.

In one implementation, the system 100 includes one or more client devices 104-1, 104-2…104-N, individually and collectively hereinafter referred to as communication device(s) 104, a server 102 and a database 108. The server 102 may be operatively coupled to a communication medium 106 and accessible to the communication devices 104. The communication devices 104 may be implemented as, but are not limited to, hand-held devices, laptops or other portable computers, tablet computers, mobile phones, personal digital assistants (PDAs), smartphones, and the like. The communication devices 104 may be located within the vicinity of the network-based system 100 or at a different geographic location from the network-based system 100. Further, the communication devices 104 may themselves be located either within the vicinity of each other, or at different geographic locations. Each of the communication devices 104 may include at least one video camera 104a having participant video capture capabilities, wherein the video camera 104a may be configured to focus and capture videos based on participants' activities; at least one microphone 104b having participant audio capture capabilities; at least one loudspeaker 104c having audio reproduction capabilities; and at least one display screen 104d to display captured videos. The video camera 104a may be configured to focus on participants based on participants' activities selected from a group of activities comprising speaking, emoting, moving, gesturing and the like. Further, the video camera 104a may be configured to capture videos and store the captured videos in a repository. These captured videos may be displayed on the display screen 104d in real-time.
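The per-device components described above can be sketched as a simple data structure. This is purely an illustrative sketch; the class, field and method names are invented for this example and do not appear in the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunicationDevice:
    """One participant endpoint: camera(s), microphone(s), loudspeaker(s), display(s)."""
    device_id: str
    cameras: int = 1        # at least one video camera (104a)
    microphones: int = 1    # at least one microphone (104b)
    loudspeakers: int = 1   # at least one loudspeaker (104c)
    displays: int = 1       # at least one display screen (104d)
    captured_videos: List[str] = field(default_factory=list)

    def capture_video(self, activity: str) -> str:
        """Camera focuses and captures based on a participant activity
        (e.g. speaking, emoting, moving, gesturing)."""
        clip = f"{self.device_id}:{activity}"
        self.captured_videos.append(clip)  # kept, as if stored in a repository
        return clip                        # shown on the display in real time

device = CommunicationDevice("104-1")
print(device.capture_video("speaking"))  # -> 104-1:speaking
```

The `captured_videos` list stands in for the repository mentioned in the text; a real implementation would persist media externally rather than in memory.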

In one embodiment, the microphone 104b of the system 100 is equipped with a push-button switch (not specifically shown) which, when pressed, triggers a circuit that transmits a signal to a receiver (not specifically shown) coupled to the video camera 104a, causing the video camera to swing and focus on the participant pressing the push-button switch. This works on a first-come-first-served basis, such that the video camera 104a focusses on the participant who presses the button first. For this purpose, the video camera 104a is placed on a mounting (not specifically shown) such that it is controllably angularly displaceable, and is fitted with a controller (not specifically shown) that enables the video camera 104a to swing in response to signals received from the circuit coupled to the push-button switch associated with each participant in a session. This controller may require calibration at the start of each session depending on the number of participants. Also, a single controller may be provided in the system 100 that could link and control all the remotely located video cameras associated with each group of participants.
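The first-come-first-served arbitration of the push-button camera control can be sketched as follows; the class and method names are illustrative assumptions, not part of the disclosure.

```python
from typing import Optional

class CameraController:
    """First-come-first-served camera swing: the camera focuses on the
    participant whose push-button signal arrives first, and ignores later
    presses until the focus is released."""

    def __init__(self) -> None:
        self.focused_on: Optional[str] = None

    def button_pressed(self, participant: str) -> str:
        if self.focused_on is None:
            self.focused_on = participant  # swing the camera to this seat
        return self.focused_on             # later presses do not steal focus

    def release(self) -> None:
        """Free the camera, e.g. when the current speaker finishes."""
        self.focused_on = None

ctrl = CameraController()
print(ctrl.button_pressed("Alice"))  # -> Alice
print(ctrl.button_pressed("Bob"))    # -> Alice (first come, first served)
```

A hardware version would replace `button_pressed` with an interrupt handler on the switch circuit, but the arbitration logic would be the same.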

The communication medium 106 may be a wireless or a wired network, or a combination thereof. The communication medium 106 can be a collection of individual networks, interconnected with each other and functioning as a single large network (e.g., the internet or an intranet). The communication medium 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and such. The communication medium 106 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), etc., to communicate with each other.

The database 108 may be implemented as, but is not limited to, an enterprise database, a remote database, a local database, and the like. The database 108 may be located within the vicinity of the system 100 and the communication devices 104, or at a different geographic location from the system 100 and the communication devices 104. Further, where multiple databases 108 are provided, they may be located either within the vicinity of each other or at different geographic locations. Furthermore, the database 108 may be implemented inside the communication devices 104 or inside the system 100, and may be implemented as a single database.

In one implementation, the server 102 includes processor(s) 112. The processor 112 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in a memory.

The functions of the various elements shown in the figure, including any functional blocks labeled as “processor(s)”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate hardware. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), non-volatile storage. Other hardware, conventional and/or custom, may also be included.

Also, the server 102 includes interface(s) 110. The interfaces 110 may include a variety of software and hardware interfaces that allow the server 102 to interact with the entities of the communication medium 106, or with each other. The interfaces 110 may facilitate multiple communications within a wide variety of networks and protocol types, including wire networks, for example, LAN, cable, etc., and wireless networks, for example, WLAN, cellular, satellite-based network, etc.

The server 102 may also include a memory 114. The memory 114 may be coupled to the processor 112. The memory 114 can include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.

Further, the server 102 may include module(s) 116 and data 118. The modules 116 may be coupled to the processors 112 and amongst other things, include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The modules 116 may also be implemented as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulate signals based on operational instructions. Further, the modules 116 can be implemented in hardware, instructions executed by a processing unit / processor, or by a combination thereof. In another aspect of the present subject matter, the modules 116 may be machine-readable instructions (software) which, when executed by a processor/processing unit, perform any of the described functionalities.

In an implementation, the modules 116 may include an input module 120, an organizer module 122, a controller module 124, a monitoring and feedback module 126, a polling module 128, an annotation module 130, a record and store module 132 and other module(s) 134. The other module(s) 134 may include programs or coded instructions that supplement applications or functions performed by the server 102. Further, the data 118 may include input data 136, and other data 138. The other data 138, amongst other things, may serve as a repository for storing data that is processed, received, or generated as a result of the execution of one or more modules in the modules 116. Although the data 118 is shown internal to the server 102, it may be understood that the data 118 can reside in an external repository, which may be coupled to the server 102.

In one implementation, the system 100 provides real-time, seamless, self-corrective, lossless audio visual collaboration and interaction between a plurality of remotely located participants. The system 100, in one embodiment, is a single core collaboration platform that provides flexible audio visual and video conferencing (AV-VC) facilities to the remotely located participants along with better audio, video and data quality. The system 100 comprises the server 102, and the server 102 may include the input module 120 to accept a collaboration request between a set of remotely located participants. The input module 120 is configured to cooperate with the processor 112 to receive the set of commands from the processor 112, and is further configured to collect collaboration requests from participants.

According to the present implementation, the organizer module 122 is configured to cooperate with the input module 120 to receive the collaboration request and organize an audio visual session between the set of participants. On organization of the session, the controller module 124 is configured to cooperate with the organizer module 122 to initiate the session and further configured to control communication, between the communication devices 104 associated with the set of participants, through the communication medium 106. The monitoring and feedback module 126 is then configured to monitor the communication and provide a feedback to the controller module 124 to enable seamless and self-corrective interaction between the plurality of participants. In one embodiment, monitoring consoles and troubleshooting techniques are used by the system 100 to provide seamless and self-corrective interaction. The polling module 128 is configured to poll the plurality of participants and accept answers to a plurality of questions to evaluate effectiveness of the interaction. In an embodiment, a participant creates polls for other participants through the system 100 irrespective of time and work space. In another embodiment, the polling questions are pre-stored and provided to the participants. In one implementation, the polling module 128 is further configured to graphically display evaluated results on the display screen 104d.
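The module pipeline just described (input, organizer, controller with monitoring feedback, polling) can be sketched end to end. All names, the quality metric and its thresholds are invented for illustration; the disclosure does not prescribe any of them.

```python
def run_collaboration_session(participants, questions):
    """Sketch: accept request -> organize session -> self-corrective control
    loop -> poll participants to evaluate effectiveness."""
    # Input module (120): accept a collaboration request between participants
    request = {"participants": list(participants)}
    # Organizer module (122): organize an audio visual session for the request
    session = {"participants": request["participants"], "active": True}
    # Controller (124) + monitoring and feedback (126): the feedback loop
    # adjusts the link until quality is acceptable (self-correction)
    quality = 50                  # illustrative link-quality percentage
    while quality < 90:
        quality += 20             # e.g. renegotiate bitrate or codec
    session["quality"] = quality
    # Polling module (128): blank answer sheet, one slot per question/participant
    answers = {q: {p: None for p in participants} for q in questions}
    return session, answers
```

The while loop stands in for the monitoring-and-feedback cycle; a real controller would react to measured network metrics rather than a counter.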

The server 102 may also have an annotation module 130 configured to accept and display annotations on the display screen 104d during an on-going session. This annotation module 130 provides seamless annotation during ongoing sessions. In one embodiment, the annotation module 130 enables participants to write, draw, annotate and mark the display screen 104d or electronic files/images or whiteboards during the sessions. These annotations may be stored for future reference. The server 102 may further include a record and store module 132 configured to record and store on-going sessions so that the sessions can be played back later or streamed at the participants' convenience.
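A minimal sketch of the annotation module's accept/display/store behaviour follows; the class name and entry format are assumptions made for this example only.

```python
class AnnotationModule:
    """Accepts annotations during an on-going session, echoes them to the
    shared display, and stores them for future reference (module 130)."""

    def __init__(self) -> None:
        self.stored = []  # annotations kept for playback / later reference

    def annotate(self, participant: str, mark: str) -> str:
        entry = f"{participant}: {mark}"
        self.stored.append(entry)  # persisted for future reference
        return entry               # what every display screen would show

ann = AnnotationModule()
print(ann.annotate("trainer", "circle on slide 3"))  # -> trainer: circle on slide 3
```

In a deployed system the stored entries would carry timestamps and target coordinates on the shared file or whiteboard; plain strings keep the sketch readable.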

The system 100 of the present disclosure provides an immersive 3D/holographic video transmission collaboration that may span geographic and time boundaries. It also facilitates wireless screen sharing, wherein the participants can use touch, voice and gestures for audio visual communication. The system 100 further may include all-in-one (AIO) devices configured to provide controlling functions during the collaboration and interaction. In one embodiment, these AIO devices have embedded multi-touch capability with annotation, public addressing, polling, chats and the like. Thus, the present disclosure envisages a touch, voice and gesture based control system which also facilitates automatic voice translation to participants' regional/local languages.

In one embodiment, the remotely located participants use the system 100 to connect wired/wirelessly for collaborating and interacting with each other using annotation, data sharing, polling and chatting features provided by the system.

Figure 2 of the accompanying drawings illustrates an exemplary implementation of the system of Figure 1. In one embodiment, the system 200 comprises a process developing module 202, a product building module 204 and a managed services module 206. The process developing module includes a plurality of modules, such as a planning module 202a for planning various processes, a design module 202b for designing various processes, a project management module 202c for creating a set of rules to manage various projects, and other process developing modules 202n for developing processes. The product building module 204 comprises a plurality of individual collaboration products, including an audio-video collaboration module 204a, a teaching module 204b and other product building modules 204m, where each collaboration product caters to specific collaboration needs. The audio-video collaboration module 204a provides life-like, immersive audio-video collaboration with seamless data sharing. This may be used for meetings, client connect, reviews and the like. The teaching module 204b enables life-like teaching and collaboration with various combinations of trainer and trainee locations. This may be used for distance education, online conferences, and the like. The managed services module 206 continuously improves the system 200 based on users' feedback to provide enhanced products along with new, purpose-built products. It provides products and services that are tightly integrated within the system 200 for a simple and uniform user interface. It includes modules such as an operations module 206a, a booking module 206b and other managed services modules 206o. The operations module 206a facilitates monitoring and troubleshooting of collaboration services to ensure quality assurance and real-time assistance to end-users. It includes infrastructure management, live monitoring and face-to-face support from a helpdesk at a central location.
The booking module 206b facilitates meeting-room and bridge reservations integrated with a corporate messaging platform. It allows users to receive confirmation of the meeting-room booking and the audio-video bridge details from a fully automated system.
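The module hierarchy of Figure 2 can be sketched as a simple composition of named components. The class and attribute names below are illustrative labels derived from the reference numerals above; they are an assumption for exposition, not code disclosed in the specification.

```python
# Hypothetical sketch of the Figure 2 module hierarchy. Names mirror
# reference numerals 202-206; this is illustrative only.

class Module:
    """Every module in the sketch has a name and a short purpose."""
    def __init__(self, name, purpose):
        self.name = name
        self.purpose = purpose

class System200:
    """Top-level system composed of the three Figure 2 branches."""
    def __init__(self):
        self.process_developing = [                      # 202
            Module("planning_202a", "plan processes"),
            Module("design_202b", "design processes"),
            Module("project_management_202c", "rules to manage projects"),
        ]
        self.product_building = [                        # 204
            Module("av_collaboration_204a", "immersive AV collaboration"),
            Module("teaching_204b", "distance education and conferences"),
        ]
        self.managed_services = [                        # 206
            Module("operations_206a", "monitoring and troubleshooting"),
            Module("booking_206b", "room and bridge reservations"),
        ]

    def all_modules(self):
        # Flatten the three branches into one list of components.
        return (self.process_developing
                + self.product_building
                + self.managed_services)

system = System200()
print(len(system.all_modules()))  # 7 modules in this sketch
```

The point of the sketch is only that each branch aggregates independent sub-modules behind one system object, matching the "simple and uniform user interface" the description attributes to the managed services integration.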

Figure 3 illustrates a method for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction, according to an implementation of the present disclosure. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.

The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300, or an alternative method. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof. In an example, the method 300 may be implemented in a computing system, such as a system 100 for real-time, seamless, self-corrective, lossless audio visual collaboration and interaction.

Referring to method 300, block 302 illustrates associating a plurality of communication devices with the plurality of participants. In an implementation, each communication device captures a participant's video, based on the participant's activities, using at least one video camera; captures the participant's audio using at least one microphone; reproduces captured audio using at least one loudspeaker; and displays captured video using at least one display screen.
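The endpoint described in block 302 can be sketched as a device object that pairs a participant with capture and playback functions. The class name, method names, and the dictionary-based stream representation below are hypothetical, chosen only to illustrate the camera/microphone/loudspeaker/display roles named above.

```python
class CommunicationDevice:
    """Hypothetical endpoint: camera, microphone, loudspeaker, display."""

    def __init__(self, participant):
        self.participant = participant

    def capture_video(self, activity):
        # At least one video camera captures video based on the
        # participant's activity (block 302).
        return {"from": self.participant, "kind": "video", "activity": activity}

    def capture_audio(self):
        # At least one microphone captures the participant's audio.
        return {"from": self.participant, "kind": "audio"}

    def reproduce(self, stream):
        # Loudspeaker for audio streams, display screen for video streams.
        sink = "loudspeaker" if stream["kind"] == "audio" else "display"
        return f"{sink} playing {stream['kind']} from {stream['from']}"

alice = CommunicationDevice("Alice")
frame = alice.capture_video("presenting")
print(CommunicationDevice("Bob").reproduce(frame))  # prints "display playing video from Alice"
```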

Block 304 illustrates providing communication paths between the communication devices to transmit and receive participants' audios and videos. In one implementation, the communication medium 106 provides communication paths between the communication devices.

Block 306 illustrates accepting a collaboration request between a set of remotely located participants. In one implementation, the input module 120 accepts a collaboration request between a set of remotely located participants.

Block 308 illustrates organizing an audio visual session between the set of participants based on the accepted collaboration request. In one implementation, the organizer module 122 organizes an audio visual session between the set of participants based on the accepted collaboration request.

Block 310 illustrates initiating the session and controlling, during the session, communication between the communication devices associated with the set of participants, through the communication paths. In one implementation, the controller module 124 initiates the session and controls, during the session, communication between the communication devices associated with the set of participants, through the communication paths.

Block 312 illustrates monitoring the communication and providing a feedback for enabling seamless and self-corrective interaction between the plurality of participants. In one implementation, the monitoring and feedback module 126 monitors the communication and provides a feedback for enabling seamless and self-corrective interaction between the plurality of participants.

Block 314 illustrates polling the plurality of participants and accepting answers to a plurality of questions for evaluating effectiveness of the interaction. In one implementation, the polling module 128 polls the plurality of participants and accepts answers to a plurality of questions for evaluating effectiveness of the interaction.
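Blocks 306 through 314 can be read as a single control loop: organize a session, monitor it and apply corrections while it runs, then poll the participants. The sketch below makes that loop concrete; the quality metric, the 0.8 degradation threshold, and the correction counter are assumptions for illustration, not details taken from the disclosure.

```python
def run_session(participants, questions, quality_samples):
    """Hypothetical driver for blocks 306-314 of method 300."""
    # Blocks 306-308: accept the collaboration request and organize
    # a session between the set of participants.
    session = {"participants": list(participants), "corrections": 0}

    # Block 310: the session runs, producing one quality sample per
    # monitoring interval. Block 312: when monitored quality degrades
    # below an assumed threshold, feed a corrective action back in.
    for quality in quality_samples:
        if quality < 0.8:                 # assumed degradation threshold
            session["corrections"] += 1   # self-corrective action

    # Block 314: poll every participant on every question; answers are
    # collected here (None until a participant responds).
    session["answers"] = {p: {q: None for q in questions}
                          for p in participants}
    return session

s = run_session(["Alice", "Bob"], ["Was audio clear?"], [0.9, 0.7, 0.95])
print(s["corrections"])  # prints 1 (one correction, for the 0.7 sample)
```

The separation mirrors the module split in the description: the loop body corresponds to the controller module 124 and the monitoring and feedback module 126, while the final dictionary corresponds to the polling module 128.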

The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.

The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

Documents

Orders


Application Documents

# Name Date
1 Form 3 [29-09-2015(online)].pdf 2015-09-29
2 Form 20 [29-09-2015(online)].pdf 2015-09-29
3 Form 18 [29-09-2015(online)].pdf 2015-09-29
4 Drawing [29-09-2015(online)].pdf 2015-09-29
5 Description(Complete) [29-09-2015(online)].pdf 2015-09-29
6 ABSTRACT1.jpg 2018-08-11
7 3690-MUM-2015-Form 1-091015.pdf 2018-08-11
8 3690-MUM-2015-Power of Attorney-231115.pdf 2018-08-11
9 3690-MUM-2015-Correspondence-091015.pdf 2018-08-11
10 3690-MUM-2015-Correspondence-231115.pdf 2018-08-11
11 3690-MUM-2015-FER.pdf 2020-01-29
12 3690-MUM-2015-ABSTRACT [27-04-2020(online)].pdf 2020-04-27
13 3690-MUM-2015-COMPLETE SPECIFICATION [27-04-2020(online)].pdf 2020-04-27
14 3690-MUM-2015-FER_SER_REPLY [27-04-2020(online)].pdf 2020-04-27
15 3690-MUM-2015-OTHERS [27-04-2020(online)].pdf 2020-04-27
16 3690-MUM-2015-US(14)-HearingNotice-(HearingDate-19-01-2024).pdf 2023-12-26
17 3690-MUM-2015-Correspondence to notify the Controller [18-01-2024(online)].pdf 2024-01-18
18 3690-MUM-2015-FORM-26 [18-01-2024(online)].pdf 2024-01-18
19 3690-MUM-2015-Proof of Right [18-01-2024(online)].pdf 2024-01-18
20 3690-MUM-2015-Written submissions and relevant documents [05-02-2024(online)].pdf 2024-02-05
21 3690-MUM-2015-IntimationOfGrant26-03-2024.pdf 2024-03-26
22 3690-MUM-2015-PatentCertificate26-03-2024.pdf 2024-03-26

Search Strategy

1 TPOSearch_17-01-2020.pdf

ERegister / Renewals

3rd: 30 Mar 2024 (from 29/09/2017 to 29/09/2018)
4th: 30 Mar 2024 (from 29/09/2018 to 29/09/2019)
5th: 30 Mar 2024 (from 29/09/2019 to 29/09/2020)
6th: 30 Mar 2024 (from 29/09/2020 to 29/09/2021)
7th: 30 Mar 2024 (from 29/09/2021 to 29/09/2022)
8th: 30 Mar 2024 (from 29/09/2022 to 29/09/2023)
9th: 30 Mar 2024 (from 29/09/2023 to 29/09/2024)
10th: 30 Mar 2024 (from 29/09/2024 to 29/09/2025)
11th: 25 Sep 2025 (from 29/09/2025 to 29/09/2026)