Abstract: The present disclosure relates to a system for augmented reality-based learning. The system includes an image capturing device (108) configured to obtain one or more images of a physical scene, the physical scene including a plurality of elements configured for interaction with each other. A processor (104) is configured to: receive, from the image capturing device, the one or more images of the physical scene; analyse the received images to track a position of the plurality of elements; analyse, upon interaction of any two or more of the plurality of elements, a resultant attribute of interaction of the corresponding two or more vector attributes; and render, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene. The processor is further configured to display, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the two or more elements.
[0001] The present disclosure relates, in general, to augmented reality and specifically, relates to an augmented reality-based learning assistant to provide education in an interactive way.
BACKGROUND
[0002] Current educational patterns in most countries do not induce much interest amongst students. Moreover, the availability of education for uniquely gifted children is even rarer. Creating an interest in education for all remains one of the major concerns in educating children. A deficiency in the retention of essential concepts is apparent not only locally but also on a national scale. Decreased retention results from a lack of active learning, that is, instructional methods that require students to carry out meaningful learning activities and think about what they are doing, in contrast to traditional lecture methods in which students passively receive information.
[0003] Studies have shown that active learning is crucial for retaining information and developing higher-order thinking skills. Concurrently, according to Dale's Cone of Learning, retention does not occur without education that involves speaking, listening, reading, writing, and/or reflecting. Therefore, the need to address retention issues is apparent. There is also a need for applications that help learners understand the abstract knowledge behind difficult concepts.
[0004] The advance of digital technologies has seen the proliferation of various types of mobile terminals capable of communicating and processing information. The e-book is also capable of being stored in a portable device, giving the user the flexibility to read the e-book while doing other things, such as listening to music. Although books with three-dimensional (3D) graphics are known in the art, there is a lack of technology for allowing the user's participation and interaction with the books, and for satisfying various user requirements. In the market, there are various types of simulators available, but they are not cost-effective, have limited internet connectivity, and lack instruction-based designs.
[0005] Therefore, there is a need in the industry to provide a cost-effective device that can provide education in an interactive way.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] An object of the present disclosure is to provide an augmented reality-based learning assistant that provides education in an interactive way.
[0007] Another object of the present disclosure is to provide a system that can enable young children to read books in more interactive and realistic ways by superimposing three-dimensional (3D) rendered models onto books.
[0008] Another object of the present disclosure is to provide a cost-effective and lightweight system that operates reliably to produce the desired result.
[0009] Yet another object of the present disclosure is to provide a system that supports sustainable development and is more intuitive and user-friendly to a user.
SUMMARY
[0010] The present disclosure relates in general, to augmented reality and specifically, relates to an augmented reality-based learning assistant used to provide education in an interactive way.
[0011] In an aspect, the present disclosure provides a system for augmented reality-based learning, the system including: an image capturing device configured to obtain one or more images of a physical scene, the physical scene including one or more elements configured for interaction with each other, each of the elements provided with a position reference in any of x, y, and z axes; a processor operatively coupled with a memory, the memory storing instructions executable by the processor to: receive, from the image capturing device, one or more images of the physical scene; analyse the received one or more images to track a position of the one or more elements to extract a vector attribute of the elements; analyse, upon interaction of any two or more of the elements, a resultant attribute of interaction of the corresponding any two or more vector attributes; and render, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene; wherein, based on the extracted vector attributes of the elements, and the extracted resultant attribute of the any two or more interacting elements, the processor is configured to display, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the any two or more elements.
[0012] In an embodiment, the one or more elements include markers.
[0013] In another embodiment, the markers include vector attributes such as an origin, a vector A, a vector B, a resultant vector, and any combination thereof.
[0014] In another embodiment, the interaction of any two or more of the elements is controlled by moving the vector attributes in any direction.
[0015] In another embodiment, a power source can be installed to supply power to a user device.
[0016] In another embodiment, a Peltier unit can be configured for cooling objects or maintaining objects at a specific temperature.
[0017] In another embodiment, the image capturing device can be installed in a movable table-top stand.
[0018] In another embodiment, a first three-dimensional virtual overlay of the one or more three-dimensional virtual overlays can be different from a second three-dimensional virtual overlay.
[0019] In an aspect, the present disclosure provides a method for augmented reality-based learning, the method includes: obtaining, from an image capturing device, one or more images of a physical scene, the physical scene includes one or more elements configured for interaction with each other, each of the one or more elements provided with a position reference in any of x, y, and z axes; receiving, at a computing device, one or more images of the physical scene; analysing, at the computing device, the received one or more images to track a position of the one or more elements to extract a vector attribute of the one or more elements; analysing, at the computing device, upon interaction of any two or more of the one or more elements, a resultant attribute of interaction of the corresponding any two or more vector attributes; and rendering, at the computing device, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene, wherein, based on the extracted vector attributes of the one or more elements, and the extracted resultant attribute of the any two or more interacting elements, the computing device is configured to display, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the any two or more elements.
[0020] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The following drawings form part of the present specification and are included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0022] FIGs. 1A and 1B illustrate exemplary representations of a system for augmented reality-based learning, in accordance with an embodiment of the present disclosure.
[0023] FIG. 2 illustrates an exemplary flow diagram for a method for augmented reality-based learning, in accordance with an embodiment of the present disclosure.
[0024] FIG. 3 illustrates an exemplary computer system in which or with which embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
[0025] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0026] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0027] The present disclosure relates in general, to augmented reality and specifically, relates to an augmented reality-based learning assistant used to provide education in an interactive way. The system 100 can be vision-based and can track a marker in accordance with a particular marker definition, so as to overlay a virtual three-dimensional (3D) object or information over a real-world image by tracking the marker. Thereby, a user's perception of reality can be augmented.
[0028] FIGs.1A and 1B illustrate exemplary representations of a system for augmented reality-based learning, in accordance with an embodiment of the present disclosure.
[0029] Referring to FIG. 1A, the system 100 can be configured for interactive learning with augmented reality, and includes a user device that may include an image-capturing device 102. The system further includes an image processing unit 104, a marker tracking unit 106, a rendering unit 108, an interaction unit 110, a memory 114, and a display interface 112. The image-capturing device may be any of a camera and a video camera. The digital camera assembly of the user device can be placed in a movable camera stand. The system 100 can be configured for coordinating placement of an augmented reality/virtual world object into a scene relative to the position and orientation of a marker, in accordance with the disclosed embodiments.
[0030] In an embodiment of the present disclosure, the user device may be any of the well-known information communication and multimedia devices, including a tablet personal computer (PC), mobile communication terminal, mobile phone, personal digital assistant (PDA), smartphone, International Mobile Telecommunication 2000 (IMT-2000) terminal, Code Division Multiple Access (CDMA) terminal, Wideband Code Division Multiple Access (WCDMA) terminal, Global System for Mobile communication (GSM) terminal, General Packet Radio Service (GPRS) terminal, Enhanced Data GSM Environment (EDGE) terminal, Universal Mobile Communication Service (UMTS) terminal, digital broadcast terminal, and Asynchronous Transfer Mode (ATM) terminal.
[0031] The digital camera assembly of the user device can be placed in a movable camera stand. The assembly can include an imaging sensor, one or more optical elements, an image-capturing device, and image data generation circuits, adapted to convert image information acquired from the surroundings of the device into one or more digital image frames indicative of the acquired image information. Processing circuitry, including the image processing unit 104 (also referred to as processor 104, herein), may generate a set of display instructions for displaying a display image, which is at least partially based on information within an image frame indicative of an acquired image and one or more processing-circuit-rendered virtual objects.
[0032] In an embodiment, the image capturing device 102 can be configured to obtain one or more images of a physical scene. In an exemplary embodiment, the physical scene may include a wooden kit with a graph pasted on it. The physical scene may include one or more elements configured for interaction with each other, each of the elements provided with a position reference in any of the x, y, and z axes. The processor 104 is operatively coupled with a memory 114. The processor 104 can be configured to receive, from the image capturing device, one or more images of the physical scene. The marker tracking unit 106 can be configured to analyse the received one or more images to track a position of the one or more elements to extract a vector attribute of the one or more elements.
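As an illustration, once marker positions have been tracked, the extraction of a vector attribute described above reduces to a displacement computation. The following is a minimal sketch, assuming each position reference is an (x, y, z) tuple reported by the marker tracking unit; the function name and the example coordinates are hypothetical and not part of the disclosure.

```python
# Hypothetical sketch: derive a vector attribute from tracked marker
# positions. Assumes each position reference is an (x, y, z) tuple.

def extract_vector_attribute(origin, marker):
    """Displacement of a marker relative to the origin marker."""
    return tuple(m - o for m, o in zip(marker, origin))

# Origin marker tracked at (2, 3, 0); marker A tracked at (5, 7, 0).
vector_a = extract_vector_attribute((2, 3, 0), (5, 7, 0))
print(vector_a)  # (3, 4, 0)
```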
[0033] In another embodiment, the interaction unit 110 can analyse, upon interaction of any two or more of the elements, a resultant attribute of interaction of the corresponding two or more vector attributes, and can render, onto a rendering platform, a three-dimensional (3D) virtual overlay corresponding to the one or more images of the physical scene. A first three-dimensional virtual overlay of the one or more three-dimensional virtual overlays may be different from a second three-dimensional virtual overlay.
[0034] In another embodiment, based on the extracted vector attributes of the one or more elements, and the extracted resultant attribute of the any two or more interacting elements, the display interface 112 can display, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the any two or more elements. The final outcome can be seen on the display screen, which shows the resultant of the real scene along with some virtual objects. Thus, system 100 can enable young children to read books in more interactive and realistic ways by superimposing 3D rendered models onto books.
[0035] For instance, a user can capture the physical scene, e.g., a graph or any other mathematical object provided with markers. The markers include one or more vector attributes, e.g., origin, vector A, and vector B. The captured images are processed to track the position of the one or more markers to extract a vector attribute, and the resultant attribute, e.g., vector R, can be analysed based on the interaction of the vector attributes. The virtual objects or 3D objects can be rendered corresponding to the images of the graph. The shape formed on the graph can be controlled by moving the vector attributes in any direction; the user can move the vectors in all three directions using the graph. This can also be used in trigonometry to find missing values. Thus, young children can learn in a more interactive way.
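The interaction of two vector attributes into a resultant, as in the example above, amounts to component-wise vector addition. A minimal sketch under the same (x, y, z) tuple assumption; the function names and example values are illustrative, not taken from the disclosure:

```python
import math

def resultant(vec_a, vec_b):
    """Resultant vector R = A + B, computed component-wise."""
    return tuple(a + b for a, b in zip(vec_a, vec_b))

def magnitude(vec):
    """Euclidean length of a vector, e.g. for display alongside R."""
    return math.sqrt(sum(c * c for c in vec))

r = resultant((3, 4, 0), (1, -2, 0))
print(r)  # (4, 2, 0)
print(magnitude(r))  # sqrt(20), about 4.472
```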
[0036] In an exemplary embodiment, Peltier units can be used for cooling objects to below the ambient temperature or maintaining objects at a specific temperature by controlled heating or cooling. In another embodiment, the system includes a source of electrical power, such as a rechargeable battery. The term battery is intended to encompass all energy storage devices that deliver electricity. These energy storage devices may be rechargeable or single-use. This includes, but is not limited to, batteries using lead-acid, zinc-carbon, alkaline, nickel-cadmium, lithium, and lithium-ion technologies, capacitors, generators powered by springs or compressed gas or other mechanical energy storage mechanisms, and fuel cells. Further, system 100 gives life to textbook contents by displaying objects at a distance. Therefore, students can read books in interactive and realistic ways by superimposing 3D rendered models over the books with the assistance of augmented reality-based technology.
[0037] Referring to FIG. 1B, the system 100 includes the image-capturing device 102 configured to capture an image of an object used as an instructional tool. In practice, the image-capturing device 102 in the user device can capture an external object image, i.e., the image of the physical scene. In an exemplary embodiment, the physical scene can be a wooden kit with a graph pasted on it. The physical scene/object represents the instructional tool of an object, e.g., mathematical and geometrical objects. Since mathematical objects are complicated, an object that has a specific shape or a special pattern may be utilized to represent them.
[0038] The image processing unit 104 can receive and analyse the image information from the image-capturing device 102, and can obtain the simulated object corresponding to the captured image of the object. The image processing unit 104 can be operatively coupled to the memory 114 of the user device. The memory 114 can store pattern identification information of every surface of the object in advance, so that the image captured by the image-capturing device 102 can be identified no matter how the object is disposed or moved. The memory 114 can store various data, such as system objects. Executable instructions may also be stored in the memory 114 that can direct a computer or other programmable data processing circuitry to function in a particular manner.
[0039] In reality, the object may be used as an instructional tool and corresponds to a simulated object. The object may be a two-dimensional (2D) object or a three-dimensional (3D) object. The object can have a variety of shapes; its shape can be designed to comply with various requirements and adjusted according to the design demands of the instructional tool. The surfaces of the object may have different patterns for identification.
[0040] The user device can include a processor that can be in communication with each of a memory and input/output devices. The processor may include a microprocessor or other devices capable of being programmed or configured to perform computations and instruction processing in accordance with the disclosure. In an exemplary embodiment, the processor may be an Arduino processor. Such other devices may include microcontrollers, digital signal processors (DSPs), complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), discrete gate logic, and/or other integrated circuits, hardware, or firmware in lieu of or in addition to a microprocessor.
[0041] The memory 114 includes programmable software instructions that are executed by the processor. The processor may be embodied as a single processor or a number of processors. The processor and a memory may each be, for example, located entirely within a single computer or other computing device. The memory, which enables storage of data and programs, may include random-access memory (RAM), read-only memory (ROM), flash memory, and any other form of readable and writable storage medium.
[0042] The marker tracking unit 106 can include multiple markers, which can be two-dimensional (2D) objects such as symbols, images, etc., or three-dimensional (3D) objects such as models, toys, fingers, faces, and heat-emitting or light-emitting sources such as LEDs. In one embodiment, the user can define his/her own marker. The marker tracking unit 106 can determine the rotation state of a marker according to the vector attributes of the marker image. The markers include vector attributes such as the origin ('O'), vector A ('A'), vector B ('B'), and the resultant vector ('R'). For instance, four or more markers are placed on the graph, wherein a shape can be formed on the graph and controlled by moving the vectors in any direction, e.g., the X, Y, and Z directions.
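Determining a marker's rotation state from its vector attributes, as described above, can be sketched as computing the planar angle of the vector's x-y projection. This simplified version ignores rotation about the other axes, and the function name is hypothetical:

```python
import math

def marker_rotation_deg(vec):
    """Rotation of a marker about the z-axis, inferred from the angle of
    its vector attribute's x-y projection measured from the +x axis."""
    return math.degrees(math.atan2(vec[1], vec[0]))

print(marker_rotation_deg((1, 0, 0)))  # 0.0
print(marker_rotation_deg((0, 1, 0)))  # 90.0
```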
[0043] In an exemplary embodiment, a wooden kit can be provided with graphs pasted on the kit. The markers can be placed on the graph and can be permanently augmented in the application. The camera, which can be mounted on the tabletop, can then be used to scan the markers and overlay images in the real environment. Four markers may be used for symbolizing the origin ('O'), vector A ('A'), vector B ('B'), and the resultant vector ('R'). The object(s) can track the marker(s) in a tightly coupled fashion, thereby permitting an intuitive connection between the motion of the markers and the objects. The interaction unit 110 is configured to perform interaction between the one or more elements to produce the resultant vector.
[0044] The rendering unit 108 can be configured to render virtual objects for the detected markers. Image processing based on the vision-based system is applied directly to the target image to find the resultant scalar product of two vectors. The rendering unit 108 can render, onto a rendering platform, a 3D virtual overlay corresponding to the one or more images of the physical scene, wherein, based on the extracted vector attributes of the elements and the extracted resultant attribute of the any two or more interacting elements, the processor is configured to display, on the rendered 3D visual overlay, an augmented image pertaining to the interaction of the any two or more elements.
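The "resultant scalar product of two vectors" mentioned above is the ordinary dot product. A minimal sketch under the same (x, y, z) tuple assumption, with a hypothetical function name:

```python
def scalar_product(vec_a, vec_b):
    """Dot product A . B: the sum of component-wise products."""
    return sum(a * b for a, b in zip(vec_a, vec_b))

print(scalar_product((1, 2, 3), (4, 5, 6)))  # 32
print(scalar_product((1, 0, 0), (0, 1, 0)))  # 0 (orthogonal vectors)
```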
[0045] In an embodiment, the movement of all the markers, which can occur along any of the three axes (X, Y, Z), can be traced by the algorithm, and the resultant may vary depending upon the positions. The rendering unit 108 can render a virtual world object into the scene relative to the position and orientation of the marker. The processing unit can display the simulated object on the display interface of the user device. The final outcome can be seen on the display screen of the tabletop, which shows the resultant of the real scene along with some virtual objects.
[0046] The display 112 may be implemented with one of a liquid crystal display (LCD), an organic light emitting diode (OLED) display, and an active matrix OLED (AMOLED) display. The learner can be allowed to control a status of the object via object operation instructions on the display interface. The simulated object operation instructions indicate a control interface or a graphical interface, such as a graph, that can be shown on the display interface. The learner can be allowed to touch or click the control interface or the graphical interface to trigger the operations of the simulated object.
[0047] FIG. 2 illustrates an exemplary flow diagram for a method for augmented reality-based learning, in accordance with an embodiment of the present disclosure.
[0048] Referring to FIG. 2, the method includes obtaining 202, from an image capturing device 102, one or more images of a physical scene. The physical scene includes one or more elements configured for interaction with each other, each of the one or more elements provided with a position reference in any of the x, y, and z axes. The one or more images of the physical scene can be received 204 by a computing device. The method includes analysing 206, at the computing device, the received one or more images to track a position of the one or more elements to extract a vector attribute of the one or more elements.
[0049] In an embodiment, the method further includes analysing 208, at the computing device, upon interaction of any two or more of the elements, a resultant attribute of interaction of the corresponding any two or more vector attributes; and rendering 210, at the computing device, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene; wherein, based on the extracted vector attributes of the elements, and the extracted resultant attribute of the any two or more interacting elements, the computing device is configured to display 212, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the any two or more elements.
[0050] In an embodiment, the camera can be mounted on a tabletop to scan the markers and overlay images in the real environment. For instance, when video, e.g., a live video feed, is streamed by the camera, an image is captured and processed by the image processing unit to find the resultant scalar product of two vectors. The marker tracking unit tracks each marker and calculates its position. The movement of the markers is traced along the three axes (X, Y, and Z), and the virtual object is then combined with the real object and displayed on the display unit.
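The capture, track, compute, and display sequence in the paragraph above can be sketched end to end as below. The fixed marker positions, stage names, and the dictionary-based "overlay" stand in for real marker detection and rendering; they are assumptions for illustration only.

```python
# Hypothetical pipeline sketch: capture -> track markers -> compute
# resultant -> composite a virtual overlay onto the real frame.

def track_markers(frame):
    # A real system would detect markers in the frame; fixed positions
    # are returned here purely for illustration.
    return {"O": (0, 0, 0), "A": (3, 4, 0), "B": (1, 2, 0)}

def compute_resultant(markers):
    # Vector attributes are displacements from the origin marker 'O';
    # the resultant is their component-wise sum.
    o = markers["O"]
    a = tuple(p - q for p, q in zip(markers["A"], o))
    b = tuple(p - q for p, q in zip(markers["B"], o))
    return tuple(x + y for x, y in zip(a, b))

def composite(frame, resultant_vec):
    # Stand-in for rendering: pair the real frame with its virtual overlay.
    return {"frame": frame, "overlay": resultant_vec}

out = composite("frame-0", compute_resultant(track_markers("frame-0")))
print(out)  # {'frame': 'frame-0', 'overlay': (4, 6, 0)}
```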
[0051] In an exemplary embodiment, shapes of objects are constructed on the graph paper with the help of markers. For instance, if the user wants to draw a sine wave, the user can easily calculate the width and amplitude and change the vector directions accordingly. The user can move the vectors in all three directions using the graph. AR displays enable a user to merge real-world experiences with a virtual world via a visual overlay that supplements what the user views.
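The sine-wave example above can be made concrete: given an amplitude and a width on the graph, the (x, y) points the markers would trace along one period follow directly. A sketch with hypothetical names and parameters:

```python
import math

def sine_samples(amplitude, width, n=9):
    """(x, y) points along one period of a sine wave of the given
    amplitude, spread over the given width on the graph."""
    return [(width * i / (n - 1),
             amplitude * math.sin(2 * math.pi * i / (n - 1)))
            for i in range(n)]

points = sine_samples(amplitude=2.0, width=10.0, n=5)
# The quarter-period sample reaches the full amplitude:
print(points[1])  # (2.5, 2.0)
```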
[0052] The computing device can include a processor that can be in communication with each of a memory and input/output devices. The processor may include a microprocessor or other devices capable of being programmed or configured to perform computations and instruction processing in accordance with the disclosure. Such other devices may include microcontrollers, digital signal processors (DSPs), complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), discrete gate logic, and/or other integrated circuits, hardware, or firmware in lieu of or in addition to a microprocessor.
[0053] The memory 114 includes programmable software instructions that are executed by the processor. The processor may be embodied as a single processor or a number of processors. The processor and a memory may each be, for example, located entirely within a single computer or other computing device. The memory, which enables storage of data and programs, may include random-access memory (RAM), read-only memory (ROM), flash memory, and any other form of readable and writable storage medium.
[0054] FIG. 3 illustrates an exemplary computer system in which or with which embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
[0055] As shown in FIG. 3, computer system 300 includes an external storage device 310, a bus 320, a main memory 330, a read only memory 340, a mass storage device 350, communication port 360, and a processor 370. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 370 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 370 may include various units associated with embodiments of the present invention. Communication port 360 can be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fibre, a serial port, a parallel port, or other existing or future ports. Communication port 360 may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
[0056] Memory 330 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read only memory 340 can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor 370. Mass storage 350 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g., those available from Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[0057] Bus 320 communicatively couples processor(s) 370 with the other memory, storage, and communication blocks. Bus 320 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor 370 to the software system.
[0058] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to bus 320 to support direct operator interaction with computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 360. External storage device 310 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0059] The present disclosure provides a system that can enable young children to read books in more interactive and realistic ways by superimposing 3D rendered models onto books.
[0060] The present disclosure provides a cost-effective and lightweight system that operates reliably to produce the desired result.
[0061] The present disclosure provides a system that supports sustainable development and is more intuitive and user-friendly to a user.
[0062] The present disclosure provides a system in which one or more markers are placed on the graph, wherein shape is formed on the graph and can be controlled effectively by moving the vectors in any direction.
[0063] The present disclosure provides a system that can enable students to do mathematics and geometry with ease.
Claims:1. A system (100) for augmented reality-based learning, the system comprising:
an image capturing device (108) configured to obtain one or more images of a physical scene, the physical scene comprising a plurality of elements configured for interaction with each other, each of the plurality of elements provided with a position reference in any of x, y, and z axes; and
a processor (104) operatively coupled with a memory, said memory storing instructions executable by the processor to:
receive, from the image capturing device, one or more images of the physical scene;
analyse, the received one or more images to track a position of the plurality of elements to extract a vector attribute of the plurality of elements;
analyse, upon interaction of any two or more of the plurality of elements, a resultant attribute of interaction of the corresponding any two or more vector attributes; and
render, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene,
wherein, based on the extracted vector attributes of the plurality of elements, and the extracted resultant attribute of the any two or more interacting elements, the processor is configured to display, on the rendered three-dimensional visual overlay, an augmented image pertaining to the interaction of the any two or more elements.
2. The system as claimed in claim 1, wherein the plurality of elements comprise markers.
3. The system as claimed in claim 2, wherein the markers comprise vector attributes such as an origin, a vector A, a vector B, a resultant vector, and any combination thereof.
4. The system as claimed in claim 1, wherein the interaction of any two or more of the plurality of elements is controlled by moving the vector attributes in any direction.
5. The system as claimed in claim 1, wherein a power source is installed to supply power to a user device.
6. The system as claimed in claim 1, wherein a peltier unit is configured for cooling objects at a specific temperature.
7. The system as claimed in claim 1, wherein the image capturing device is installed in a movable table-top stand.
8. The system as claimed in claim 1, wherein a first three-dimensional virtual overlay of one or more three-dimensional virtual overlays is different from a second three-dimensional virtual overlay.
9. A method (200) for augmented reality-based learning, the method comprising:
obtaining (202), from an image capturing device, one or more images of a physical scene, the physical scene comprising a plurality of elements configured for interaction with each other, each of the plurality of elements provided with a position reference in any of x, y, and z axes;
receiving (204), at a computing device, one or more images of the physical scene;
analysing (206), at the computing device, the received one or more images to track a position of the plurality of elements to extract a vector attribute of the plurality of elements;
analysing (208), at the computing device, upon interaction of any two or more of the plurality of elements, a resultant attribute of interaction of the corresponding any two or more vector attributes; and
rendering (210), at the computing device, onto a rendering platform, a three-dimensional virtual overlay corresponding to the one or more images of the physical scene,
wherein, based on the extracted vector attributes of the plurality of elements, and the extracted resultant attribute of the any two or more interacting elements, the computing device is configured to display (212), on the rendered three-dimensional virtual overlay, an augmented image pertaining to the interaction of the any two or more elements.
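The claimed method steps (202) through (212) can be summarised in a minimal, non-limiting sketch. All class names, function names, and values below are assumptions introduced purely for illustration, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A tracked marker with a position reference in the x, y, z axes."""
    name: str
    position: tuple  # (x, y, z)

def extract_vector(element, origin):
    # Step (206): vector attribute of an element relative to the origin marker.
    return tuple(p - o for p, o in zip(element.position, origin.position))

def resultant(v1, v2):
    # Step (208): resultant attribute of two interacting vector attributes.
    return tuple(a + b for a, b in zip(v1, v2))

def render_overlay(elements, origin):
    # Steps (210)-(212): assemble the content that would be displayed
    # on the three-dimensional virtual overlay.
    vectors = [extract_vector(e, origin) for e in elements]
    r = vectors[0]
    for v in vectors[1:]:
        r = resultant(r, v)
    return {"vectors": vectors, "resultant": r}

# Illustrative usage with two hypothetical markers A and B.
origin = Element("origin", (0, 0, 0))
a = Element("A", (1, 2, 0))
b = Element("B", (3, -1, 0))
print(render_overlay([a, b], origin))
```

The actual system would obtain the element positions from the image capturing device (steps 202 and 204) rather than from hard-coded values, and would render the overlay graphically rather than returning a dictionary.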
| # | Name | Date |
|---|---|---|
| 1 | 202011033693-CLAIMS [24-01-2023(online)].pdf | 2023-01-24 |
| 2 | 202011033693-STATEMENT OF UNDERTAKING (FORM 3) [06-08-2020(online)].pdf | 2020-08-06 |
| 3 | 202011033693-CORRESPONDENCE [24-01-2023(online)].pdf | 2023-01-24 |
| 4 | 202011033693-FORM FOR STARTUP [06-08-2020(online)].pdf | 2020-08-06 |
| 5 | 202011033693-FORM FOR SMALL ENTITY(FORM-28) [06-08-2020(online)].pdf | 2020-08-06 |
| 6 | 202011033693-FER_SER_REPLY [24-01-2023(online)].pdf | 2023-01-24 |
| 7 | 202011033693-FORM-26 [24-01-2023(online)].pdf | 2023-01-24 |
| 8 | 202011033693-FORM 1 [06-08-2020(online)].pdf | 2020-08-06 |
| 9 | 202011033693-FER.pdf | 2022-07-25 |
| 10 | 202011033693-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-08-2020(online)].pdf | 2020-08-06 |
| 11 | 202011033693-FORM 18 [15-03-2022(online)].pdf | 2022-03-15 |
| 12 | 202011033693-EVIDENCE FOR REGISTRATION UNDER SSI [06-08-2020(online)].pdf | 2020-08-06 |
| 13 | 202011033693-FORM-26 [12-09-2020(online)].pdf | 2020-09-12 |
| 14 | 202011033693-DRAWINGS [06-08-2020(online)].pdf | 2020-08-06 |
| 15 | 202011033693-DECLARATION OF INVENTORSHIP (FORM 5) [06-08-2020(online)].pdf | 2020-08-06 |
| 16 | 202011033693-Proof of Right [12-09-2020(online)].pdf | 2020-09-12 |
| 17 | 202011033693-COMPLETE SPECIFICATION [06-08-2020(online)].pdf | 2020-08-06 |
| 18 | 202011033693-US(14)-HearingNotice-(HearingDate-09-10-2025).pdf | 2025-09-17 |
| 19 | 202011033693-FORM-26 [03-10-2025(online)].pdf | 2025-10-03 |
| 20 | 202011033693-Correspondence to notify the Controller [03-10-2025(online)].pdf | 2025-10-03 |
| 21 | 202011033693-Written submissions and relevant documents [24-10-2025(online)].pdf | 2025-10-24 |
| 22 | 202011033693-Annexure [24-10-2025(online)].pdf | 2025-10-24 |
| 23 | SearchHistoryE_22-07-2022.pdf | 2022-07-22 |