
A Method And A System For Perceptual Learning With Immersive And Interactive Content

Abstract: A method (200) for perceptual learning with immersive and interactive content, comprising the steps of: receiving (210) an image of an object or a text; classifying (220) the received image as the object or the text; extracting (230) one or more features from the received image of the object or the text to perform cosine similarity with prestored data; providing (240) one or more related contents to the image, in case the image is the object, based on the extracted one or more features of the image; selecting (250) any content of the one or more related contents; providing (260) one or more immersive and experiential contents based on the selected content; and displaying (270) the one or more immersive and experiential contents.


Patent Information

Application #
202141057969
Filing Date
13 December 2021
Publication Number
53/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
vivek@boudhikip.com
Parent Application
Patent Number
Legal Status
Grant Date
2022-07-29
Renewal Date

Applicants

3RDFLIX VISUAL EFFECTS PRIVATE LIMITED
SY NO 37/A 37P, PLOT NO. 6P, 2-91/77/2/ST/2 2ND FLOOR, SIGNATURE TOWERS, KONDAPUR, HYDERABAD, TELANGANA INDIA

Inventors

1. Charu Noheria
Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India
2. Rahul Rudraraju
Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India
3. Subbarao Siddabattula
Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India
4. Bhanupriya Vaddiparthi
Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India
5. Ilango Thulsimani
Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India

Specification

Claims:

We Claim:

1. A method (200) for perceptual learning with immersive and interactive content, the method (200) comprising steps of:
receiving (210) an image of an object or a text;
classifying (220) the received image as the object or the text;
extracting (230) one or more features from the received image of the object or the text to perform cosine similarity with prestored data;
providing (240) one or more related contents to the image in case the image is the object, based on the extracted one or more features of the image;
selecting (250) any content of the one or more related contents;
providing (260) one or more immersive and experiential contents based on the selected content; and
displaying (270) the one or more immersive and experiential contents.
2. The method (200) as claimed in claim 1, wherein the one or more related contents include index images or related images to the received image.
3. The method (200) as claimed in claim 2, wherein the related images are determined based on cosine distance.
4. The method (200) as claimed in claim 1, wherein, when the received image is the text, the method comprises: extracting the text in the received image using optical character recognition (OCR); and
generating a string using the extracted text and embedding the string into a vector space for performing a search.
5. The method (200) as claimed in claim 4, wherein, when the text is a question, the method comprises determining an appropriate solution to the question using Bidirectional Encoder Representation from Transformers (BERT) based semantic search.
6. The method (200) as claimed in claim 1, wherein the one or more immersive and experiential contents is a curriculum linked content selected from a group comprising 3D model, knowledge tree, videos, audio, simulations, practice test or a combination thereof.
7. The method (200) as claimed in claim 1, wherein the one or more features are extracted from the received image of the object or the text using a reverse image search machine learning model or a sentence-transformer-based BERT model.
8. A system (100) for perceptual learning with immersive and interactive content, the system (100) comprising:
a user device (106) configured to:
receive an image of an object or a text using an image capturing device (104);
select any content of one or more related contents;
display one or more immersive and experiential contents; and
a processing module (102) in communication with the user device, configured to:
classify the received image as the object or the text;
extract one or more features from the received image of the object or the text to perform cosine similarity with prestored data;
provide one or more related contents to the image if the image is the object, based on the extracted one or more features of the image; and
provide one or more immersive and experiential contents based on the selected content.
9. The system (100) as claimed in claim 8, wherein the one or more related contents include index images or related images to the received image.
10. The system (100) as claimed in claim 9, wherein the related images are determined based on cosine distance.
11. The system (100) as claimed in claim 8, wherein, when the received image is the text, the processing module (102) is configured to:
extract the text in the received image using optical character recognition (OCR); and
generate a string using the extracted text and embed the string into a vector space to perform a search.
12. The system (100) as claimed in claim 11, wherein, when the text is a question, the processing module (102) is configured to determine an appropriate solution to the question using Bidirectional Encoder Representation from Transformers (BERT) based semantic search.
13. The system (100) as claimed in claim 8, wherein the one or more immersive and experiential contents is a curriculum linked content selected from a group comprising 3D model, knowledge tree, videos, audio, simulations, practice test or a combination thereof.
14. The system (100) as claimed in claim 8, wherein the processing module (102) is configured to extract the one or more features from the received image of the object or the text using a reverse image search machine learning model.

Description:

FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10 and rule 13]

A METHOD AND A SYSTEM FOR PERCEPTUAL LEARNING WITH IMMERSIVE AND INTERACTIVE CONTENT

We, 3rdflix Visual Effects Private Limited, an Indian company, having address at Sy No 37/A 37p, Plot No. 6p, 2-91/77/2/St/2 2nd Floor, Signature Towers, Kondapur, Hyderabad, Telangana 500084 India

The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
Embodiments of the present invention generally relate to educational technologies. More particularly, the invention relates to a method and a system for perceptual learning from the surroundings using image processing and Natural Language Processing, combined with immersive and interactive content.
BACKGROUND OF THE INVENTION
The education system has constantly evolved with new developments in technology. In ancient times, people would watch and learn, but as studies grew more complex, that method became obsolete. To convey the knowledge available today, it is important to visualize every detail of an educational concept. There are products that can take a string or an image and respond with questions and solutions related to the search criteria, and there are solutions in the market that can take an image or a string as search input and return information about it. One such solution provides a partial answer to the problem: it returns matches to the input image or search, along with some relevant text and images, but does not go a step further and connect the search intent with curriculum-specific adaptive data. There is, however, no solution in the market tailor-made for EdTech usage that can take an image or a string as input and return curriculum-linked immersive and experiential content. Moreover, all existing solutions require massive amounts of training data, whereas the present invention requires only a few thousand images to become smart and meaningful. The existing technologies only return relevant unidirectional data about the search.
The problem for which the present invention aims to provide a solution is to provide perceptual learning to users. A user can learn in many ways, such as visually, verbally or logically, but perceptual learning is the best way to learn, through experience and practice. Studies have shown that children have an incredible ability to essentially rewire their brains in response to the stimulation they receive, showing rapid improvement in perceptual ability, and there are studies showing that neurons carry more information about an attribute when performing a perceptual task related to that attribute.
Therefore, there is a need for an improved method and system for perceptual learning from the surroundings using image processing and Natural Language Processing, combined with immersive and interactive content, which is a leap forward in learning and does not suffer from the above-mentioned limitations. Such a method and system should be far more efficient than the prior art.
OBJECT OF THE INVENTION
An object of the present invention is to provide a method for perceptual learning with immersive and interactive content.
Another object of the present invention is to provide a system for perceptual learning with immersive and interactive content.
Yet another object of the present invention is to provide curriculum-relevant immersive and experiential content using 3D models, knowledge trees, videos, simulations and practice tests.
Yet another object of the present invention is to provide perceptual learning from the surroundings using image processing and Natural Language Processing, combined with immersive and interactive content.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simple manner, which are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the subject matter, nor to determine the scope of the invention.
According to a first aspect of the present invention, there is provided a method for perceptual learning with immersive and interactive content. The method comprises the steps of receiving an image of an object or a text; classifying the received image as the object or the text; extracting one or more features from the received image of the object or the text, or from a simple string, to perform cosine similarity with prestored data; providing one or more related contents to the image, in case the image is the object, based on the extracted one or more features of the image; selecting any content of the one or more related contents; providing one or more immersive and experiential contents based on the selected content; and displaying the one or more immersive and experiential contents.
In accordance with an embodiment of the present invention, the one or more related contents include index images or related images to the received image.
In accordance with an embodiment of the present invention, the related images are determined based on cosine distance.
In accordance with an embodiment of the present invention, when the received image is the text, the method comprises extracting the text in the received image using optical character recognition (OCR), generating a string using the extracted text, and embedding the string into a vector space to perform a search.
In accordance with an embodiment of the present invention, when the text is a question, the method comprises determining an appropriate solution to the question using Bidirectional Encoder Representation from Transformers (BERT) based semantic search.
In accordance with an embodiment of the present invention, the one or more immersive and experiential contents is a curriculum linked content selected from a group comprising 3D model, knowledge tree, videos, audio, simulations, practice test or a combination thereof.
In accordance with an embodiment of the present invention, the one or more features are extracted from the received image of the object or the text using a reverse image search machine learning model or a sentence-transformer-based BERT model.
According to a second aspect of the present invention, there is provided a system for perceptual learning with immersive and interactive content. The system comprises a user device configured to receive an image of an object or a text using an image capturing device, select any content of one or more related contents, and display one or more immersive and experiential contents; and a processing module in communication with the user device, configured to classify the received image as the object or the text, extract one or more features from the received image of the object or the text to perform cosine similarity with prestored data, provide one or more related contents to the image if the image is the object, based on the extracted one or more features of the image, and provide one or more immersive and experiential contents based on the selected content.
In accordance with an embodiment of the present invention, the one or more related contents include index images or related images to the received image.
In accordance with an embodiment of the present invention, the related images are determined based on cosine distance.
In accordance with an embodiment of the present invention, when the received image is the text, the processing module is configured to extract the text in the received image using optical character recognition (OCR), generate a string using the extracted text, and embed the string into a vector space to perform a search.
In accordance with an embodiment of the present invention, when the text is a question, the processing module is configured to determine an appropriate solution to the question using Bidirectional Encoder Representation from Transformers (BERT) based semantic search.
In accordance with an embodiment of the present invention, the one or more immersive and experiential contents is a curriculum linked content selected from a group comprising 3D model, knowledge tree, videos, audio, simulations, practice test or a combination thereof.
In accordance with an embodiment of the present invention, the processing module is configured to extract the one or more features from the received image of the object or the text using a reverse image search machine learning model.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
Fig. 1 illustrates a system for perceptual learning with immersive and interactive content, in accordance with an embodiment of the present invention;
Fig. 2 illustrates a method for perceptual learning with immersive and interactive content, in accordance with an embodiment of the present invention; and
Figs. 3A-3B illustrate the information flow and an exemplary implementation of the system and method shown in Fig. 1 and Fig. 2, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of drawing or drawings described and is not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It is implied that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense, (i.e., meaning must). Further, the words "a" or "an" mean "at least one” and the word “plurality” means “one or more” unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all these matters form a part of the prior art base or were common general knowledge in the field relevant to the present invention.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
Referring to the drawings, Figure 1 illustrates a system (100) for perceptual learning with immersive and interactive content, in accordance with an embodiment of the present invention. Perceptual learning is the best way to learn, through experience and practice. Studies have shown that children have an incredible ability to essentially rewire their brains in response to the stimulation they receive and show rapid improvement in perceptual ability. Further, there are studies showing that neurons carry more information about an attribute when performing a perceptual task related to that attribute. As shown in Figure 1, the system (100) comprises an image capturing device (104), one or more user devices (106) associated with one or more users, and a processing module (102) connected with the image capturing device (104) and the one or more user devices (106). In an embodiment of the present invention, the one or more user devices (106) may have the image capturing device (104) incorporated and function as a single device performing the functions of both devices.
The image capturing device (104) may include, but is not limited to, a camera, a video/audio recorder, a 3D scanner, a 3D video sensor and an AI camera. The image capturing device (104) is envisaged to include communication capabilities with other devices through wired or wireless connections. This enables the image capturing device (104) to connect with the processing module (102) and send captured images to it.
Additionally, the processing module (102) is envisaged to include computing capabilities, such as a memory unit (1022) configured to store machine-readable instructions. The machine-readable instructions may be loaded into the memory unit (1022) from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and flash drives. Alternately, the machine-readable instructions may be loaded into the memory unit (1022) in the form of a computer software program. The memory unit (1022) may accordingly be selected from a group comprising EPROM, EEPROM and flash memory. The processing module (102) further includes a processor (1024) operably connected with the memory unit (1022). In various embodiments, the processor (1024) may be a microprocessor, selected from, but not limited to, an ARM-based or Intel-based processor, or may take the form of a field-programmable gate array (FPGA), a general-purpose processor or an application-specific integrated circuit (ASIC).
Further, the processing module (102) comprises a communication module (1028) configured to enable connection of the image capturing device (104) and the one or more user devices (106). The connection may be wired or wireless. In that sense, the communication module (1028) may include a Power over Ethernet switch, USB ports, etc., allowing transfer of data from the image capturing device (104) to the processing module (102), and from the processing module (102) to the one or more user devices (106), via Ethernet cable, USB cable, etc. Additionally, or alternately, the communication module (1028) may be an Internet of Things (IoT) module, Wi-Fi module, Bluetooth module, RF module, etc., adapted to enable wireless communication between the image capturing device (104), the processor (1024) and the one or more user devices (106) via a wireless communication network (110).
The wireless communication network (110) may be, but is not limited to, a Bluetooth network, RF network, NFC, Wi-Fi network, Local Area Network (LAN) or a Wide Area Network (WAN). The wireless communication network (110) may be implemented using a number of protocols, such as, but not limited to, TCP/IP, 3GPP, 3GPP2, LTE, IEEE 802.x, etc. In one embodiment, all the components of the system (100) are connected with each other via the communication network (110).
In accordance with an additional or alternative embodiment of the present invention, the processing module (102) may also include a registration module (1026) adapted to receive details of one or more users (302) and register the one or more users on the system (100). The details may include, but are not limited to, username, contact number, email ID, age, gender, areas of interest, a study curriculum, etc. Besides, the registration module (1026) may also have a biometrics receiver configured to register the one or more users via their biometrics, which may include fingerprint recognition, face recognition, iris recognition or a combination thereof. In one embodiment, the processing module (102) may be a stand-alone device where a user of the one or more users may come and register himself/herself for studies or other related activities.
Further, the processing module (102) may also include a user interface. The user interface may include a display envisaged to show the data received from the image capturing device (104) and the results of the image and/or video analysis. The display may be, but is not limited to, a light-emitting diode (LED) display, an electroluminescent display (ELD), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or an AMOLED display. Furthermore, the user interface may include accessories such as a keyboard, a mouse, etc., envisaged to provide input capability to enable a user to enter the details mentioned above or a captured image. In another embodiment, the user interface may be a touch-input-based display that integrates the input and output functionalities.
Additionally, the one or more user devices (106) are connected with the processing module (102) via a wired or wireless connection. Herein, the one or more user devices (106) may be selected from computing devices such as a desktop PC, a laptop, a PDA or a hand-held computing device such as a smartphone or a tablet. As already mentioned, the one or more user devices (106) are associated with one or more users. In another embodiment, instead of the processing module (102) and the image capturing device (104) being stand-alone devices, the one or more user devices (106) may house the processing module (102) and the image capturing device (104) along with their functionalities. The one or more user devices (106) already include a microprocessor (1024) for processing and communication capabilities via wired or wireless connections, and may be provided with an image capturing device (104), such as a camera or an AI camera, and/or multiple sensors for capturing images and recording audio/video.
In yet another embodiment, the system (100) could be implemented as a distributed system, where the image capturing device (104) and the processing module (102) may be disposed at different locations from each other, and/or the processing module (102) could be implemented on a server-side computing device or in a cloud computing environment. It will be appreciated by a skilled addressee that there are multiple arrangements in which the present invention can be implemented, without departing from the scope of the present invention. The processing module (102) is also envisaged to implement Artificial Intelligence, Machine Learning and deep learning for data collation and processing.
In accordance with an embodiment of the present invention, the system (100) may also include a data repository (108). The data repository (108) may be a local storage (such as SSD, eMMC, flash, SD card, etc.) or a cloud-based storage. In any manner, the data repository (108) is envisaged to be capable of providing the data to the processing module (102) when the data is queried appropriately using applicable security and other data transfer protocols. The data repository (108) may store, but is not limited to, previous and/or live images, videos, audio, 3D immersive content, equations and solutions. It is also envisaged to store various charts, tables and learning contents, such as practical videos and manipulatable 3D content, prepared for the users. In one embodiment of the present invention, the processing module (102) may include AI and deep-learning-based models trained on the above data, to compare, assess and update the database based on the images received from the image capturing device (104) or the internet in real time.
Figure 2 illustrates a method (200) for perceptual learning with immersive and interactive content, in accordance with an embodiment of the present invention. The present invention is focused on perceptual learning, delivering an engaging user experience with immersive and interactive content. The method (200) will be understood more clearly with the help of the exemplary implementation and information flow shown in Figures 3A-3B, which show an exemplary use of the present invention for displaying immersive and interactive content.
As shown in Figure 2, the method (200) starts at step 210 by receiving an image of an object or a text. In this step, as shown in Figure 3A, a user device (106) of the one or more user devices (106) may receive the image of the object or the text captured by the image capturing device (104). The image may be captured at the discretion of a user of the one or more users. The user, at any time or place, can take an image of an object or a text, which may be, but is not limited to, a toy, a vehicle, a building, a machine, or a type of question that they come across in daily life. Further, in accordance with an additional or alternative embodiment, the image capturing device (104) may capture a video stream, and the user device (106) may receive the video stream from the image capturing device (104).
Next, at step 220, the processing module (102) is configured to classify (220) the received image as the object or the text, i.e., determine whether the received image is the object or the text. Again, as shown in Figure 3A, for example, the processing module (102), by comparing the captured image with prestored images or by using AI and ML, may determine whether the image is of an object or of a text. The prestored data may include, but is not limited to, any defined shape or a shape memorized by the processing module (102). On comparison, the processing module (102) may classify the received image as the object or the text. Further, in an additional or alternative embodiment, the processing module (102) may identify the object or the text in the received video stream.
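By way of illustration only, the following is a minimal sketch of such an object-vs-text classifier. The specification leaves the classification technique open, so the use of pytesseract and the word-count threshold below are assumptions for illustration, not the claimed method:

```python
# Hypothetical object-vs-text classifier: if OCR finds enough confident
# words in the image, treat it as text; otherwise treat it as an object.
# pytesseract and the thresholds are illustrative assumptions only.
import pytesseract
from PIL import Image

def classify_image(image_path: str, min_words: int = 5) -> str:
    data = pytesseract.image_to_data(
        Image.open(image_path), output_type=pytesseract.Output.DICT
    )
    # Count OCR word detections with reasonable confidence.
    confident_words = [
        word for word, conf in zip(data["text"], data["conf"])
        if word.strip() and int(conf) > 60
    ]
    return "text" if len(confident_words) >= min_words else "object"
```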
Next, at step 230, as shown in Figure 3A, the processing module (102) is configured to extract one or more features from the received image of the object or the text to perform cosine similarity with prestored data. In accordance with an embodiment of the present invention, the processing module (102) may extract the one or more features from the received image of the object or the text using a reverse image search machine learning model or an NLP-based sentence transformer. The one or more features, or feature vector, are a list of numbers used to abstractly represent and quantify the received image. For example, once the received image is classified as an image with an object, the features of the image, which could be specific structures in the image such as points, edges or objects, may be extracted as part of a dimensionality-reduction process, in which an initial set of raw data is divided and reduced into more manageable groups to make processing easier, using, but not limited to, VGG16, a convolutional neural network model.
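For illustration, a minimal sketch of VGG16-based feature extraction as described above, assuming the Keras pretrained model; the 224x224 input size and average pooling are conventional choices, not requirements of the specification:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Headless VGG16: the classifier layers are dropped and the final
# convolutional feature maps are average-pooled into one 512-dim vector.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(image_path: str) -> np.ndarray:
    img = image.load_img(image_path, target_size=(224, 224))
    batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(batch).flatten()  # feature vector for the image
```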
After extraction, a distance (cosine score) is calculated using cosine similarity against pickle files consisting of features of the indexed images in the form of embeddings. The embeddings are low-dimensional representations of discrete data as continuous vectors. The cosine similarity measures the similarity between two vectors of an inner product space: it is the cosine of the angle between the two vectors and indicates whether they point in roughly the same direction. Further, NumPy is used for scientific computing; it provides a multidimensional array object, as well as variations such as masks and matrices, which can be used for various mathematical operations. Furthermore, in an additional or alternative embodiment, the processing module (102) may extract the one or more features of the object or the text from the received video or image.
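A minimal sketch of this cosine-score lookup, assuming the indexed embeddings are stored in a pickle file mapping image identifiers to NumPy vectors (the file name and layout are assumptions):

```python
import pickle
import numpy as np

# Hypothetical index file: {image_id: embedding vector} for indexed images.
with open("indexed_features.pkl", "rb") as f:
    index = pickle.load(f)

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between the two vectors: a.b / (|a||b|).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbours(query: np.ndarray, k: int = 5) -> list:
    # Rank every indexed image by cosine similarity to the query features.
    scores = [(image_id, cosine_score(query, vec)) for image_id, vec in index.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]
```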
Later, at step 240, as shown in Figure 3B, the processing module (102) may be configured to provide one or more related contents to the image, if the image is the object, based on the extracted one or more features of the image or the video stream. The one or more related contents may include index images or images related to the received image. As shown in Figure 3B, for example, assuming the image to be an image with an object, the image may be sent to the reverse image search AI or ML model, which extracts its features and, through cosine distance, identifies the closest neighbours. The user is then provided with related content based on the image picked from the displayed closest neighbours to the searched image.
In accordance with an embodiment of the present invention, in case the received image is the text, the processing module (102) may be configured to extract the text in the received image using optical character recognition (OCR). Further, the processing module (102) may be configured to generate a string using the extracted text and embed it into a vector space using a sentence-transformer-based BERT model for performing a search. The embedding is used to calculate the cosine distance against the indexed data to determine the closest neighbour. In case the text is a question, the processing module (102) is configured to determine an appropriate solution to the question using Bidirectional Encoder Representation from Transformers (BERT) based semantic search. For example, assuming the received image is classified as an image with a question, the processing module (102) may extract the data in the image using, but not limited to, MathPix OCR, and find the appropriate solution using BERT-based semantic search by encoding the OCR-converted text and finding the distance (cosine score) to the previously encoded data. Further, all extracted data in the data repository (108), which can be sentences, paragraphs or documents, is embedded into a vector space. During the search, the generated string, or query, is embedded into the same vector space, and the closest embeddings from the data repository (108) are found using cosine similarity.
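For illustration, a minimal sketch of the BERT-based semantic search over prestored solutions, assuming the sentence-transformers library and an off-the-shelf model; the model name and the toy corpus are assumptions, and the MathPix OCR step (a commercial API) is omitted:

```python
from sentence_transformers import SentenceTransformer, util

# Assumed off-the-shelf sentence-transformer; the specification only
# requires a sentence-transformer-based BERT model.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical prestored solutions (sentences, paragraphs or documents),
# encoded once into the vector space.
corpus = ["Solution to question A ...", "Solution to question B ..."]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

def answer(question_text: str, top_k: int = 5) -> list:
    # Embed the OCR-extracted question into the same vector space and
    # rank the prestored solutions by cosine similarity.
    query_embedding = model.encode(question_text, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [(corpus[hit["corpus_id"]], hit["score"]) for hit in hits]
```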
Next, at step 250, as shown in Figure 3B, the one or more related contents may be sent to and displayed on the user device. As shown in Figure 3B, the user, using the user device (106), may select any content of the one or more related contents. The one or more related contents may include index images or images related to the received image; the related images may be determined based on cosine distance. For example, in accordance with an embodiment of the present invention, after classifying the image as an object or a text, the image is run through a search algorithm to find similar images within the data set and display appropriate curriculum-linked immersive and experiential content. That is, after the one or more related contents are obtained, the top five related contents, which may be, but are not limited to, curriculum-linked content, are extracted from the database and presented to the user. The information displayed includes videos, hands-on learning experiences (simulations), a 3D model of the object, knowledge tags about the model and relevant questions. The user may select any one of the five related contents using the user device.
Further, at step 260, as shown in Figure 3B, the processing module (102) is configured to provide one or more immersive and experiential contents based on the selected content. The one or more immersive and experiential contents are curriculum-linked content selected from a group comprising a 3D model, a knowledge tree, videos, audio, simulations, a practice test or a combination thereof. For example, the user may be provided with related content based on the image picked from the displayed closest neighbours to the searched image. The information displayed includes videos, hands-on learning experiences (simulations), a 3D model of the object, knowledge tags about the model and relevant questions.
Later, at step 270, as shown in Figure 3B, the user device (106) is configured to receive the one or more immersive and experiential contents based on the selected content and display them. The one or more immersive and experiential contents may be curriculum-relevant immersive and experiential content using 3D models, knowledge trees, videos, simulations and practice tests. In an additional or alternative embodiment, the user may manipulate the immersive and experiential content using the user device: through haptic feedback, the user may manipulate content such as 3D models to change characteristics such as size, color, properties, position, etc.
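Tying the steps together, a minimal end-to-end sketch of the pipeline of Figure 2, reusing the hypothetical helpers sketched above (classify_image, extract_features, nearest_neighbours, answer); every name here is an assumption for illustration:

```python
import pytesseract
from PIL import Image

def perceptual_learning_pipeline(image_path: str) -> list:
    # Steps 210-220: receive the image and classify it as object or text.
    kind = classify_image(image_path)
    if kind == "object":
        # Steps 230-240: extract VGG16 features and retrieve the closest
        # indexed images by cosine similarity.
        related = nearest_neighbours(extract_features(image_path))
    else:
        # Text branch: OCR the image, then run BERT-based semantic search.
        question = pytesseract.image_to_string(Image.open(image_path))
        related = answer(question)
    # Steps 250-270: the user picks one related content and the device
    # displays the linked immersive and experiential content.
    return related
```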
In this manner, the present invention has the ability to find nearest neighbours using regression analysis and suggest relevant results which, in turn, are connected to the user's curriculum, bringing up information encompassing the search intent. The information served is immersive and experiential in nature, stimulating perceptual learning. The present invention provides curriculum-relevant immersive and experiential content using 3D models, knowledge trees, videos, simulations and practice tests. It can take an image or a string as input and return curriculum-linked immersive and experiential content. Moreover, it requires only a few thousand images to become smart and meaningful.
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It is implied that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer-executable instructions residing on a suitable computer-readable medium. Suitable computer-readable media may include volatile (e.g., RAM) and/or non-volatile (e.g., ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that, unless specifically stated otherwise, discussions throughout the description utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like refer to the actions and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission or display devices.
Various modifications to these embodiments will be apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings, but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to cover all such alternatives, modifications and variations that fall within the scope of the present invention and the appended claims.

Documents

Application Documents

# Name Date
1 202141057969-STATEMENT OF UNDERTAKING (FORM 3) [13-12-2021(online)].pdf 2021-12-13
2 202141057969-FORM FOR STARTUP [13-12-2021(online)].pdf 2021-12-13
3 202141057969-FORM FOR SMALL ENTITY(FORM-28) [13-12-2021(online)].pdf 2021-12-13
4 202141057969-FORM 1 [13-12-2021(online)].pdf 2021-12-13
5 202141057969-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-12-2021(online)].pdf 2021-12-13
6 202141057969-EVIDENCE FOR REGISTRATION UNDER SSI [13-12-2021(online)].pdf 2021-12-13
7 202141057969-DRAWINGS [13-12-2021(online)].pdf 2021-12-13
8 202141057969-DECLARATION OF INVENTORSHIP (FORM 5) [13-12-2021(online)].pdf 2021-12-13
9 202141057969-COMPLETE SPECIFICATION [13-12-2021(online)].pdf 2021-12-13
10 202141057969-FORM-26 [17-12-2021(online)].pdf 2021-12-17
11 202141057969-FORM-9 [22-12-2021(online)].pdf 2021-12-22
12 202141057969-FORM FOR STARTUP [22-12-2021(online)].pdf 2021-12-22
13 202141057969-EVIDENCE FOR REGISTRATION UNDER SSI [22-12-2021(online)].pdf 2021-12-22
14 202141057969-STARTUP [28-12-2021(online)].pdf 2021-12-28
15 202141057969-FORM28 [28-12-2021(online)].pdf 2021-12-28
16 202141057969-FORM 18A [28-12-2021(online)].pdf 2021-12-28
17 202141057969-FER.pdf 2022-01-18
18 202141057969-RELEVANT DOCUMENTS [14-06-2022(online)].pdf 2022-06-14
19 202141057969-Proof of Right [14-06-2022(online)].pdf 2022-06-14
20 202141057969-OTHERS [14-06-2022(online)].pdf 2022-06-14
21 202141057969-FORM 13 [14-06-2022(online)].pdf 2022-06-14
22 202141057969-FER_SER_REPLY [14-06-2022(online)].pdf 2022-06-14
23 202141057969-DRAWING [14-06-2022(online)].pdf 2022-06-14
24 202141057969-Correspondence_Form-1_22-06-2022.pdf 2022-06-22
25 202141057969-PatentCertificate29-07-2022.pdf 2022-07-29
26 202141057969-IntimationOfGrant29-07-2022.pdf 2022-07-29

Search Strategy

1 202141057969E_04-01-2022.pdf

ERegister / Renewals

3rd: 08 Dec 2023

From 13/12/2023 - To 13/12/2024

4th: 12 Dec 2024

From 13/12/2024 - To 13/12/2025