
System And Method For Providing Augmented Reality Based Location Service

Abstract: An interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment representative of a real-time wide area airport environment, the interactive immersive environment simulating a virtual element, the training system comprising: a wearable communication device having at least one media capturing unit, wherein the media capturing unit is capable of acquiring auditory and/or visual media in any specific format; a map generation engine for generating a three dimensional (3D) map, comprising a tangible instrument module, wherein the map generation unit is capable of: receiving virtual image data from an area database associated with a virtual map of a virtual scene, receiving physical image data from the media capturing unit, the physical image data associated with the wide area airport environment, determining perimeter information of the wide area airport environment, determining an orientation of the image data based on the data received from motion sensors, determining boundary information of an obstacle associated with the wide area airport environment, and using the perimeter information and boundary information to generate a real scene map associated with the physical environment including an obstacle and one or more free space areas; an extended reality (XR) rendering engine comprising an immersive audio-visual module configured to record one or more immersive audio-visual scenes, an immersive audio/video scene comprising one or more users and immersion tools, wherein the immersive audio-visual module is communicatively coupled to the tangible instrument module, the user interacting with the tangible instrument module for controlling the virtual element in the interactive immersive environment, and generating a progressive representation to the wearable communication device, the representation associated with virtual scene pixels of the virtual map corresponding to real scene points of the real scene map in a specific time interval; and a learning management module that obtains and evaluates a plurality of performance metric datasets related to the virtual element being simulated in the interactive computer simulation station, the plurality of performance metric datasets representing results of the interactions between the user and the tangible instrument module. Ref. Fig. 1, 2


Patent Information

Application #
202241007562
Filing Date
12 February 2022
Publication Number
06/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2024-02-29
Renewal Date

Applicants

SmartGC Pro Edutech Private Limited
19/753, Puthussery Road, East Fort Gate, Tripunithura, Ernakulam, Kerala, India-682301

Inventors

1. Sunish M S
Maliakkal House Puthussery road Tripunithura - 682301 Ernakulam Kerala

Specification

DESC: In order to provide a solution to the above drawbacks, the present invention provides an apparatus, system and method in which content rendered in Virtual Reality (VR), Augmented Reality (AR) or Mixed Reality (MR), which together with other similar technologies will be individually and collectively referred to as extended reality (XR), is provided to one or more user devices for use by users, individually or collectively, based on at least one of the parameters of location identification and object recognition. Additionally, and as an exemplary embodiment of the present invention, the user device could be auto-populated with XR content objects based on a location, and the XR content objects could be instantiated based on object recognition within the location.

In an embodiment of the present invention, a content management system is disclosed comprising a content management engine coupled with an area database and a content database. The content management engine can be configured to communicate with the databases and perform various steps in order to provide content objects to one or more devices for modification or instantiation in an immersive environment.

In an embodiment of the present invention, the area database could be configured to store area data related to an area of interest, such as within the airport. Non-limiting examples of such area data include image data, video image data, real-time image data, still image data, signal data, audio data and map data. Area data could also include any other suitable data related to a layout of an area. The content database could be configured to store augmented reality or other digital content objects of various modalities, including, for example, image content objects, video content objects, or audio content objects. The said content objects could be associated with one or more real world objects viewable from an area of interest.
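
The following is a minimal sketch of how the two stores described above might be represented; the record classes, field names and modality labels are illustrative assumptions, not structures taken from the specification.

```python
# Hypothetical records for the area database and the XR content database.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AreaRecord:
    area_id: str
    map_data: Optional[bytes] = None                        # blueprint, CAD export, etc.
    image_data: List[bytes] = field(default_factory=list)   # still / real-time captures
    signal_data: List[dict] = field(default_factory=list)   # WiFi / beacon signal samples

@dataclass
class XRContentObject:
    content_id: str
    modality: str                                            # "image" | "video" | "audio"
    payload: bytes = b""
    linked_real_world_objects: List[str] = field(default_factory=list)
```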

In an embodiment of the present invention, the content management engine of the present invention comprises an XR rendering engine that is configured to obtain an initial map of an area of interest from the area data within the airport area database. Obtaining the initial map could comprise obtaining CAD drawings, blueprints, 3D models or, in exemplary embodiments, robot- or drone-created maps or other representations from the area database itself, or could comprise obtaining area data such as image data, signal data, video data, audio data, views data, viewable object data, points of interest data, field of view data, etc., and generating the initial map from that data.

In an embodiment of the present invention, the XR rendering engine could then derive a set of views of interest from at least one of the initial map and other area data, such views of interest being representative of where users would look while navigating through various portions of the area of interest. The views of interest could be derived by the map generation engine, or via recommendations, requests or other inputs of one or more users such as a potential viewer, advertiser, manager, developer, etc., or could be modeled based on some or all of the area data. The views of interest could comprise various information including but not limited to a view-point origin, a field of interest, an owner, metadata, a direction (e.g., a vector, an angle, etc.), an orientation (e.g., pitch, yaw, roll, etc.), a cost, a search attribute, a descriptor set, an object of interest, or any combination or multiples thereof. Once the views of interest have been derived, the XR rendering engine could obtain a set of XR content objects such as virtual objects, chroma key content, digital images, audio-video data, applications, scripts, promotions, advertisements, gamification elements, workflows, lesson plans, calibrations for kinesthetic or tactile inputs, etc. from the XR content database. Each of the XR content objects will preferably be related to one or more of the derived views of interest. The XR content objects could be selected based on one or more of the following: a search query, an assignment of content objects to a view of interest or an object of interest within the view, one or more characteristics of the initial map, a context of an intended use of a user, or a recommendation, selection or request of a user.
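
By way of a hedged illustration, the sketch below shows one possible representation of a view of interest and a simple descriptor-overlap rule for selecting related XR content objects; the class, the matching rule and the dictionary keys are assumptions introduced here for clarity.

```python
# Hypothetical view-of-interest record and a descriptor-based selection rule.
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class ViewOfInterest:
    origin: Tuple[float, float, float]      # view-point origin
    direction: Tuple[float, float, float]   # gaze direction vector
    descriptors: Set[str] = field(default_factory=set)

def select_content_for_view(view: ViewOfInterest,
                            content_objects: List[Dict]) -> List[Dict]:
    """Return content objects whose descriptor sets overlap the view's descriptors."""
    return [c for c in content_objects
            if view.descriptors & set(c.get("descriptors", []))]

gate_view = ViewOfInterest((10.0, 4.0, 1.7), (0.0, 1.0, 0.0), {"gate", "boarding"})
matches = select_content_for_view(gate_view, [
    {"id": "ad-1", "descriptors": ["boarding", "lounge"]},
    {"id": "ad-2", "descriptors": ["baggage"]},
])
```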

In an exemplary embodiment of the present invention, the XR rendering engine is configured to establish XR experience clusters within the initial map as a function of the XR content objects obtained and the views of interest derived, each cluster representing a subset of the derived views of interest and the associated XR content objects. Based on the XR experience clusters or information related thereto, the XR rendering engine could generate maps such as tile maps comprising tessellated tiles that cover at least a portion of the area of interest. Some or all of the tiles could be associated with one or more of an identification, an owner, an object of interest, a set of descriptors, an advertiser, a cost, or a time. Additionally, the tiles could be dynamic in nature based on events which include, among other things, a sale, a news event, a publication, a change in inventory, a disaster, a change in advertiser, or any other suitable event. Additionally, a view-point origin, a field of interest, a view or an object of interest could also be dynamic in nature.
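
A minimal sketch of tile-map generation over a rectangular area of interest is shown below; the square tiling, the 5 m tile size and the per-tile attributes are assumptions chosen for illustration.

```python
# Tessellate a rectangular area of interest into square tiles with attached attributes.
def generate_tile_map(width_m, height_m, tile_size_m=5.0):
    tiles = []
    y = 0.0
    while y < height_m:
        x = 0.0
        while x < width_m:
            tiles.append({
                "bounds": (x, y, min(x + tile_size_m, width_m),
                           min(y + tile_size_m, height_m)),
                "owner": None,               # could hold advertiser, cost, time, etc.
                "objects_of_interest": [],
            })
            x += tile_size_m
        y += tile_size_m
    return tiles

# Example: a 60 m x 40 m terminal section tiled at 5 m resolution.
tile_map = generate_tile_map(60, 40)
```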

Fig. 1 is a schematic of an exemplary system of the interactive learning management and training system of the present invention. The system 100 comprises a map generation engine 101 capable of capturing, generating, or otherwise obtaining area data related to an area of interest. By way of example, such area data could comprise image data (such as still image data, real-time image data, etc.), video data, signal data (such as CSS data, RSS data, WiFi signal data, beacon signal data, etc.), and/or initial maps that could be transmitted to and stored in area database 102, communicably coupled via a network. The XR rendering engine 103, coupled with area database 102, can be configured to obtain an initial map related to an area of interest from area database 102, or could be configured to obtain other area data and generate the initial map based on the obtained data.

A non-limiting definition of an area of interest is a real-world space, area or setting selected within which the processes and functions of the present invention will be carried out. Such an area of interest can be an a priori, user-defined area or an ad-hoc area generated by the system. For a priori defined areas, an area of interest can correspond to existing, predefined boundaries that can be physical (e.g., the physical or structural boundaries inside an airport), non-physical (e.g., a geographical or territorial boundary of an airport) or a combination of both (e.g., a section of a room inside an airport building defined by some of the walls in the room together with a user-defined boundary, such as a ticket counter or luggage check-in counter bisecting the room). By inference, it is understood that areas of interest can be large or small.

In an embodiment of the present invention, a user can set an area of interest by selecting a pre-existing area from a map, blueprint, designated portion, etc. For example, selecting a designated landmark as an area of interest would incorporate the boundaries of the landmark as denoted on a map. Likewise, selecting a floor plan inside an aircraft as an area of interest would include the floor plan as denoted in the official floor plan or blueprints for the aircraft. The user can also set an area of interest by manually setting and/or adjusting the desired boundaries of the area of interest on a graphical user interface. In one example, the user can select a point or coordinate on a rendered digital map and extend the area of interest radially outward from the point. In another example, the user could denote the area of interest on a map, blueprint, floor plan, etc., by manually drawing the line segments corresponding to the boundary or as a bounding box. A user can access map generation engine 101 via a user interface that allows the user to manually generate and/or adjust the area of interest via the graphical user interface. Suitable user interfaces include computing devices (such as smartphones, tablets, desktop computers, servers, laptop computers, gaming consoles, etc.) connected to input devices such as a mouse or stylus and output devices such as screens, audio output and sensory feedback devices, and communicably coupled to the map generation engine 101 and other system components. Such contemplated areas of interest include all suitable interior and outdoor settings rendered to scale in the immersive environment to provide a real-world feel to the user.
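
The two manual selection modes described above (extending radially from a point, or drawing a bounding box) could be sketched as follows; the shapely geometry library and the example coordinates are assumptions, not part of the specification.

```python
# Two ways of constructing a user-defined area of interest.
from shapely.geometry import Point, box

def area_from_point(x, y, radius_m):
    """Area of interest grown radially outward from a selected point."""
    return Point(x, y).buffer(radius_m)

def area_from_bounding_box(x_min, y_min, x_max, y_max):
    """Area of interest drawn as a bounding box on a map or floor plan."""
    return box(x_min, y_min, x_max, y_max)

counter_zone = area_from_bounding_box(10, 5, 30, 12)   # e.g. a check-in counter strip
gate_zone = area_from_point(50, 20, 15)                # 15 m radius around a gate
```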

In an embodiment of the present invention, the map generation engine 101 of system 100 can generate an ad-hoc area of interest based on a number of devices detected in a particular area at a particular time. To do so, the map generation engine 101 can receive position data corresponding to a plurality of user devices and determine that a threshold number of devices are within a certain distance of one another and/or within or passing through a monitored space or point within a designated area (e.g., multiple users clustered at a point in an airport terminal hallway). Satisfying the threshold leads the map generation engine 101 to generate the area of interest such that the area encompasses the cluster and, optionally, an additional distance from the cluster. In an exemplary embodiment, the ad-hoc area of interest can become an a priori area of interest based on modifications to the real-world area or structure, modifications to user clusters, etc. A person with ordinary skill in the art will appreciate that the area of interest can be considered a digital model or object of the area of interest in a form processable by the disclosed computing devices.
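
The ad-hoc trigger could be sketched along the following lines, assuming planar (x, y) device positions; the 10 m radius, the threshold of five devices and the margin are illustrative parameter choices only.

```python
# Detect a device cluster and grow an ad-hoc area of interest around it.
from math import dist

def detect_cluster(positions, radius_m=10.0, threshold=5):
    """Return the members of the first cluster meeting the threshold, or None."""
    for anchor in positions:
        members = [p for p in positions if dist(anchor, p) <= radius_m]
        if len(members) >= threshold:
            return members
    return None

def adhoc_area_of_interest(cluster, margin_m=5.0):
    """Axis-aligned box around the cluster, extended by an optional margin."""
    xs, ys = zip(*cluster)
    return (min(xs) - margin_m, min(ys) - margin_m,
            max(xs) + margin_m, max(ys) + margin_m)
```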

In an embodiment of the present invention, the map generation engine is able to closely mimic the real-world environment in an immersive space. For instance, when the user perceives a virtual wall, that wall is generally matched by the immersive reality environment to where an actual physical wall or another physical obstacle exists in the physical environment, so that the user is discouraged and prevented from hitting the wall of the physical environment within which the user navigates. This is particularly important in wide-area environments, specifically an airport. In other disclosed embodiments a wall corner may be displayed where an obstacle resides, for example a table, chair or counter. The present invention applies exterior boundary constraints and interior obstacle barriers to prevent the user from navigating into an obstacle. To do that, the map generation engine is capable of: receiving virtual image data from the area database associated with a virtual map of a virtual scene; receiving physical image data from the media capturing unit, the physical image data associated with the wide area airport environment; determining perimeter information of the wide area airport environment; determining an orientation of the image data based on the data received from motion sensors; determining boundary information of an obstacle associated with the wide area airport environment; and using the perimeter information and boundary information to generate a real scene map associated with the physical environment including an obstacle and one or more free space areas.
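
One way to picture the resulting real scene map is as an occupancy grid in which the perimeter bounds the walkable area and obstacle boundaries are marked as blocked; the sketch below uses that interpretation, with the grid resolution and input formats assumed for illustration.

```python
# Build a simple occupancy-grid real scene map from perimeter and obstacle boxes.
import numpy as np

def build_real_scene_map(perimeter_m, obstacles, cell_m=0.5):
    """perimeter_m: (width, height); obstacles: list of (x0, y0, x1, y1) boxes."""
    w, h = perimeter_m
    grid = np.zeros((int(h / cell_m), int(w / cell_m)), dtype=np.uint8)  # 0 = free space
    for x0, y0, x1, y1 in obstacles:
        grid[int(y0 / cell_m):int(y1 / cell_m),
             int(x0 / cell_m):int(x1 / cell_m)] = 1                      # 1 = obstacle
    return grid

scene = build_real_scene_map((40, 25), [(5, 5, 8, 7), (20, 10, 24, 12)])
```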

In yet another embodiment of the present invention, the map generation engine 101 is capable of performing a stratified sampling of delineated sample points of the virtual map. The map generation engine 101 of the present invention may further add optimization constraints to the delineated sample points of the virtual map as folded into the real scene map. The stratified sampling may include a stratum comprising 0.025% of the pixels of the virtual map. In yet another embodiment, the stratified sampling may further include a set of stratified samples in which the samples are located at a distance of at least 5× an average stratified sampling distance from one another.
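
The sampling step could be sketched as below, assuming the virtual map is an H x W pixel array: each stratum draws roughly 0.025% of the pixels and candidates closer than 5x the average sampling distance to an accepted sample are rejected. Because that distance constraint may not be jointly satisfiable for dense strata, the sketch bounds the number of attempts; this rejection loop is an illustrative interpretation, not the specification's own procedure.

```python
# Stratified sampling of virtual-map pixels with a minimum-distance rejection rule.
import numpy as np

def stratified_samples(h, w, stratum_fraction=0.00025,
                       max_attempts_per_sample=200, rng=None):
    rng = rng or np.random.default_rng()
    n = max(1, int(h * w * stratum_fraction))        # ~0.025% of the pixels
    avg_spacing = (h * w / n) ** 0.5                 # expected spacing between n samples
    min_dist = 5 * avg_spacing                       # 5x the average sampling distance
    accepted, attempts = [], 0
    while len(accepted) < n and attempts < n * max_attempts_per_sample:
        attempts += 1
        cand = (rng.uniform(0, h), rng.uniform(0, w))
        if all(np.hypot(cand[0] - a[0], cand[1] - a[1]) >= min_dist for a in accepted):
            accepted.append(cand)
    return accepted
```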

In an exemplary embodiment, the interactive learning management and training system 100 comprises adequate controls for controlling at least one virtual simulated element from an interactive simulation executed on the interactive simulation system. The interactive learning management and training system 100 may typically comprise multiple simulation stations that each allow one or more users to interact to control a virtual simulated element in one of the interactive simulation(s) of the interactive simulation system 100. The interactive learning management and training system 100 also comprises monitoring stations for allowing various management tasks (not shown) to be performed in the interactive learning management and training system 100. The tasks associated with the monitoring station allow for control and/or monitoring of one or more ongoing interactive computer simulations. Persons having ordinary skill in the art will appreciate that the monitoring station may be used for allowing an instructor to participate in the interactive simulation and possibly additional interactive simulation(s).

A person having ordinary skill in the art will appreciate that within a wide area where there is a vast amount of area data available, the XR rendering engine 103 could be configured to obtain or otherwise utilize area data comprising different modalities and different views of every portion of the area of interest. This data could be used to obtain an initial map having increased accuracy, and to generate a tile map having increased real-time accuracy and instantiate XR content objects at the precise location and positioning of the user device.

In an embodiment of the present invention, XR content objects, contemplated in the likes of a virtual object, chroma key content, digital image, digital video, audio data, application, script, promotion, advertisement, game, workflow, kinesthetic or tactile calibration, lesson plan, etc., can be data objects including content that is to be presented via a suitable user-held device (e.g., smartphone, XR goggles, head-mounted devices, Oculus, smart glasses, tablet, etc.) to generate an augmented-reality or mixed-reality environment. This can involve overlaying the content on real-world imagery (preferably in real-time) via the computing device, such that the user of the computing device sees a combination of the real-world imagery with the XR content seamlessly. In another embodiment of the present invention, XR content objects can include multi-dimensional gamification elements, graphic sprites and animations, and flash objects, and can range from an HTML window and anything contained therein to 2D/3D sprites rendered either in scripted animation or for an interactive game experience. A person with ordinary skill in the art would appreciate that sprites in a rendered format interact with the physical elements of the space whose geometry has been reconstructed either in advance, or in real-time in the background of the XR experience. In an embodiment of the present invention, XR content objects could be instantiated based on object recognition and motion estimation within an area of interest or movement to or from areas of interest, for example through control of a camera and sensors such as a gyroscope or accelerometer.

An initial map can comprise a CAD drawing, a digital blueprint, a three-dimensional digital model, a two-dimensional digital model or any other suitable digital representation of a layout of an area of interest. In some embodiments, such initial map could comprise a digital or virtual construct in memory that is generated by the map generation engine 101 of system 100, by combining some or all of the image data, video data, signal data, orientation data, existing map data (e.g., a directory map of a shopping center already operating, etc.) and other data.

In an embodiment of the present invention, the system 100 can also comprise an object generation engine 104, which could obtain a plurality of content objects (shown in the figure, for representation, as image content objects, video content objects, audio content objects, etc.) from one or more user devices, and transmit the objects to XR content database 105. For example, audio-visual cues with respect to an airport operation could be obtained, including virtual billboards from advertisers wishing to advertise a good or service to users transiting the airport. In addition to image and audio-visual cues, other ancillary information such as advertiser preferences, costs, fees, priority or any other suitable information may also be displayed to the user.

In an embodiment of the present invention, upon deriving the views of interest, the XR rendering engine 103 could obtain a set of XR content objects from the XR content database 105, for example: based on a search query of database 105 for XR content objects that are associated with one or more descriptors retrieved from a descriptor database 106, the descriptors in turn being associated with one or more views of interest; based on a characteristic of the initial map, taking into account dimensions, layout or an indication of the type of area; based on customizable user selection; upon recommendation or request, for example by an advertiser or merchant; or based on a context of an intended use of a user derived from user-sought activities.

In an embodiment of the present invention, and as a function of at least one of the XR content objects and the set of views of interest, the XR rendering engine 103 establishes XR experience clusters within an initial map or, in other instances, within a new map. Such XR experience clusters can be established to include one or more point-of-view origins from which objects of interest could be viewed, based on a density of XR content objects associated with the point-of-view origins of the various views of interest. In an exemplary embodiment of the present invention, viewed from another perspective, each experience cluster can include point-of-view origins such that the point-of-view origin(s) in each cluster correspond to a substantially equal percentage (e.g., factoring in standard deviations from each of the other clusters) of the total XR content objects, including video content objects, image content objects and audio content objects. Based on the established XR experience clusters, the XR rendering engine is able to generate an area tile map of the area of interest. The tile map could comprise a plurality of tessellated tiles covering the area of interest or portion(s) thereof, as would be appreciated by a person having ordinary skill in the art.
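
A simple greedy balancing of views into clusters with roughly equal content-object shares is sketched below; this strategy and the example counts are assumptions used only to illustrate the "substantially equal percentage" criterion.

```python
# Greedily assign views of interest to clusters so content-object counts stay balanced.
def balance_clusters(views, n_clusters=3):
    """views: list of (view_id, content_count). Returns a list of cluster dicts."""
    clusters = [{"views": [], "total": 0} for _ in range(n_clusters)]
    for view_id, count in sorted(views, key=lambda v: -v[1]):
        target = min(clusters, key=lambda c: c["total"])   # least-loaded cluster so far
        target["views"].append(view_id)
        target["total"] += count
    return clusters

clusters = balance_clusters([("gate_a", 12), ("lounge", 9), ("checkin", 7),
                             ("security", 5), ("baggage", 4)])
```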

In an exemplary embodiment of the present invention, in respect of audio data, the XR rendering engine 103 can employ audio recognition and analysis techniques to identify the acoustic characteristics of the environment, the locations of sources of sound such as speakers, audio-output devices or environmental noise, as well as to identify audio such as music, announcements or the source of the auditory input. Sensor data can include temperature sensor data, air pressure sensor data, light sensor data, location-sensor data (e.g., GPS or other location- or position-determination system data) and other data such as data from gyroscopes, accelerometers, etc. The XR rendering engine 103 is able to identify basic characteristics within the environment such as temperature, air flow, lighting and smell, as well as other environmental characteristics for various locations within the area of interest. Signal data can correspond to data pertaining to signals from routers, cellular transmitters, computing devices, broadcasts (such as OTA or radio broadcasts), near-field communication devices, or other emitters of wireless data carrier signals. One having ordinary skill in the art will appreciate that the signal data itself can include information such as identification of the emitting device, identification of standard(s)/protocol(s), network location information, physical location information of the emitter, etc. The recognized objects and characteristics of the environment can be associated with particular locations within the area of interest by correlating the area data with the initial map based on location information such as GPS or other location information, including that associated with image data.

In yet another embodiment of the present invention, the XR rendering engine 103 generates a progressive representation to the wearable communication device, the representation associated with virtual scene pixels of the virtual map corresponding to real scene points of the real scene map in a specific time interval. For example, if a user mounted with the wearable communication device is walking or navigating, typically by walking, the disclosed system, in an embodiment, tracks the user's physical position generally as (x, y) coordinate pairs and collects up to an nth value of maximum (x, y) coordinate data points for a given predetermined time interval t. The system and method then projects the user's physical position onto the virtual space.
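
A minimal sketch of this tracking-and-projection step appears below, assuming a fixed-size buffer of (x, y) positions per interval and a simple scale-and-offset mapping from physical coordinates to virtual scene pixels; the class name and parameters are hypothetical.

```python
# Buffer up to n physical positions and project them into virtual-map pixel coordinates.
from collections import deque

class ProgressiveTracker:
    def __init__(self, n_max=100, scale=(10.0, 10.0), offset=(0.0, 0.0)):
        self.buffer = deque(maxlen=n_max)     # keeps at most n_max (x, y) points
        self.scale, self.offset = scale, offset

    def record(self, x, y):
        self.buffer.append((x, y))

    def project_to_virtual(self):
        """Map buffered physical points to virtual scene pixel coordinates."""
        sx, sy = self.scale
        ox, oy = self.offset
        return [(px * sx + ox, py * sy + oy) for px, py in self.buffer]
```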

Fig. 2 is a high-level block diagram illustrating a functional view of an immersive audio-visual module 104 coupled to a map generation engine 101, according to one embodiment of the invention. The illustrated embodiment of the immersive audio-visual module is communicatively coupled with the area database and the XR content database and may also be communicatively coupled via a network to device 107. The environment in Fig. 2 is used only by way of example.

Turning now to the individual entities illustrated in Fig. 2, the device 107 is used by a user to interact with the immersive audio-visual module 104. In one embodiment, the device 107 is a handheld device that captures media through a media capturing unit 108 and displays multiple views of an immersive audio-visual recording from the immersive audio-visual module 104. In other embodiments, the device 107 is a wearable device, including but not limited to a mobile telephone, personal digital assistant, or other wearable or mountable smart electronic device that has computing resources for remote live previewing of an immersive audio-visual recording.

The immersive audio-visual module 120 creates an enhanced interactive and immersive audio-visual environment where users can enjoy a true interactive, immersive audio-visual mixed reality experience in a variety of applications. In the illustrated embodiment, the audio-visual module 120 comprises an immersive video system 201, an immersive audio system 202, an interaction manager 203 and an audio-visual production system 204. The immersive video system 201, the immersive audio system 202 and the interaction manager 203 are communicatively coupled with the audio-visual production system 204. The immersive audio-visual module 120 also comprises a motion tracking module 207 for tracking and calibrating movements of the users in the immersive environment. The immersive audio-visual module 120 in Fig. 2 is used exemplarily and may include other subsystems and/or functional modules.

In an exemplary embodiment of the present invention, the motion tracking module 207 is further configured to track at least one of a group of movement of objects in the immersive video scenes, movement of one or more users, and movement of the immersion tools. In another exemplary embodiment, the motion tracking module is further configured to track the movements of the participant's arms and hands. In yet another exemplary embodiment, the motion tracking module is further configured to track the facial expressions as well as movement of retina or pupil of the participant.

The immersive video system 201 creates immersive stereoscopic videos that mix live videos, computer generated graphic images and interactions between a user and recorded video scenes. The immersive videos created by the video system 201 are further processed by the audio-visual production system 204.

The immersive audio system 202 creates immersive sounds with sound resources positioned correctly relative to the position of a user. The immersive sounds created by the audio system 202 are further processed by the audio-visual system 204.

The interaction manager 203 typically monitors the interactions between a user and the created immersive audio-video scenes in one embodiment. In another embodiment, the interaction manager 203 creates interaction commands for further processing of the immersive sounds and videos by the audio-visual production system 204. In yet another embodiment, the interaction manager 203 processes service requests from the device 107 and determines the types of applications, for example specific to the geo-location request, and their respective simulation environment for the audio-visual production system 204.

The audio-visual production system 204 receives the immersive videos from the immersive video system 201, the immersive sounds from the immersive audio system 202 and the interaction commands from the interaction manager 203, and produces an enhanced immersive audio and video environment with which users can enjoy a true interactive, immersive audio-visual mixed reality experience in a variety of applications. The audio-visual production system 204 includes a video scene texture map module 2041, a sound texture map module 2042, an audio-visual production engine 2043 and an application engine 2044. The video scene texture map module 2041 creates a video texture map where video objects in an immersive video scene are represented with better resolution and quality than, for example, typical CGI or CGV of faces, areas, etc. The sound texture map module 2042 accurately calculates sound location in an immersive sound recording. The audio-visual production engine 2043 reconciles the immersive videos and audio to accurately match the video and audio sources in the recorded audio-visual scenes. The application engine 2044 enables post-production viewing and editing with respect to the type of application and other factors for a variety of applications; in an exemplary embodiment such applications could include areas for security checking, baggage checking, amenities, and waiting areas such as lounges, and creating applications thereof such as online intelligent gaming, training simulations, cultural-awareness training, or interactive gaming.

The map generation engine 101 obtains dynamic data related to the virtual element being simulated in an interactive simulation station comprising a processor module 2061 and a tangible instrument module 2062. The dynamic data captures actions performed by the user during the training activity on one or more tangible instruments of the tangible instrument module. Tangible instruments may comprise real-world instruments available within an airport area. The map generation engine 101 constructs a dataset corresponding to the plurality of performance metric values from the dynamic data having a target time step, by synchronizing dynamic data from at least two of the dynamic subsystems into the dataset considering the target time step, the at least two dynamic subsystems being associated with at least one common performance metric value from the plurality of performance metric values, and by inferring, for at least one missing dynamic subsystem of the plurality of dynamic subsystems missing from the dynamic data, a new set of data into the dataset from dynamic data associated with one or more co-related dynamic subsystems, the co-related dynamic subsystems and the at least one missing dynamic subsystem impacting at least one common performance metric value from the plurality of performance metric values. The processor module 2061 may optionally obtain dynamic data from a plurality of interactive computer simulation stations and construct the dataset having the target time step for the plurality of interactive simulation stations.
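
The dataset construction could be sketched as follows, assuming each dynamic subsystem provides a time-stamped value stream: streams are resampled onto a common target time step and a subsystem missing from the data is filled in from a co-related subsystem. The linear interpolation and the copy-from-correlated rule are illustrative assumptions.

```python
# Synchronize subsystem streams onto a target time step and infer missing subsystems.
import numpy as np

def resample(times, values, target_times):
    return np.interp(target_times, times, values)

def build_dataset(streams, target_times, correlated=None):
    """streams: {name: (times, values)}; correlated: {missing_name: source_name}."""
    dataset = {name: resample(t, v, target_times) for name, (t, v) in streams.items()}
    for missing, source in (correlated or {}).items():
        if missing not in dataset and source in dataset:
            dataset[missing] = dataset[source].copy()   # inferred from co-related subsystem
    return dataset
```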

The processor module 2061 obtains a plurality of performance metric datasets related to the virtual element being simulated, the plurality of performance metric datasets representing results of the interactions between the user and the tangible instrument module 2062 and, during execution of the interactive simulation, detects, in the plurality of performance metric datasets, a plurality of actual maneuvers of the virtual element during the training activity, identifies one or more standard operating procedures (SOPs) for the training activity from a plurality of the individually detected actual trajectories taken by the user in the virtual space, and provides, in real-time upon detection of the SOPs, information for display in the interactive computer simulation related to the SOPs.
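
A hypothetical sketch of matching a detected trajectory against standard operating procedure templates by mean point-to-point deviation is given below; the template representation and the deviation threshold are assumptions for illustration.

```python
# Match an actual trajectory against SOP templates by mean point-wise deviation.
import numpy as np

def identify_sop(trajectory, sop_templates, max_mean_deviation=2.0):
    """trajectory: (N, 2) array; sop_templates: {name: (M, 2) array}."""
    best_name, best_score = None, float("inf")
    for name, template in sop_templates.items():
        n = min(len(trajectory), len(template))
        score = np.mean(np.linalg.norm(trajectory[:n] - template[:n], axis=1))
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= max_mean_deviation else None
```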

Fig. 3 illustrates a schematic block diagram of an embodiment of a learning management system (LMS) 109. The LMS 109 is communicably coupled to the immersive audio-visual module 104 and the map generation engine 101 and may additionally comprise a user device 301 (e.g., a tablet device) having an LMS application operatively coupled, through a wired or wireless communication infrastructure network, to a database server system 302. The database server system 302 may be local or may be located in the cloud, in accordance with an optional embodiment.

A user training dataset 303 is generated and stored on the map generation engine 101 during a virtual training process and may be downloaded to the device, or may be copied to a computer disk or a computer flash drive from the map generation engine and transferred to the device. The performance datasets obtained from the map generation engine may be used by a learning management processor module 304 to view and analyze the user training data, to grade the user, and to generate reports (e.g., a user report card). Traceable reports may be generated that compare the performance of students to each other, compare the performance of classes to each other, or compare in any other manner, for instance the performance of this year's user cohort to last year's user cohort.
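
The grading and cohort-comparison step could be sketched as below; the weighted scoring out of 100 and the cohort mean comparison are assumed conventions, not the specification's own grading scheme.

```python
# Aggregate performance metrics into a grade and compare cohorts.
def grade_user(metrics, weights=None):
    """metrics: {name: value in [0, 1]}; returns a weighted score out of 100."""
    weights = weights or {name: 1.0 for name in metrics}
    total = sum(weights.values())
    return 100.0 * sum(metrics[n] * weights.get(n, 0.0) for n in metrics) / total

def compare_cohorts(cohort_a, cohort_b):
    """Each cohort is a list of per-user metric dicts; returns mean scores."""
    mean = lambda cohort: sum(grade_user(m) for m in cohort) / len(cohort)
    return {"cohort_a": mean(cohort_a), "cohort_b": mean(cohort_b)}
```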

The learning management processor module 304 is coupled to a performance module 305 that provides analysis tools which determine how users are performing and identify any skills that a user is having trouble mastering. In an optional embodiment, the performance module 305 summarizes user performance in the form of tables, charts, and graphs that are easily readable and understandable. An instructor can meet with an individual user and review the user's performance by viewing, on the display of the device, the various tables, charts, and graphs that summarize the user's performance.

Referring to Figs. 4 and 5, which disclose an illustrative example of an exemplary process of the interactive learning management and training method for assessing a training activity performed by a user in an interactive immersive environment representative of a real-time wide area airport environment, comprising the steps of:
- generating a progressive representation associated with virtual and physical reality image data using a wearable communication device (401);
- generating a corresponding virtual barrier in the real scene map, the virtual barrier associated with the boundary information of the obstacle in the real scene map (402);
- generating a folding of the virtual scene map into the one or more free space areas of the real scene map, the free space areas surrounding or adjacent the generated virtual barrier;
- performing a stratified sampling of delineated sample points of the virtual map (403);
- generating a progressive representation to the wearable communication device, the representation associated with virtual scene pixels of the virtual map corresponding to real scene points of the real scene map in a time interval (404); and
- obtaining and evaluating a plurality of performance metric datasets (405).

In another embodiment of the present invention, and referring to Fig. 5, the step of generating a progressive representation associated with virtual and physical reality image data using a wearable communication device (401) further comprises the steps of:
- receiving virtual image data from an area database associated with a virtual map of a virtual scene (501);
- receiving physical image data from the media capturing unit, the physical image data associated with the wide area airport environment (502);
- determining perimeter information of the wide area airport environment (503);
- determining an orientation of the image data based on the data received from motion sensors (504);
- determining boundary information of an obstacle associated with the wide area airport environment (505); and
- using the perimeter information and boundary information to generate a real scene map associated with the physical environment including an obstacle and one or more free space areas (506).

While the embodiments discussed herein have been related to the systems and methods discussed above, these embodiments are intended to be exemplary and are not intended to limit the applicability of these embodiments to only those discussions set forth herein. The control systems and methodologies discussed herein are equally applicable to, and can be utilized in, systems and methods related to arc welding, laser welding, brazing, soldering, plasma cutting, waterjet cutting, laser cutting, and any other systems or methods using similar control methodology, without departing from the spirit or scope of the above-discussed inventions. The embodiments and discussions herein can be readily incorporated into any of these systems and methodologies by those of skill in the art.

While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
CLAIMS:
1. An interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment representative of a real-time wide area airport environment, the interactive immersive environment simulating a virtual element, the training system comprising:
a wearable communication device having at least one media capturing unit, wherein the media capturing unit is capable of acquiring auditory and/or visual media in any specific format;
a map generation engine for generating a three dimensional (3D) map, comprising a tangible instrument module, wherein the map generation unit is capable of:
receiving virtual image data from an area database associated with a virtual map of a virtual scene,
receiving physical image data from the media capturing unit, the physical image data associated with the wide area airport environment,
determining perimeter information of the wide area airport environment,
determining an orientation of the image data based on the data received from motion sensors,
determining boundary information of an obstacle associated with the wide area airport environment, and
using the perimeter information and boundary information to generate a real scene map associated with the physical environment including an obstacle and one or more free space areas,
an extended reality (XR) rendering engine comprising an immersive audio-visual module configured to record one or more immersive audio-visual scenes, an immersive audio/video scene comprising one or more users and immersion tools, wherein the immersive audio-visual module is communicatively coupled to the tangible instrument module, the user interacting with the tangible instrument module for controlling the virtual element in the interactive immersive environment, and generating a progressive representation to the wearable communication device, the representation associated with virtual scene pixels of the virtual map corresponding to real scene points of the real scene map in a specific time interval; and
a learning management module that obtains and evaluates a plurality of performance metric datasets related to the virtual element being simulated in the interactive computer simulation station, the plurality of performance metric datasets representing results of the interactions between the user and the tangible instrument module.
2. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 1, wherein the XR rendering engine comprises:
an immersive audio-visual module configured to record one or more immersive audio-visual scenes, an immersive audio/video scene comprising one or more users and immersion tools;
a motion tracking module configured to track movement of the immersive video scenes;
an audio-visual production module configured to edit the recorded immersive video scenes; and
an application engine configured to create the interactive immersive simulation program based on the edited immersive video scenes.
3. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 2, wherein motion tracking of the immersive audio and video scenes comprises tracking of at least one of a group of movement of objects in the immersive video scenes, movement of one or more users, and movement of the immersion tools.
4. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 1, wherein the map generation module is capable of generating a corresponding virtual barrier in the real scene map, the virtual barrier associated with the boundary information of the obstacle in the real scene map, generating a folding of the virtual scene map into the one or more free space areas of the real scene map, the free space areas surrounding or adjacent the generated virtual barrier, and performing a stratified sampling of delineated sample points of the virtual map.
5. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 1, wherein the learning management module comprises:
a database server system configured to receive and store the user training data;
a learning management processor configured to perform one or more of:
downloading user training data from the interactive immersive rendering unit to a user device in at least one of a wired or wireless manner,
uploading the user training data, analysis results, and reports from the user device to the database server system via an external communication infrastructure, and
downloading the user training data from the database server system to the user device via an external communication infrastructure, and
a performance analysis module configured to analyze performance of the users.
6. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 4, which further comprises the device adding optimization constraints to the delineated sample points of the virtual map as folded in the real scene map.
7. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 5, wherein the performance analysis module is further configured to analyze the performance of the wearable communication device.
8. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 2, wherein the immersive audio-visual module comprises an immersive video module and an immersive audio module, wherein the immersive video module and the immersive audio module are further configured to extend one or more recording sets used in the recording of the immersive audio-visual scenes.
9. The interactive learning management and training system for assessing a training activity performed by a user in an interactive immersive environment as claimed in claim 2, wherein the XR rendering engine surjectively maps each virtual scene pixel of the virtual scene map to each real scene point of the real scene map.
10. An interactive learning management and training method for assessing a training activity performed by a user in an interactive immersive environment representative of a real-time wide area airport environment, comprising the steps of:
generating a progressive representation associated with virtual and physical reality image data using a wearable communication device, further comprising the steps of:
receiving virtual image data from an area database associated with a virtual map of a virtual scene,
receiving physical image data from the media capturing unit, the physical image data associated with the wide area airport environment,
determining perimeter information of the wide area airport environment,
determining an orientation of the image data based on the data received from motion sensors,
determining boundary information of an obstacle associated with the wide area airport environment,
using the perimeter information and boundary information to generate a real scene map associated with the physical environment including an obstacle and one or more free space areas;
generating a corresponding virtual barrier in the real scene map, the virtual barrier associated with the boundary information of the obstacle in the real scene map;
generating a folding of the virtual scene map into the one or more free space areas of the real scene map, the free space areas surrounding or adjacent the generated virtual barrier;
performing a stratified sampling of delineated sample points of the virtual map;
generating a progressive representation to the wearable communication device, the representation associated with virtual scene pixels of the virtual map corresponding to real scene points of the real scene map in a time interval; and
obtaining and evaluating a plurality of performance metric datasets.

Documents

Application Documents

# Name Date
1 202241007562-PROVISIONAL SPECIFICATION [12-02-2022(online)].pdf 2022-02-12
2 202241007562-FORM FOR SMALL ENTITY(FORM-28) [12-02-2022(online)].pdf 2022-02-12
3 202241007562-FORM 1 [12-02-2022(online)].pdf 2022-02-12
4 202241007562-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [12-02-2022(online)].pdf 2022-02-12
5 202241007562-EVIDENCE FOR REGISTRATION UNDER SSI [12-02-2022(online)].pdf 2022-02-12
6 202241007562-DRAWINGS [12-02-2022(online)].pdf 2022-02-12
7 202241007562-FORM FOR STARTUP [26-01-2023(online)].pdf 2023-01-26
8 202241007562-FORM FOR SMALL ENTITY [26-01-2023(online)].pdf 2023-01-26
9 202241007562-ENDORSEMENT BY INVENTORS [26-01-2023(online)].pdf 2023-01-26
10 202241007562-DRAWING [26-01-2023(online)].pdf 2023-01-26
11 202241007562-COMPLETE SPECIFICATION [26-01-2023(online)].pdf 2023-01-26
12 202241007562-STARTUP [04-02-2023(online)].pdf 2023-02-04
13 202241007562-FORM28 [04-02-2023(online)].pdf 2023-02-04
14 202241007562-FORM-9 [04-02-2023(online)].pdf 2023-02-04
15 202241007562-FORM 18A [04-02-2023(online)].pdf 2023-02-04
16 202241007562-FER.pdf 2023-03-21
17 202241007562-Proof of Right [20-09-2023(online)].pdf 2023-09-20
18 202241007562-OTHERS [20-09-2023(online)].pdf 2023-09-20
19 202241007562-FORM-26 [20-09-2023(online)].pdf 2023-09-20
20 202241007562-FER_SER_REPLY [20-09-2023(online)].pdf 2023-09-20
21 202241007562-ENDORSEMENT BY INVENTORS [20-09-2023(online)].pdf 2023-09-20
22 202241007562-DRAWING [20-09-2023(online)].pdf 2023-09-20
23 202241007562-CORRESPONDENCE [20-09-2023(online)].pdf 2023-09-20
24 202241007562-COMPLETE SPECIFICATION [20-09-2023(online)].pdf 2023-09-20
25 202241007562-CLAIMS [20-09-2023(online)].pdf 2023-09-20
26 202241007562-US(14)-HearingNotice-(HearingDate-12-02-2024).pdf 2024-01-23
27 202241007562-Correspondence to notify the Controller [08-02-2024(online)].pdf 2024-02-08
28 202241007562-FORM-26 [10-02-2024(online)].pdf 2024-02-10
29 202241007562-Written submissions and relevant documents [25-02-2024(online)].pdf 2024-02-25
30 202241007562-RELEVANT DOCUMENTS [25-02-2024(online)].pdf 2024-02-25
31 202241007562-RELEVANT DOCUMENTS [25-02-2024(online)]-1.pdf 2024-02-25
32 202241007562-Proof of Right [25-02-2024(online)].pdf 2024-02-25
33 202241007562-PETITION UNDER RULE 137 [25-02-2024(online)].pdf 2024-02-25
34 202241007562-PETITION UNDER RULE 137 [25-02-2024(online)]-1.pdf 2024-02-25
35 202241007562-PatentCertificate29-02-2024.pdf 2024-02-29
36 202241007562-IntimationOfGrant29-02-2024.pdf 2024-02-29

Search Strategy

1 SearchHistory-202241007562E_21-03-2023.pdf

ERegister / Renewals

3rd: 22 May 2024

From 12/02/2024 - To 12/02/2025

4th: 22 May 2024

From 12/02/2025 - To 12/02/2026

5th: 22 May 2024

From 12/02/2026 - To 12/02/2027

6th: 22 May 2024

From 12/02/2027 - To 12/02/2028