
“A System And A Method For Navigating A User Within A Premises”

Abstract: Systems and methods for navigating a user within a premises are disclosed. The system (101) receives a plurality of navigation requests from a plurality of user devices, each indicating a user’s intention to navigate to a desired location. The system (101) further receives travel data and one or more images, captured by the plurality of user devices, corresponding to one or more surrounding locations of the users. The system (101) generates a self-learning 3D map of the premises based on the received travel data and images. The system (101) then receives a new navigation request, from a new user device, indicating a user’s intention to navigate from the user’s current location to the user’s desired location. Finally, the system (101) determines a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map. FIG. 1


Patent Information

Application #
202021023582
Filing Date
05 June 2020
Publication Number
50/2021
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
ipo@knspartners.com
Parent Application

Applicants

ZENSAR TECHNOLOGIES LIMITED
Zensar Knowledge Park, Plot # 4, MIDC, Kharadi, Off Nagar Road, Pune-411014, Maharashtra, India

Inventors

1. Garvita Jain
Zensar Technologies Ltd., Zensar Knowledge Park, Plot#4, MIDC, Kharadi, Off Nagar Road, Pune – 411014, Maharashtra, India
2. Mukul Tiwari
Zensar Technologies Ltd., Zensar Knowledge Park, Plot#4, MIDC, Kharadi, Off Nagar Road, Pune – 411014, Maharashtra, India
3. Sumant Kulkarni
Zensar Technologies Ltd., Zensar Knowledge Park, Plot#4, MIDC, Kharadi, Off Nagar Road, Pune – 411014, Maharashtra, India

Specification

FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10; Rule 13]
“A SYSTEM AND A METHOD FOR NAVIGATING A USER WITHIN A
PREMISES”
ZENSAR TECHNOLOGIES LIMITED of Zensar Knowledge Park, Plot # 4, MIDC, Kharadi, Off Nagar Road, Pune-411014, Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD
The present disclosure relates in general to a navigation system. More particularly, but not exclusively, it relates to a method and a system for facilitating navigation of a user from the user’s current location to a desired location within a premises.
BACKGROUND
Navigation systems help users locate routes or paths from one location to another. A conventional navigation system uses the Global Positioning System (GPS) to locate routes or paths. A problem arises with conventional navigation systems when the user is inside a building or premises, where GPS signals are either weak or unavailable.
Consider a scenario in a large premises: when a user parks his/her vehicle in a parking lot of the premises and walks away from the parking lot, it becomes difficult for the user to remember where the vehicle was parked. This situation mainly arises when the parking lot is big, crowded with vehicles, and confusing for the user. Similarly, consider a scenario where a new user visits the premises without any knowledge of its layout; it then becomes difficult for the user to navigate within the premises while travelling a minimum distance. Even if the user is familiar with the premises, it is difficult to remember the exact location of each place of interest (for example, the location of a showroom in a shopping mall) and the best path to that place from the user’s current location.
The technical challenge faced here is tracking the user’s movement within the premises. In some techniques, maps corresponding to the premises are predefined and stored. However, such techniques cannot learn about new points or new locations within the premises because they rely on predefined models. Another challenge is identifying the best possible route for guiding the user to his/her desired location.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.

SUMMARY
In one non-limiting embodiment of the present disclosure, a method for guiding a user to navigate from a current location to a desired location within a premises is disclosed. The method comprises receiving, by a navigation system, a plurality of navigation requests from a plurality of user devices such that each of the plurality of navigation requests indicates a user’s intention to navigate to a desired location from a current location within the premises. The plurality of user devices is configured to capture a plurality of travel data corresponding to the plurality of navigation requests upon detecting the users’ movement within the premises. The method further comprises receiving, by the navigation system, one or more images, captured by the plurality of user devices, corresponding to one or more surrounding locations of the users while the users travel within the premises, such that each of the one or more images comprises at least one object’s image to be used for identifying the one or more surrounding locations. The method further comprises generating, by the navigation system, a self-learning three-dimensional (3D) map of the premises based on the plurality of travel data and the one or more images. Further, the method comprises receiving, by the navigation system, a new navigation request in real time, from a new user device, indicating the user’s intention to navigate from the user’s current location to the user’s desired location. Further, the method comprises determining, by the navigation system, a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map. Further, the desired location is one among a starting location of the user and the one or more surrounding locations.
In another non-limiting embodiment of the present disclosure, a system for guiding a user to navigate from a current location to a desired location within a premises is disclosed. The navigation system comprises a receiving unit that receives a plurality of navigation requests from a plurality of user devices such that each of the plurality of navigation requests indicates a user’s intention to navigate to a desired location from a current location within the premises. The plurality of user devices is configured to capture a plurality of travel data corresponding to the plurality of navigation requests upon detecting the users’ movement within the premises. The receiving unit further receives one or more images, captured by the plurality of user devices, corresponding to one or more surrounding locations of the users while the users travel within the premises, such that each of the one or more images comprises at least one object’s image to be used for identifying the one or more surrounding locations. The system further comprises a generating unit that generates a self-learning three-dimensional (3D) map of the premises based on the plurality of travel data and the one or more images. The receiving unit further receives a new navigation request in real time, from a new user device, indicating the user’s intention to navigate from the user’s current location to the user’s desired location. The system further comprises a determining unit that determines a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map. Further, the desired location is one among a starting location of the user and the one or more surrounding locations.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
Figure 1 shows an exemplary environment 100 for guiding a user to navigate within a premises, in accordance with some embodiments of the present disclosure;
Figure 2 shows a block diagram 200 illustrating a system for guiding a user to navigate within a premises, in accordance with some embodiments of the present disclosure;
Figure 3 depicts a flowchart 300 of a method for guiding a user to navigate within a premises, in accordance with some embodiments of the present disclosure; and

Figure 4 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
The terms “comprises”, “comprising”, “includes”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup, device, or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
Disclosed herein is a system and a method for guiding a user to navigate from a current location of the user to a desired location of the user within a premises. The premises may be, for example, a shopping mall, a stadium, a railway station, or an airport. Consider a scenario where a user visits a premises that has many floors and stores and in which GPS signals are either very weak or unavailable. The user parks his/her vehicle in the parking lot of the premises and starts navigating within the premises. As the premises is large, it becomes difficult for the user to reach a desired location/store within the premises by travelling the minimum possible distance, especially when the premises is totally unknown to the user or the user has never been to that premises before. After roaming within the premises, the user might forget the exact parking location of his/her vehicle. Even if the user remembers the exact parking location of the vehicle, another challenge is to reach the parking location in the minimum possible time by travelling the shortest possible distance. It may be understood by the skilled person that the desired location is not limited to the parking location; it may be any location within the premises that the user wishes to reach.
The present disclosure addresses this issue by enabling the user to navigate in a secure, convenient, and efficient way from one location to another within the premises. The present disclosure tracks the user’s movement as soon as the user starts travelling within the premises and stores the relevant travel information for processing, for example, GPS data, the distance travelled by the user, and altitude information (the number of floors travelled). Along with the travel information, the system may also process one or more images, captured by the user device, of different locations within the premises while the user travels. The system processes all this information collectively to generate a self-learning three-dimensional (3D) map, which in turn helps in providing the shortest path to users for guiding them from their current location to their desired location within the premises.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

Figure 1 shows an exemplary environment 100 for guiding a user 102 to navigate from a current location C to a desired location D within a premises in accordance with some embodiments of the present disclosure.
According to an embodiment, the premises may be a shopping mall, a college campus, a railway station, a bus stand, an airport, an IT park, or a manufacturing plant having various buildings, floors, showrooms/shops, and the like. Finding the shortest path from a current location of a user to another (desired) location is a tedious task for a person (user) who may not be familiar with that premises. Figure 1 shows a shopping mall as an example of the premises where the user 102 wants to roam around. However, it may be understood by a person skilled in the art that the present invention is not limited to the environment shown in figure 1 and may be implemented in various other environments/premises as well.
The premises may comprise various buildings, multiple floors, and multi-level parking lots, for example. Each floor/building may have various locations such as shops, restaurants, showrooms, and the like. A few locations are indicated in figure 1 as C, D, S1, S2, S3, S4, and S5, where C indicates a current location of a user 102 and D indicates a desired location that the user 102 wants to reach. It may be understood by the skilled person that the number of locations and their placement as shown in figure 1 is just an example, and hence the scope of the present disclosure is not limited by this arrangement. Further, it may also be understood by the skilled person that the curved lines between different locations shown in figure 1 are for illustration purposes only; the scope of the present disclosure is not limited by them, and the user 102 can travel in any manner, not necessarily along the curved lines. The user 102 is equipped with a user device 103 such as, but not limited to, a mobile phone, a tablet, or any other wearable device. The user device 103 is capable of communicating wirelessly with the navigation system 101. The user device 103 may be equipped with various components (not shown) such as, but not limited to, an altimeter, a gyroscope, a 3-axis accelerometer, and a barometer. Alternatively, these components may be provided to the user separately, such as, but not limited to, in the form of a wearable device other than the user device 103.
Now, figure 1 is explained in conjunction with figure 2, which shows a block diagram 200 illustrating the navigation system 101 for navigating the user 102 from his/her current location C to a desired location D in the premises, in accordance with some embodiments of the present disclosure. According to an embodiment of the present disclosure, the navigation system 101 may comprise an input/output (I/O) interface 202, a processor 204, a memory 206, and various units 210. The I/O interface 202 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, an input device, an output device, and the like. The I/O interface 202 may allow the navigation system 101 to interact with the user devices 103 directly or through other devices. The memory 206 is communicatively coupled to the processor 204. Further, the memory 206 comprises data 208 such as travel data 212, images 214, and a 3D map 216. Further, the units 210 comprise a receiving unit 218, a generating unit 220, a determining unit 222, and other units 224. The units 218-224 may be dedicated hardware units capable of performing various operations of the navigation system 101. However, according to other embodiments, the units 218-224 may be a processor, an application-specific integrated circuit (ASIC), or any circuitry capable of executing instructions stored in the memory 206 of the system 101.
In the example shown in figure 1, the user 102 having the user device 103 wishes to navigate from his/her current location C to a desired location D. As can be seen from figure 1, there are various surrounding locations S1-S5 via which various paths can be taken by the user 102 to reach the desired location D. The technical challenge is how the user device 103 would know which path is the optimal one for navigating to the desired location D. To address this technical challenge, the system first generates a self-learning 3D map of the premises and makes the 3D map learn at every instance when different users travel within the premises. Once the map is generated, the system 101 then provides the shortest path to the user 102 when he/she visits the premises. The generation of the self-learning 3D map and the recommendation of the shortest path are described below in more detail.
When a user visits the premises, he/she turns on an application on his/her user device 103 and sends a navigation request to the system 101. The navigation request may indicate the user’s intention to roam within the premises from a starting location to a desired location, or it may indicate the user’s intention to navigate from the user’s current location to the user’s desired location within the premises. The desired location may be any location within the premises, including the starting location of the user and one or more surrounding locations travelled along by the user within the premises. If the desired location is the starting point of the user, the navigation request indicates the user’s intention to return to the starting location from his/her current location within the premises. For example, the starting location may be the location in the parking lot where the user has parked his/her vehicle and to which the user now wants to return.
When the user roams around within the premises, the user follows a path. The user device 103 captures travel data 212 corresponding to various locations on the path using the components provided with the user device (i.e., the altimeter, the gyroscope, the 3-axis accelerometer, and the barometer) and transmits the captured travel data 212 to the system 101. The system 101 detects the user’s movement within the premises and receives the travel data 212 corresponding to the various locations from the user device 103. As discussed above, the various locations may include the user’s starting location, the user’s surrounding locations, and the user’s current location.
The travel data 212 may comprise GPS data (GPS coordinates) associated with a particular location (if available), distance data indicative of the distance travelled between the particular location and the user’s starting location, and altitude data associated with the particular location. The system 101 calculates the distance travelled by the user from the starting point to the particular location based on the step count from the starting point. According to an embodiment, the system 101 calculates the distance using stride length as follows:
Distance Travelled = Step Count × Stride Length
The step count is obtained from the user device or from an additional device such as a wearable tracking device. Further, the stride length is determined based on the height and sex of the user, which are obtained at the time of installation of the application. In this way, the system 101 determines the distance travelled by the user from the starting point to any particular location within the premises.
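By way of illustration only, the distance computation described above may be sketched as follows. The 0.415/0.413 height multipliers used for stride length are a commonly used pedometer heuristic and are an assumption of this sketch, not a requirement of the disclosure:

    def stride_length_m(height_m: float, sex: str) -> float:
        # Approximate stride length from height and sex. The 0.415 (male)
        # and 0.413 (female) multipliers are an assumed heuristic.
        factor = 0.415 if sex.lower() == "male" else 0.413
        return height_m * factor

    def distance_travelled_m(step_count: int, height_m: float, sex: str) -> float:
        # Distance Travelled = Step Count x Stride Length
        return step_count * stride_length_m(height_m, sex)

    # Example: a 1.75 m tall male user who has taken 1,200 steps has
    # travelled roughly 1200 x 0.726 = 871.5 m.
    print(distance_travelled_m(1200, 1.75, "male"))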
The system 101 calculates the altitude data of the particular location with respect to a reference location using the altimeter and the barometer. The reference location may be the starting location of the user or any other location. In an example, the elevation of each floor of a building may be 10 feet, or approximately 3 meters. Now, suppose the user started from a basement (considered the reference location) of the premises after parking his/her vehicle and climbed an elevation of 10 feet or 3 meters. The system 101 records this elevation as an elevation of one floor with respect to the reference location (i.e., the basement).
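A minimal sketch of this floor estimation, assuming a nominal per-floor elevation of 3 meters (about 10 feet) and altimeter readings taken against the same baseline:

    FLOOR_HEIGHT_M = 3.0  # assumed nominal elevation per floor (about 10 feet)

    def floors_above_reference(current_altitude_m: float,
                               reference_altitude_m: float) -> int:
        # Number of floors climbed relative to the reference location
        # (e.g. the basement where the vehicle was parked).
        return round((current_altitude_m - reference_altitude_m) / FLOOR_HEIGHT_M)

    # A user who parked at an altimeter reading of 12.0 m and now reads
    # 15.1 m is one floor above the parking level.
    print(floors_above_reference(15.1, 12.0))  # -> 1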
Further, the system 101 records the trail, or direction of movement, of the user captured by the user device 103 using the 3-axis accelerometer and the gyroscope. The 3-axis accelerometer and the gyroscope measure the acceleration as well as the tilting motion and the orientation of the user device 103. Thus, the trail recorded using the 3-axis accelerometer and the gyroscope is more accurate than one based on GPS.
If the particular location is the starting point, the system 101 instructs the user device 103 to reset the step count to zero and to initiate the 3-axis accelerometer to record the frequency, duration, intensity, and pattern of movement of the user. The system 101 further instructs the user device 103 to initiate the altimeter to determine altitude and the gyroscope to determine the direction of movement. In an embodiment, the system 101 may instruct the user device 103 to reset the step count to zero at various locations on the path to determine the distance travelled by the user between two consecutive locations.
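The trail recording can be pictured as simple pedestrian dead reckoning: each detected step advances the position estimate by one stride length along the heading reported by the gyroscope. The sketch below assumes per-step headings are already derived from the device sensors; it is one plausible realization, not the only one:

    import math

    def dead_reckon(start_xy, step_headings_deg, stride_m=0.7):
        # Integrate per-step headings into a 2D trail relative to the
        # starting/reference location (coordinates in meters).
        x, y = start_xy
        trail = [(x, y)]
        for heading_deg in step_headings_deg:
            rad = math.radians(heading_deg)
            x += stride_m * math.sin(rad)  # east component
            y += stride_m * math.cos(rad)  # north component
            trail.append((x, y))
        return trail

    # Ten steps heading north, then ten steps heading east, ends the
    # trail near (7.0, 7.0).
    print(dead_reckon((0.0, 0.0), [0.0] * 10 + [90.0] * 10)[-1])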
In addition to the travel data 212, the user device 103 also captures one or more images 214 of the various locations/surroundings and transmits the captured images 214 to the system 101. The system 101 stores the one or more images 214 in its memory 206 for further processing. These captured images 214 may comprise objects of interest that are used to identify one or more locations. The system 101 also trains itself to identify objects of interest in the captured images. For example, if the system 101 recognizes that a shape captured in an image is a “round M shape of yellow color with red background”, then it learns over time that the image indicates the location of a “McDonald’s”. The objects of interest may include, but are not limited to, text on pillars, shops, number plates of vehicles, sign boards, or any other identifiable object. These objects of interest are extracted from the captured images using Optical Character Recognition (OCR) and object detection techniques and are stored in the memory to form an object knowledgebase. In an embodiment, the images 214 captured by the user device 103 are panoramic images, i.e., wide photos with a wide field of view.
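As a sketch of how such an object knowledgebase might be populated, assuming OpenCV and the pytesseract OCR wrapper purely for illustration (the disclosure does not mandate any particular OCR or object-detection library, and the image path below is hypothetical):

    import cv2          # OpenCV, assumed available
    import pytesseract  # Tesseract OCR wrapper, assumed available

    object_knowledgebase = {}  # checkpoint id -> set of recognized text labels

    def add_checkpoint_objects(checkpoint_id: str, image_path: str) -> None:
        # Extract text (pillar markings, sign boards, number plates) from a
        # captured image and file it under the corresponding checkpoint.
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray)
        labels = {token for token in text.split() if len(token) > 2}
        object_knowledgebase.setdefault(checkpoint_id, set()).update(labels)

    add_checkpoint_objects("S3", "pillar_b2_level1.jpg")  # hypothetical image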
For the sake of simplicity, only one user is shown in figure 1; however, there may be a plurality of users with corresponding user devices 103 roaming in the premises. This plurality of user devices 103 captures the travel data 212 and the images 214 in a similar fashion as described above. Thus, the receiving unit 218 of the system 101 may receive a plurality of navigation requests from the plurality of user devices 103, which are configured to capture a plurality of travel data 212 corresponding to the plurality of navigation requests upon detecting the users’ movement within the premises. The receiving unit 218 may further receive one or more images 214, captured by the plurality of user devices 103, corresponding to one or more surrounding locations of the users while the users travel within the premises.
The system 101 stores the travel data 212 of a location and its associated images in the memory 206 and creates image-based checkpoints using the travel data 212 of the location and its associated images. In this way, the system 101 creates several checkpoints using the plurality of travel data 212 and the one or more images 214 received from the plurality of user devices 103. The generating unit 220 of the system 101 then generates a self-learning 3D map 216 of the premises using the plurality of travel data 212 and the one or more images 214. In other words, the generating unit 220 of the system 101 generates the self-learning 3D map 216 of the premises using the checkpoints. The self-learning 3D map 216 is stored in the memory 206.
For generating the self-learning 3D map 216, the generating unit 220 correlates (matches) the travel data 212 and the one or more images 214 corresponding to one navigation request with the subsequent travel data 212 and subsequent one or more images 214 corresponding to subsequent navigation requests. Here, correlating means that the system 101 compares one or more parameters, such as the GPS data, altitude data, reference locations, and captured images of one navigation request, with those of other navigation requests. Similarly, the system 101 may also compare the corresponding objects of interest of the two navigation requests. Based on the result of the correlation, the system 101 may update the distances between various locations on the 3D map 216. This correlation is performed each time a new navigation request is received from a user device 103. In this way, the system 101 dynamically updates the 3D map 216 upon receipt of subsequent navigation requests. Accordingly, the distances between various locations on the 3D map 216 may also be updated dynamically.
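One plausible realization of the self-learning map, assumed here only for illustration, is a weighted graph whose nodes are the image-based checkpoints and whose edge weights are the shortest leg distances observed so far; each new navigation request then simply relaxes the affected edges:

    from collections import defaultdict

    class SelfLearning3DMap:
        # Checkpoint graph; each edge weight is the shortest distance
        # observed so far for that leg (a modeling assumption).

        def __init__(self):
            self.edges = defaultdict(dict)  # node -> {neighbor: meters}

        def observe_leg(self, a: str, b: str, meters: float) -> None:
            # Fold one travelled leg from a user's travel data into the
            # map, keeping the best distance seen for the edge.
            best = min(meters, self.edges[a].get(b, float("inf")))
            self.edges[a][b] = best
            self.edges[b][a] = best

    premises_map = SelfLearning3DMap()
    premises_map.observe_leg("C", "S3", 8.0)
    premises_map.observe_leg("C", "S3", 7.0)  # a later user walked it more directly
    print(premises_map.edges["C"]["S3"])      # -> 7.0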
The receiving unit 218 of the system 101 then receives a navigation request from a user device indicating the user’s intention to navigate from the user’s current location to the user’s desired location. The desired location may be the starting location of the user, or it can be any location from the one or more surrounding locations. Further, the user device 103 can be a new user device of a user entering the premises, or it can be an existing user device of a user roaming inside the premises.
The determining unit 222 determines a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map 216. The path determined by the determining unit 222 is the shortest, optimized path.
This path is then provided to the user for navigating from the user’s current location to the user’s desired location. In an embodiment, the path is highlighted in the application installed on the user device, and the user starts navigating by following the highlighted path.
Example:
Consider the environment described in figure 1, where a user 102 with a user device 103 is standing at a location C within the premises. The location C is his/her current location. In an embodiment, the current location C might also be the starting location of the user 102. The 3D map depicted in figure 1 comprises 7 checkpoints/locations. Suppose the user wants to navigate to location D, i.e., the user’s desired location. There are 5 surrounding checkpoints/locations between C and D, and the distances between various locations are indicated in figure 1 (note that the distances indicated in figure 1 are not to scale). Suppose that 4 users visited the premises before the user 102 and navigated from C to D using the paths below (without using the 3D map):
P1 (User 1): C -> S1 -> S2 -> S4 -> S1 -> S2 -> D (distance= 65m)
P2 (User 2): C -> S1 -> S4 -> S5 -> D (distance= 50m)
P3 (User 3): C -> S3 -> S4 -> S2 -> D (distance= 40m)
P4 (User 4): C -> S1 -> S2 -> S4 -> D (distance= 37m)
It is clear from the above paths that the shortest path from C to D based on the previously travelled paths is the one travelled by User 4, i.e., P4 = 37 meters. However, after correlating the travel data 212 of user 3 with the travel data 212 of user 4, the system 101 determines that the optimized path between C and D is P5 = C -> S3 -> S4 -> D (distance = 27 meters). The system 101 may provide this path to the user 102 for navigating from C to D.
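To make the example concrete, the per-leg distances below are hypothetical values chosen only so that the path totals match P1-P5 above; with any consistent edge weights, a standard shortest-path search over the checkpoint graph (Dijkstra’s algorithm is used here as one option) recovers P5:

    import heapq

    # Hypothetical per-leg distances (meters), chosen to be consistent
    # with the path totals P1-P5 quoted in the example.
    graph = {
        "C":  {"S1": 9, "S3": 7},
        "S1": {"C": 9, "S2": 7, "S4": 18},
        "S2": {"S1": 7, "S4": 10, "D": 14},
        "S3": {"C": 7, "S4": 9},
        "S4": {"S1": 18, "S2": 10, "S3": 9, "S5": 11, "D": 11},
        "S5": {"S4": 11, "D": 12},
        "D":  {"S2": 14, "S4": 11, "S5": 12},
    }

    def shortest_path(graph, source, target):
        # Dijkstra's algorithm over the checkpoint graph.
        queue = [(0, source, [source])]
        visited = set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == target:
                return dist, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, meters in graph[node].items():
                if neighbor not in visited:
                    heapq.heappush(queue, (dist + meters, neighbor, path + [neighbor]))
        return float("inf"), []

    print(shortest_path(graph, "C", "D"))  # -> (27, ['C', 'S3', 'S4', 'D'])

Checking these hypothetical weights against the quoted totals: P1 = 9 + 7 + 10 + 18 + 7 + 14 = 65 m, P2 = 9 + 18 + 11 + 12 = 50 m, P3 = 7 + 9 + 10 + 14 = 40 m, P4 = 9 + 7 + 10 + 11 = 37 m, and P5 = 7 + 9 + 11 = 27 m.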
It may be understood that the shortest path between C and D keeps changing based on the subsequent travel data 212 received from subsequent users. For example, until user 3 the shortest path between C and D was C -> S1 -> S2 -> D of 30 meters, and after the arrival of user 4 the shortest path changed to P5 of 27 meters. Thus, as subsequent users arrive and the system 101 receives subsequent data, the system 101 dynamically updates the 3D map 216 and the shortest paths on it. Additionally, if there is any change in the location of a place (e.g., a showroom, store, or food stall) within the premises, the system 101 learns about the change from the travel data 212 received from the user devices 103 and updates the 3D map 216 accordingly. Thereafter, when the user 102 requests a path from his/her current location to that place (which has now moved to another location within the premises), the system 101 provides the user 102 with the shortest path from his/her current location C to the changed location of the place. For example, suppose there is a McDonald’s restaurant on the 1st floor of the premises and this restaurant is shifted to the 2nd floor of the same premises. The system 101 learns this from the travel data 212 and from the images captured by the user devices 103 after the shifting of the McDonald’s restaurant, and accordingly updates the 3D map 216 to capture the change. In an embodiment, the system 101 may also remove the old location of the McDonald’s restaurant from the 3D map 216.
A problem might arise if, while navigating along the path, the user gets confused and wants to verify whether he/she is following the correct path. The system 101 disclosed herein resolves this problem in the manner described below.
The receiving unit 218 may receive a verification request from the user. Responsive to the verification request, the system 101 prompts the user to capture one or more images of the surrounding locations where he/she is currently present. Alternatively, the user can directly transmit one or more images of his/her current location along with the verification request. The system 101 receives the one or more images and compares the received images with the stored images 214 to verify the correctness of the path followed by the user while navigating from the user’s current location to the user’s desired location. Particularly, the system 101 identifies objects of interest in the received one or more images and compares the identified objects with the object knowledgebase created while inserting checkpoints. If a match occurs, it signifies that the user is near one of the checkpoints, and the corresponding optimal path from that checkpoint to the user’s desired location is highlighted in the application. In this way, the system 101 verifies the correctness of the path followed while navigating.
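A sketch of this verification step, continuing the hypothetical object_knowledgebase structure from the earlier sketch (each checkpoint id mapped to a set of extracted text labels):

    def verify_position(seen_labels, object_knowledgebase, min_overlap=2):
        # Return the checkpoint whose stored objects best match the labels
        # extracted from the user's verification image, or None if nothing
        # matches well enough.
        best_checkpoint, best_overlap = None, 0
        for checkpoint, labels in object_knowledgebase.items():
            overlap = len(labels & set(seen_labels))
            if overlap > best_overlap:
                best_checkpoint, best_overlap = checkpoint, overlap
        return best_checkpoint if best_overlap >= min_overlap else None

    # If the verification image yields labels stored for checkpoint "S3",
    # the user is near S3 and the optimal path from S3 onward can be
    # re-highlighted in the application.
    kb = {"S3": {"EXIT", "PILLAR", "B2"}, "S4": {"FOODCOURT"}}
    print(verify_position({"EXIT", "B2"}, kb))  # -> "S3"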
Another problem might arise when the user loses the path in the forward direction (i.e., while navigating from the current location to the desired location) and then starts retracing. The system 101 provides the user with the shortest, optimized path for retracing to his/her current location. In an alternative embodiment, the system 101 may show a pointer on the self-learning 3D map, and the pointer indicates a direction linking the user to the correct path towards his/her desired location.
Thus, the system 101 provides an easier, efficient, and secure technique for navigating the user within the premises.
Figure 3 depicts a flowchart illustrating a method for guiding a user to navigate from a current location to a desired location within a premises in accordance with some embodiments of the present disclosure.
As illustrated in figure 3, the method 300 includes one or more blocks illustrating a method to navigate a user within the premises. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 302, the receiving unit 218 receives a plurality of navigation requests from a plurality of user devices. Each of the plurality of navigation requests indicates a user’s intention to navigate to a desired location from a current location within the premises, and the plurality of user devices is configured to capture a plurality of travel data corresponding to the plurality of navigation requests upon detecting the users’ movement within the premises.
At block 304, the receiving unit 218 further receives one or more images 214, captured by the plurality of user devices 103, corresponding to one or more surrounding locations of the users while the users travel within the premises. As discussed above, each of the one or more images comprises at least one object’s image to be used for identifying the one or more surrounding locations.
At block 306, the generating unit 220 generates a self-learning three-dimensional (3D) map 216 of the premises based on the plurality of travel data 212 and the one or more images 214.
At block 308, the receiving unit 218 may receive a new navigation request in real time, from a new user device, indicating the user’s intention to navigate from the user’s current location to the user’s desired location.
At block 310, the determining unit 222 may determine a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map 216. The desired location may be one among a starting location of the user and the one or more surrounding locations.
Computer System
Figure 4 illustrates a block diagram of an exemplary computer system 400 for implementing embodiments consistent with the present invention. In an embodiment, the computer system 400 can be the system 101, which is used for guiding a user to navigate from a current location to a desired location within a premises. According to an embodiment, the computer system 400 may receive a navigation request 410, which may include, for example, travel data and one or more images captured by a plurality of user devices. The computer system 400 may comprise a central processing unit (“CPU” or “processor”) 402. The processor 402 may comprise at least one data processor for executing program components for executing user- or system-generated business processes. The processor 402 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 402 may be disposed in communication with one or more input/output (I/O) devices (411 and 412) via I/O interface 401. The I/O interface 401 may employ communication protocols/methods such as, without limitation, audio, analog, digital, stereo, IEEE-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile Communications (GSM), Long-Term Evolution (LTE) or the like), etc.
Using the I/O interface 401, the computer system 400 may communicate with one or more I/O devices (411 and 412).
In some embodiments, the processor 402 may be disposed in communication with a communication network 409 via a network interface 403. The network interface 403 may communicate with the communication network 409. The network interface 403 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 409 can be implemented as one of the different types of networks, such as intranet or Local Area Network (LAN) and such within the organization. The communication network 409 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 409 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 402 may be disposed in communication with a memory 405 (e.g., RAM 413, ROM 414, etc. as shown in FIG. 4) via a storage interface 404. The storage interface 404 may connect to memory 405 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The memory 405 may store a collection of program or database components, including, without limitation, user/application data 406, an operating system 407, web browser 408 etc. In some embodiments, the computer system 400 may store user/application data 406, such as the data, variables, records, etc. as described in this invention. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase.
The operating system 407 may facilitate resource management and operation of the computer system 400. Examples of operating systems include, without limitation, Apple Macintosh OS X, UNIX, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), International Business Machines (IBM) OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry Operating System (OS), or the like. The I/O interface 401 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, the I/O interface may provide computer interaction interface elements on a display system operatively connected to the computer system 400, such as cursors, icons, check boxes, menus, windows, widgets, etc. Graphical User Interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh operating systems’ Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g., ActiveX, Java, JavaScript, AJAX, HTML, Adobe Flash, etc.), or the like.
In some embodiments, the computer system 400 may implement a web browser 408 stored program component. The web browser 408 may be a hypertext viewing application, such as Microsoft™ Internet Explorer, Google™ Chrome, Mozilla™ Firefox, Apple™ Safari™, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browsers may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system 400 may implement a mail server stored program component. The mail server 416 may be an Internet mail server such as Microsoft Exchange, or the like. The mail server 416 may utilize facilities such as Active Server Pages (ASP), ActiveX, American National Standards Institute (ANSI) C++/C#, Microsoft .NET, CGI scripts, Java, JavaScript, PERL, PHP, Python, WebObjects, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 400 may implement a mail client 415 stored program component. The mail client 415 may be a mail viewing application, such as Apple™ Mail, Microsoft™ Entourage, Microsoft™ Outlook, Mozilla™ Thunderbird, etc.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present invention. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., it is non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
Advantages of the embodiments of the present disclosure are illustrated herein.
In an embodiment, the present disclosure provides a method and system for efficiently and securely navigating from a user’s current location to a user’s desired location within a premises where GPS is either weak or not available.
In an embodiment, the system of present disclosure provides an optimized path for navigating within the premises.
In an embodiment, the system of present disclosure provides a facility to a user for verifying whether the user is following a correct path or not.

The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the invention(s)” unless expressly specified otherwise.
The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Reference Numerals:

Reference Number Description
100 ENVIRONMENT
101 SYSTEM
102 USER
103 USER DEVICE(S)
202 I/O INTERFACE
204 PROCESSOR
206 MEMORY
208 DATA
210 UNITS
212 TRAVEL DATA
214 IMAGES
216 3D MAP
218 RECEIVING UNIT
220 GENERATING UNIT
222 DETERMINING UNIT
224 OTHER UNITS
400 EXEMPLARY COMPUTER SYSTEM
401 I/O INTERFACE OF THE EXEMPLARY COMPUTER SYSTEM
402 PROCESSOR OF THE EXEMPLARY COMPUTER SYSTEM
403 NETWORK INTERFACE
404 STORAGE INTERFACE
405 MEMORY OF THE EXEMPLARY COMPUTER SYSTEM
406 USER/APPLICATION DATA
407 OPERATING SYSTEM
408 WEB BROWSER
409 COMMUNICATION NETWORK

410 NAVIGATION REQUEST
411 INPUT DEVICES
412 OUTPUT DEVICES
413 RAM
414 ROM
415 MAIL CLIENT
416 MAIL SERVER
417 WEB SERVER

WE CLAIM:
1. A method (300) for guiding a user to navigate from a current location to a desired location
within a premises, the method (300) comprising:
receiving (302, 304), by a navigation system (101):
a plurality of navigation requests from a plurality of user devices, wherein each of the plurality of navigation requests indicates user’s intention to navigate to a user’s desired location from a user’s current location within the premises, and wherein the plurality of user devices is configured to capture plurality of travel data corresponding to the plurality of navigation requests upon detecting user’s movement within the premises; and
one or more images, captured by the plurality of user devices, corresponding to one or more surrounding locations of the users while the users travel within the premises, wherein each of the one or more images comprises at least one object’s image to be used for identifying the one or more surrounding locations;
generating (306), by the navigation system (101), a self-learning three-dimensional (3D) map of the premises based on the plurality of travel data and the one or more images;
receiving (308), by the navigation system (101), a new navigation request in a real-time, from a new user device, indicating the user’s intention to navigate from the user’s current location to the user’s desired location; and
determining (310), by the navigation system (101), a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map, wherein the desired location is one among a starting location of the user and the one or more surrounding locations.
2. The method (300) as claimed in claim 1, wherein each of the plurality of travel data
comprises at least one of:
GPS data associated with the user’s starting location, the one or more surrounding locations, and the user’s current location,
distance data indicative of distance travelled between the user’s starting location and the user’s current location, and the user’s one or more surrounding locations and user’s current location, and
altitude data associated with the user’s starting location, the one or more surrounding locations, and the user’s current location.
3. The method (300) as claimed in claim 1, wherein the self-learning 3D map is generated by:
correlating at least one of the travel data and the one or more images associated with one navigation request with at least one of subsequent travel data and subsequent one or more images associated with subsequent navigation requests of the plurality of navigation requests;
determining a plurality of paths between the user’s desired location and the user’s current location based on the correlation; and
updating the self-learning 3D map with the path, amongst the plurality of paths, having a shortest distance between user’s desired location and the user’s current location.
4. The method (300) as claimed in claim 1, further comprising verifying correctness of the path followed by the user while navigating from the user’s current location to the user’s desired location by comparing at least one image, captured by the user device at the time of navigating, with previously received one or more images.
5. The method (300) as claimed in claim 1, further comprising:
when the user deviates from the path while navigating from the user’s current location to the user’s desired location,
retracing the user to the path by generating a pointer, on the self-learning 3D map, linking user’s movement towards the path.
6. A navigation system (101) for guiding a user to navigate from a current location to a desired
location within a premises, the navigation system (101) comprising:
a receiving unit (218) to receive:
a plurality of navigation requests from a plurality of user devices, wherein each of the plurality of navigation requests indicates user’s intention to navigate to a user’s desired location from a user’s current location within the premises, and wherein the plurality of user devices is configured to capture plurality of travel data corresponding to the plurality of navigation requests upon detecting user’s movement within the premises; and
one or more images, captured by the plurality of user devices, corresponding to one or more surrounding locations of the users while the users travel within the premises, wherein each of the one or more images comprises at least one object’s image to be used for identifying the one or more surrounding locations;
a generating unit (220) to generate a self-learning three-dimensional (3D) map of the premises based on the plurality of travel data and the one or more images;
the receiving unit (218) to receive a new navigation request in a real-time, from a new user device, indicating the user’s intention to navigate from the user’s current location to the user’s desired location; and
a determining unit (222) to determine a path for navigating the user from the user’s current location to the user’s desired location using the self-learning 3D map, wherein the desired location is one among a starting location of the user and the one or more surrounding locations.
7. The navigation system (101) as claimed in claim 6, wherein each of the plurality of travel data comprises at least one of:
GPS data associated with the user’s starting location, the one or more surrounding locations, and the user’s current location,
distance data indicative of distance travelled between the user’s starting location and the user’s current location, and the user’s one or more surrounding locations and user’s current location, and
altitude data associated with the user’s starting location, the one or more surrounding locations, and the user’s current location.

8. The navigation system (101) as claimed in claim 6, wherein the navigation system (101)
generates the self-learning 3D map by:
correlating at least one of the travel data and the one or more images associated with one navigation request with at least one of subsequent travel data and subsequent one or more images associated with subsequent navigation requests of the plurality of navigation requests;
determining a plurality of paths between the user’s desired location and the user’s current location based on the correlation; and
updating the self-learning 3D map with the path, amongst the plurality of paths, having a shortest distance between user’s desired location and the user’s current location.
9. The navigation system (101) as claimed in claim 6, wherein the navigation system (101) verifies correctness of the path followed by the user while navigating back from the user’s current location to the user’s desired location by comparing at least one image, captured by the user device at the time of navigating, with previously received one or more images.
10. The navigation system (101) as claimed in claim 6, wherein when the user deviates from the path while navigating from the user’s current location to the user’s desired location,
the navigation system (101) retraces the user to the path by generating a pointer, on the self-learning 3D map, linking user’s movement towards the path.

Documents

Application Documents

# Name Date
1 202021023582-FORM 1 [05-06-2020(online)].pdf 2020-06-05
2 202021023582-FORM 18 [05-06-2020(online)].pdf 2020-06-05
3 202021023582-REQUEST FOR EXAMINATION (FORM-18) [05-06-2020(online)].pdf 2020-06-05
4 202021023582-STATEMENT OF UNDERTAKING (FORM 3) [05-06-2020(online)].pdf 2020-06-05
5 202021023582-DECLARATION OF INVENTORSHIP (FORM 5) [05-06-2020(online)].pdf 2020-06-05
6 202021023582-POWER OF AUTHORITY [05-06-2020(online)].pdf 2020-06-05
7 202021023582-COMPLETE SPECIFICATION [05-06-2020(online)].pdf 2020-06-05
8 202021023582-DRAWINGS [05-06-2020(online)].pdf 2020-06-05
9 202021023582-Proof of Right [20-07-2020(online)].pdf 2020-07-20
10 Abstract1.jpg 2020-08-21
11 202021023582-FER.pdf 2021-12-22
12 202021023582-FER_SER_REPLY [08-06-2022(online)].pdf 2022-06-08
13 202021023582-CLAIMS [08-06-2022(online)].pdf 2022-06-08
14 202021023582-OTHERS [08-06-2022(online)].pdf 2022-06-08
15 202021023582-US(14)-HearingNotice-(HearingDate-12-12-2024).pdf 2024-11-11
16 202021023582-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [06-12-2024(online)].pdf 2024-12-06
17 202021023582-US(14)-ExtendedHearingNotice-(HearingDate-08-01-2025)-1200.pdf 2024-12-11
18 202021023582-Correspondence to notify the Controller [05-01-2025(online)].pdf 2025-01-05
19 202021023582-FORM-26 [06-01-2025(online)].pdf 2025-01-06
20 202021023582-Written submissions and relevant documents [23-01-2025(online)].pdf 2025-01-23
21 202021023582-US(14)-ExtendedHearingNotice-(HearingDate-07-10-2025)-1100.pdf 2025-09-26
22 202021023582-Correspondence to notify the Controller [06-10-2025(online)].pdf 2025-10-06
23 202021023582-Correspondence to notify the Controller [07-10-2025(online)].pdf 2025-10-07

Search Strategy

1 SearchStrategy23582E_14-12-2021.pdf