
An Unmanned Aerial Vehicle And A Method Thereof

Abstract: Disclosed is an unmanned aerial vehicle (101) comprising an image capturing means (104), a navigation means (105) and a processor (201). The processor (201) receives a surveillance service request comprising geographical coordinates of a user of the user device (103). The navigation means (105) navigates the unmanned aerial vehicle (101) to a user location based on the geographical coordinates. The image capturing means (104) captures feature points of the user. The processor (201) receives real-time sensor data from sensors associated with the user device (103). The image capturing means (104) monitors a path followed by the user and other people in the vicinity of the user. The processor (201) determines at least one anomaly corresponding to the user or other people in the vicinity of the user, wherein the anomaly corresponds to threats to the user. The processor (201) transmits a message, to a third party, indicating the threats to the user. [To be published with Figure 1]


Patent Information

Application #: 201921023696
Filing Date: 14 June 2019
Publication Number: 51/2020
Publication Type: INA
Invention Field: MECHANICAL ENGINEERING
Status:
Email: ip@stratjuris.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-08-20
Renewal Date:

Applicants

Zensar Technologies Limited
Zensar Knowledge Park, Plot # 4, MIDC, Kharadi, Off Nagar Road, Pune-411014, Maharashtra, India

Inventors

1. Hardik Munjal
H.No-17/769,Gali No.3, Dharampura, Bahadrugarh,Haryana
2. Himanshu Pahadia
23/694, D.D.A Flats Madangir, New Delhi - 110062

Specification

Claims:
WE CLAIM:
1. An unmanned aerial vehicle (101) comprising:
an image capturing means (104);
a navigation means (105); and
a memory (205) coupled with a processor (201), wherein the processor (201) is configured to execute programmed instructions stored in the memory (205) for:
receiving a surveillance service request from a user device (103), wherein the surveillance service request comprises geographical coordinates of a user of the user device (103);
operating the navigation means (105) for navigating the unmanned aerial vehicle (101), to a user location, based on the geographical coordinates;
capturing, a plurality of feature points of a user carrying the user device (103), wherein the image capturing means (104) is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points;
receiving real-time sensor data from a plurality of sensors associated with the user device;
capturing real-time surveillance data from the image capturing means (104), wherein the image capturing means (104) is configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data;
determining at least one anomaly corresponding to the user or other people in the vicinity of the user by analyzing the real-time sensor data and the real-time surveillance data, based on the plurality of feature points, wherein the at least one anomaly corresponds to one or more threats to the user; and
transmitting a message, to a third party, indicating the one or more threats to the user of the user device.

2. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the processor (201) is configured to predict and relocate, the path followed by the user based on one or more sensor inputs, when the image capturing means (104) is unable to capture the real-time surveillance data.

3. The unmanned aerial vehicle (101) as claimed in claim 1, wherein at least one anomaly is detected by the trained neural network, wherein the trained neural network is configured to pass a current frame or batch of frames.

4. The unmanned aerial vehicle (101) as claimed in claim 2, wherein the anomaly corresponds to at least one or more of posture and behaviour of the user and other people in the real-time surveillance data, objects carried by the other people, actions of the other people.

5. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the plurality of feature points comprises at least a set of digital facial points of the user, colour and type of the user’s attire, user’s body movement.

6. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the plurality of sensors comprises one or more of accelerometer, gyro-meter, emotion identification system, magnetometer, and GPS device.

7. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the notification to the third party is provided through wireless messages, an audio alarm, a video alarm or combinations thereof.

8. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the unmanned aerial vehicle (101) comprises a 3-axis accelerometer for stabilizing the unmanned aerial vehicle (101) and a 3-axis gyroscope in order to provide angular motion to the unmanned aerial vehicle (101).

9. The unmanned aerial vehicle (101) as claimed in claim 1, wherein the trained neural network is configured to re-identify the user, when the feature points of the user are changed during travelling, wherein the trained neural network is configured to learn the changes in the feature points and make encodings of the feature points, wherein the unmanned aerial vehicle (101) hovers around the user in a circular fashion for analysing the feature points for re-identification of the user.

10. A method (300) for surveilling a user by an unmanned aerial vehicle (101), comprising:
receiving, via a processor (201), a surveillance service request from a user device (103), wherein the surveillance service request comprises geographical coordinates of a user of the user device (103);
operating, via the processor (201), a navigation means (105) for navigating the unmanned aerial vehicle (101), to a user location, based on the geographical coordinates;
capturing, via the processor (201), a plurality of feature points of a user carrying the user device (103), wherein an image capturing means (104) is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points;
receiving, via the processor (201), real-time sensor data from a plurality of sensors associated with the user device (103);
capturing, via the processor (201), real-time surveillance data from the image capturing means, wherein the image capturing means (104) is configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data;
determining, via the processor (201), at least one anomaly corresponding to the user or other people in the vicinity of the user by analysis of the real-time sensor data and the real-time surveillance data, based on the plurality of feature points, wherein the at least one anomaly corresponds to one or more threats to the user; and
transmitting, via the processor (201), a message to a third party, indicating the one or more threats to the user of the user device (103).

Dated this 14th day of June 2019
Description:
FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION

(See Section 10 and Rule 13)

Title of invention:
AN UNMANNED AERIAL VEHICLE AND A METHOD THEREOF

APPLICANT

Zensar Technologies Limited,
an Indian Entity, having address as:
Zensar Knowledge Park, Plot # 4, MIDC, Kharadi, Off Nagar Road,
Pune-411014, Maharashtra, India

The following specification describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present subject matter described herein, in general, relates to a monitoring system. More particularly, the present subject matter is related to an unmanned aerial vehicle and a method thereof.

BACKGROUND
The cases of robbery and mishaps involving people, usually females, travelling at odd hours are increasing all over the world. To curb such situations, it is not always possible to provide personal security to each individual for protection. The mishaps that may happen to a person walking may include health emergencies such as heart attacks, a decrease in pulse rate or fainting, as well as situations of threat such as a theft or an attack using blunt or sharp objects like knives or guns. Such situations occurring to any person should be avoided.

Thus, there is a long-standing need for a monitoring system that constantly monitors a person while walking and alerts the person, as well as the person's contacts, about threats that occur during his/her travel.

SUMMARY

This summary is provided to introduce the concepts related to an unmanned aerial vehicle and a method thereof, and the concepts are further described in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.

In one implementation, the present subject matter describes an unmanned aerial vehicle for surveilling a user. The unmanned aerial vehicle may include an image capturing means, a navigation means, and a memory coupled with a processor. The processor may be configured to execute a plurality of programmed instructions stored in the memory. The processor may execute one or more programmed instructions for receiving a surveillance service request from a user device. The surveillance service request may comprise geographical coordinates of a user of the user device. Further, the processor may execute one or more programmed instructions for operating the navigation means for navigating the unmanned aerial vehicle, to a user location, based on the geographical coordinates. The processor may further execute one or more programmed instructions for capturing a plurality of feature points of a user carrying the user device, wherein the image capturing means is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points. Further, the processor may execute one or more programmed instructions for receiving real-time sensor data from a plurality of sensors associated with the user device. The processor may further execute one or more programmed instructions for capturing real-time surveillance data from the image capturing means. The image capturing means may be configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data. The processor may further execute one or more programmed instructions for determining at least one anomaly corresponding to the user or other people in the vicinity of the user by analyzing the real-time sensor data and the real-time surveillance data, based on the plurality of feature points. The at least one anomaly corresponds to one or more threats to the user. Furthermore, the processor may execute one or more programmed instructions for transmitting a message, to a third party, indicating the one or more threats to the user of the user device.

In another implementation, the present subject matter describes a method for surveilling a user by an unmanned aerial vehicle. The method may include receiving a surveillance service request from a user device. The surveillance service request comprises geographical coordinates of a user of the user device. The method may further include operating the navigation means for navigating the unmanned aerial vehicle, to a user location, based on the geographical coordinates. The method may further include capturing a plurality of feature points of a user carrying the user device. The image capturing means is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points. The method may further include receiving real-time sensor data from a plurality of sensors associated with the user device. The method may further include capturing real-time surveillance data from the image capturing means. The image capturing means may be configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data. Further, the method may include determining at least one anomaly corresponding to the user or other people in the vicinity of the user by analysing the real-time sensor data and the real-time surveillance data, based on the plurality of feature points. The at least one anomaly may correspond to one or more threats to the user. Furthermore, the method may include transmitting a message, to a third party, indicating the one or more threats to the user of the user device.

BRIEF DESCRIPTION OF DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.

Figure 1 illustrates a network implementation (100) of an unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter.

Figure 2 illustrates components of the unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter.

Figure 3 illustrates a method (300) for surveilling a user by the unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter.

Figure 4 illustrates a method (400) depicting a scenario wherein the user features associated with the user being tracked are updated, in accordance with an embodiment of the present subject matter.

Figure 5 illustrates a machine learning model (500), in accordance with an embodiment of the present subject matter.

DETAILED DESCRIPTION
Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” or “in an embodiment” in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.

Figure 1 illustrates a network implementation 100 of a system enabled over an unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter. In one embodiment, the unmanned aerial vehicle (101) may be a surveillance drone (101). Hereinafter, the unmanned aerial vehicle (101) is interchangeably referred to as the system (101) or the surveillance drone (101). In an embodiment, the surveillance drone (101) may be connected to a user device (103) over a network (102). It may be understood that the surveillance drone (101) may be accessed by multiple users through one or more user devices (103-1), (103-2), (103-3)…(103-n), collectively referred to hereinafter as the user device (103), or the user (103), or through applications residing on the user device (103). The user (103) may be any person, machine, software, automated computer program, a robot or a combination thereof. In one embodiment, the user device (103-1) may be used by a user managing the unmanned aerial vehicle. In one embodiment, the user devices (103-2), (103-3) may be used by the contact persons of the user.

In an embodiment, though the present subject matter is explained considering that the system (101) is implemented as a surveillance drone, it may be understood that the system (101) may also be implemented in a variety of user devices, such as, but not limited to, a server, a portable computer, a personal digital assistant, a handheld device, a mobile phone, a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, and the like. In one embodiment, the system (101) may be implemented in a cloud-computing environment. In an embodiment, the network (102) may be a wireless network such as Bluetooth, Wi-Fi, LTE and the like, a wired network, or a combination thereof. The network (102) can be accessed by the user device (103) using wired or wireless network connectivity means, including updated communications technology.

In one embodiment, the network (102) can be implemented as one of the different types of networks, such as a cellular communication network, a local area network (LAN), a wide area network (WAN), the internet, and the like. The network (102) may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network (102) may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.

In one embodiment, the surveillance drone (101) may facilitate surveilling the user (103) when the user is walking towards a destination. In one embodiment, the surveillance drone (101) may monitor the user and the vicinity of the user continuously, while the user is walking, to identify an emergency situation or a threat to the user. The emergency situation may include health emergencies such as heart attacks, a decrease in pulse rate, or loss of consciousness of the user, whereas the threat to the user may be in the form of theft, attack or shadowing by another person. In one embodiment, the user device (103-1) may comprise a plurality of sensors comprising inertial measurement units. The sensors may also comprise an emotion identification system. In one embodiment, the surveillance drone (101) may comprise an image capturing means (104) comprising an IR camera. The IR camera is used for obtaining images during the night. In one embodiment, an infrared and RGB fusion camera may be used in order to obtain an RGB output. The resolution of the infrared and RGB fusion camera may be 1080 pixels, and the range of the infrared and RGB fusion camera may be 4-5 meters. In one embodiment, the infrared and RGB fusion camera may also provide depth information. Though 'n' number of cameras can be installed on the surveillance drone (101), the surveillance drone (101), in a preferred embodiment, may include three cameras.

In one embodiment, the surveillance drone (101) may comprise a 360-degree rotating LiDAR used to avoid obstacles and ensure safe passage while following the user. LiDAR stands for Light Detection and Ranging, a technology based on laser beams. LiDAR emits laser pulses and measures the time it takes for the light to return. LiDAR is a so-called active sensor, as it emits its own energy source rather than detecting energy emitted from objects on the ground. This concept is used to map the environment and create a 3-D model of the surroundings so that the surveillance drone (101) can easily navigate around any obstacles.
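
As an illustration of the time-of-flight principle described above, the following minimal Python sketch converts a measured round-trip pulse time into a one-way range; it is an assumption-level example and not the drone's actual firmware.

# Illustrative time-of-flight calculation for a single LiDAR return.
SPEED_OF_LIGHT_M_S = 299_792_458.0  # metres per second

def range_from_return_time(round_trip_time_s: float) -> float:
    """One-way distance to the obstacle: the pulse travels out and back,
    so the range is half of (speed of light * elapsed time)."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# Example: a return measured after 33.3 nanoseconds is roughly 5 metres away.
print(round(range_from_return_time(33.3e-9), 2))  # ~4.99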

In one embodiment, the surveillance drone (101) may comprise a navigation means (105). The navigation means (105) may comprise GPS device and electric motors for driving the surveillance drone (101). In one embodiment, the surveillance drone (101) may comprise a 3-axis accelerometer which may stabilize the drone. The surveillance drone (101) may also comprise a 3-axis gyroscope in order to provide angular motion to the surveillance drone (101). In one embodiment, the surveillance drone (101) may be wirelessly connected to a server (not shown in the figure) in order to store a real time captured data.
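
The specification does not disclose the drone's actual control loop; purely as a hedged sketch of how 3-axis accelerometer and gyroscope readings are commonly fused for attitude stabilisation, a minimal complementary filter in Python might look as follows (the axis conventions and the alpha value are assumptions).

import math

def complementary_filter(prev_pitch_deg: float, gyro_rate_dps: float,
                         accel_x: float, accel_y: float, accel_z: float,
                         dt_s: float, alpha: float = 0.98) -> float:
    """Fuse a gyroscope pitch rate (deg/s) with an accelerometer-derived tilt
    estimate. alpha weights the integrated gyro reading; (1 - alpha) slowly
    corrects gyro drift using gravity as a reference."""
    accel_pitch_deg = math.degrees(math.atan2(-accel_x,
                                              math.sqrt(accel_y ** 2 + accel_z ** 2)))
    gyro_pitch_deg = prev_pitch_deg + gyro_rate_dps * dt_s
    return alpha * gyro_pitch_deg + (1.0 - alpha) * accel_pitch_deg

# Example: near-level flight (accelerometer reads ~1 g on z) with a small gyro rate.
print(round(complementary_filter(0.0, 0.5, 0.0, 0.0, 1.0, dt_s=0.01), 4))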
In one embodiment, the user device (103) may be connected to the surveillance drone (101) via wireless communication such as Wi-Fi. When the user (103-1) summons the surveillance drone (101) via the user device, the surveillance drone (101) may arrive at the user's location and further continuously monitor the user and the other people in the vicinity of the user in real-time. If at least one anomaly in the behaviour of the user or other people is determined, then a message may be transmitted to a third party, indicating a threat to the user.

Figure 2 illustrates components of the unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter. Referring to Figure 2, the system (101) comprises at least one processor (201), an input/output (I/O) interface (203), a memory (205), modules (207) and data (219). In one embodiment, the at least one processor (201) is configured to fetch and execute computer-readable instructions stored in the memory (205). In one embodiment, the processor (201) may be an Edge TPU microcontroller, which is an ASIC that provides high-performance machine learning inference for low-power devices.

In one embodiment, the I/O interface (203) implemented as a mobile application or a web-based application may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface (203) may allow the system (101) to interact with the user devices (103). Further, the I/O interface (203) may enable the user device (103) to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface (203) can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface (203) may include one or more ports for connecting to another server. In an exemplary embodiment, the I/O interface (203) is an interaction platform which may provide a connection between users and system (101).

In an implementation, the memory (205) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and memory cards. The memory (205) may include modules (207) and data (219).

In one embodiment, the modules (207) include routines, programs, objects, components, data structures, etc., which perform particular tasks, functions or implement particular abstract data types. In one implementation, the modules (207) may include a data receiving module (209), a navigation module (211), a capturing module (213), a determination module (215), and a transmitting module (217). The data (219) may comprise a data store (221), and other data (223). The data, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules.

The aforementioned computing devices may support communication over one or more types of networks in accordance with the described embodiments. For example, some computing devices and networks may support communications over a Wide Area Network (WAN), the Internet, a telephone network (e.g., analog, digital, POTS, PSTN, ISDN, xDSL), a mobile telephone network (e.g., CDMA, GSM, NDAC, TDMA, E-TDMA, NAMPS, WCDMA, CDMA-2000, UMTS, 3G, 4G), a radio network, a television network, a cable network, an optical network (e.g., PON), a satellite network (e.g., VSAT), a packet-switched network, a circuit-switched network, a public network, a private network, and/or other wired or wireless communications network configured to carry data. Computing devices and networks also may support wireless wide area network (WWAN) communications services including Internet access such as EV-DO, EV-DV, CDMA/1×RTT, GSM/GPRS, EDGE, HSDPA, HSUPA, and others.

The aforementioned computing devices and networks may support wireless local area network (WLAN) and/or wireless metropolitan area network (WMAN) data communications functionality in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards, protocols, and variants such as IEEE 802.11 (“WiFi”), IEEE 802.16 (“WiMAX”), IEEE 802.20x (“Mobile-Fi”), and others. Computing devices and networks also may support short range communication such as a wireless personal area network (WPAN) communication, Bluetooth® data communication, infrared (IR) communication, near-field communication, electromagnetic induction (EMI) communication, passive or active RFID communication, micro-impulse radar (MIR), ultra-wide band (UWB) communication, automatic identification and data capture (AIDC) communication, and others.

In one embodiment, before starting the travel, the user may use the user device (103) to generate a surveillance service request and send it to the surveillance drone (101) via wireless communication such as Wi-Fi. The surveillance service request may be generated using a mobile application installed on the user device (103). The surveillance service request is a message/instruction sent from the mobile application to the surveillance drone (101) for initiating the user surveillance. Further, the data receiving module (209) may receive the surveillance service request from the user device (103). The surveillance service request may comprise geographical coordinates of a user of the user device (103). The data receiving module (209) may be configured to receive real-time sensor data from a plurality of sensors associated with the user device (103).
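
The specification does not define a message format for the surveillance service request; the following Python sketch shows one hypothetical payload carrying the user's geographical coordinates, with all field names assumed for illustration.

import json
import time

def build_surveillance_request(user_id: str, latitude: float, longitude: float) -> str:
    """Hypothetical payload sent from the mobile application to the drone over
    Wi-Fi. The specification only requires that the request carry the user's
    geographical coordinates; every field name here is illustrative."""
    request = {
        "type": "surveillance_service_request",
        "user_id": user_id,
        "coordinates": {"lat": latitude, "lon": longitude},
        "timestamp": int(time.time()),
    }
    return json.dumps(request)

# Example: summon the drone to the user's current GPS fix.
print(build_surveillance_request("user-001", 18.5511, 73.9470))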

In one embodiment, the processor (201) may be configured to operate the navigation means (105) for navigating the surveillance drone (101), to a user location, based on the geographical coordinates. The navigation means (105) may correspond to a GPS module.

The capturing module (213) may capture a plurality of feature points of a user carrying the user device (103). The image capturing means (104) is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points. In one embodiment, the plurality of feature points comprises a set of digital facial points of the user, the colour and type of the user's attire, and the user's body movement. In one embodiment, the capturing module (213) may also capture real-time surveillance data from the image capturing means (104). The image capturing means (104) is configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data.
In one embodiment, a neural network may be trained on a person re-identification dataset. The person re-identification dataset is a collection of pedestrian videos from overhead cameras, where each pedestrian is tagged with a single ID across a series of frames. When the neural network is trained on the person re-identification dataset, the neural network tries to learn all the features necessary to track a person in consecutive frames, as well as the change in those features. Once trained, the neural network remembers the features and is able to make encodings of those features.

It must be noted that neural networks have multiple hidden layers, and each layer has multiple neurons which try to learn a feature of the person. In a preferred embodiment, assuming the t-shirt pattern of the user is the feature, when a frame is sent to the neural network, one layer identifies the t-shirt, the next layer identifies whether it has a collar or not, a further layer maps the pattern on the t-shirt, and the last layer identifies whether it is the same t-shirt or not. This process is done for the entire body of the user, which may include features like the person's hair, t-shirt, hands, jeans, etc.
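
The layer-by-layer feature learning described above ultimately yields a numeric encoding per person that can be compared across frames. The following minimal Python sketch shows how such encodings might be matched with cosine similarity to keep track of the user in consecutive frames; the encode function stands in for the trained re-identification network and is an assumption, not a detail from the specification.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two feature encodings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def find_user_in_frame(person_crops, stored_user_encodings, encode, min_similarity=0.7):
    """Return the index of the detected person whose encoding best matches the
    stored user encodings, or None if nobody is similar enough. `encode` is a
    placeholder for the trained network that turns an image crop into a vector."""
    best_index, best_score = None, min_similarity
    for i, crop in enumerate(person_crops):
        score = max(cosine_similarity(encode(crop), e) for e in stored_user_encodings)
        if score >= best_score:
            best_index, best_score = i, score
    return best_index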

Further, the determination module (215) may determine at least one anomaly corresponding to the user or other people in the vicinity of the user by analysing the real-time sensor data and the real-time surveillance data, based on the plurality of feature points. The at least one anomaly corresponds to one or more threats to the user. In one embodiment, one or more threats are determined based on predefined weightage allotted to each anomaly. In one embodiment, determination of anomaly may be performed by a supervised learning technique.

In one embodiment, the anomaly corresponds to the posture and behaviour of the user and other people in the real-time surveillance data, objects carried by the other people, and actions of the other people. The objects may include blunt, sharp, heavy or targeting objects, such as a knife, gun, bat, hockey stick, etc.

In one embodiment, the neural networks are trained with all the features, and those features are then given weightage for a specific output while learning. One or more weights are adjusted so as to minimise the loss function (i.e., the difference between the actual data and the prediction). For a proactive approach, the neural network is trained for harmful objects, behavioural analysis, and emotion analysis. These all contribute to one output, which is whether a person is a threat or not. For the reactive approach, the neural network is trained on IMU data. The IMU (Inertial Measurement Unit) comprises an accelerometer, a gyro-meter, and a magnetometer. In one embodiment, any one or all of the mentioned IMU sensors may be used.
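
As a concrete reading of the predefined weightage, here is a minimal Python sketch that combines the proactive model outputs (objects, behaviour, emotion) and the reactive IMU output into a single threat score; the weights and scores used are illustrative assumptions.

def threat_score(object_score: float, behaviour_score: float,
                 emotion_score: float, imu_score: float,
                 weights: dict[str, float]) -> float:
    """Weighted combination of the individual model outputs (each in [0, 1]).
    The specification only states that each anomaly source carries a
    predefined weightage; the exact weights below are illustrative."""
    return (weights["objects"] * object_score
            + weights["behaviour"] * behaviour_score
            + weights["emotion"] * emotion_score
            + weights["imu"] * imu_score)

# Example weighting when the user is visible and the proactive models dominate.
weights = {"objects": 0.3, "behaviour": 0.3, "emotion": 0.2, "imu": 0.2}
print(threat_score(0.8, 0.7, 0.1, 0.0, weights))  # approximately 0.47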

The transmitting module (217) may transmit a message, to a third party, indicating the one or more threats to the user of the user device. In one embodiment, the notification to the third party is given through various alerts comprising wireless messages and alarms. The third party may comprise hospitals, police stations, and saved contacts in the user device (103).

In one embodiment, when the user sends a surveillance service request to the surveillance drone (101), the surveillance drone (101) identifies the user using facial recognition, wherein the user is registered in the system (101). To make a plurality of encodings of the user, the surveillance drone (101) hovers around the user in a circular fashion, analysing for the features said surveillance drone (101) had learnt during the training phase. Once these encodings are stored in the memory (205), in one or more consecutive frames the neural network uses these encodings to track the user during navigation of the user. It must be noted that the encodings may change over time, as the surrounding ambient lighting changes the appearance of the user.

In one embodiment, three cameras may be mounted on said surveillance drone (101). One of the three cameras always focuses on the ROI (region of interest) in the direction of the user (103). The other two cameras are placed such that they cover most of the area around the user and the people in said area.

In one embodiment, in behaviour analysis, the neural network is trained with videos containing fighting scenes from hockey matches, thefts and robberies, and with videos not containing such violent incidents. The surveillance drone (101) takes features from the images, such as the postures and objects, if any, of a person in the user's vicinity who may be a threat to the user. A dataset is labelled according to whether the video contains an anomaly or not, and the surveillance drone (101) picks up the features of the user from the video captured by the camera while surveilling the user. Thus, once the neural network is trained, the current frame or batch of frames is passed, and the neural network detects whether any anomaly or violent behaviour is being exhibited by any person in the vicinity of the user.
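
The "current frame or batch of frames" step could, for example, be implemented with a sliding window such as the Python sketch below; the model object and its predict() call are placeholders for the trained network, not details taken from the specification.

from collections import deque
import numpy as np

class FrameBatchAnomalyDetector:
    """Keeps a sliding window of recent frames and passes the batch to a trained
    classifier, mirroring the 'current frame or batch of frames' step above.
    `model` is a placeholder for the trained neural network; its predict()
    signature is assumed rather than taken from the specification."""

    def __init__(self, model, batch_size: int = 16):
        self.model = model
        self.frames = deque(maxlen=batch_size)

    def update(self, frame: np.ndarray) -> bool:
        """Add the newest frame; report an anomaly once a full batch is available."""
        self.frames.append(frame)
        if len(self.frames) < self.frames.maxlen:
            return False  # not enough temporal context yet
        batch = np.stack(self.frames)  # shape: (batch_size, height, width, channels)
        anomaly_probability = float(self.model.predict(batch))
        return anomaly_probability > 0.5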

Figure 3 illustrates a method (300) for surveilling a user by the unmanned aerial vehicle (101), in accordance with an embodiment of the present subject matter. The order in which the method (300) is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method (300) or alternate methods. Furthermore, the method (300) can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method (300) may be considered to be implemented in the above described system (100).

At step (301), the data receiving module (209) may be configured for receiving a surveillance service request from a user device, wherein the surveillance service request comprises geographical coordinates of a user of the user device.

At step (303), the processor (201) may be configured for operating the navigation means (105) for navigating the unmanned aerial vehicle (101), to a user location, based on the geographical coordinates.

At step (305), the capturing module (213) may be configured for capturing a plurality of feature points of a user carrying the user device. The image capturing means (104) is configured to perform a three-dimensional scan of the user, based on a trained neural network, to capture the plurality of feature points.

At step (307), the data receiving module (209) may be configured for receiving real-time sensor data from a plurality of sensors associated with the user device (103).

At step (309), the capturing module (213) may be configured for capturing real-time surveillance data from the image capturing means (104). The image capturing means (104) is configured to monitor a path followed by the user and other people in the vicinity of the user to capture the real-time surveillance data.

At step (311), the determination module (215) may be configured for determining at least one anomaly corresponding to the user or other people in the vicinity of the user by analysing the real-time sensor data and the real-time surveillance data, based on the plurality of feature points. The at least one anomaly corresponds to one or more threats to the user.

At step (313), the transmitting module (217) may be configured for transmitting a message, to a third party, indicating the one or more threats to the user of the user device (103).

In one embodiment, the processor (201) may be configured to predict and relocate the path followed by the user, based on one or more sensor inputs, when the image capturing means (104) is unable to capture the real-time surveillance data. The encodings are the main parameter for tracking the user. The face data of the user is essential to identify the user for the first time when the surveillance drone (101) arrives at the user's location.

If the user goes out of sight of the surveillance drone (101), the surveillance drone may predict the user's trajectory behind walls, pillars, etc. and try to relocate the user. If the surveillance drone (101) is unable to identify the user's next position, such as when the user goes inside a restaurant or a building, the surveillance drone (101) may operate in a standby mode and notify the user on his user device, such as a mobile phone, that the surveillance drone (101) is unable to locate the user. By using the user's GPS data and IMU data, the surveillance drone (101) may try to relocate the user.

In terms of vector mapping, the processor (201) may compute the probability of the direction in which the user might have travelled, using the previously followed path and the image data. If the surveillance drone (101) enters the stand-by state because the user is not visible, a prompt is given to the user as soon as the user comes out of the building. By using the mobile GPS data, the user is located up to centimetre-level accuracy and facial recognition is performed; if facial recognition cannot be performed by the surveillance drone (101), the last observed encodings are matched, and if said encodings are still unable to give any result, the user is prompted to confirm the user location so that the surveillance drone (101) can resume normal operation.
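
One simple stand-in for the "probability of direction" computation is a constant-velocity extrapolation of the recently observed path, sketched below in Python; the coordinates and the extrapolation model are assumptions for illustration only.

import numpy as np

def predict_next_position(path_xy, steps_ahead: int = 1):
    """Extrapolate the user's probable position from the recently observed path
    under a constant-velocity assumption; a minimal stand-in for the
    'probability of direction' computation described above."""
    if len(path_xy) < 2:
        return path_xy[-1] if path_xy else None
    p = np.asarray(path_xy, dtype=float)
    velocity = p[-1] - p[-2]  # displacement per observation step
    return tuple(p[-1] + steps_ahead * velocity)

# Example: user walking roughly north-east; look two steps ahead.
print(predict_next_position([(0.0, 0.0), (1.0, 1.1), (2.1, 2.0)], steps_ahead=2))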

Referring now to Figure 4, a method (400) depicting a scenario wherein the user features associated with the user being tracked are updated is illustrated, in accordance with the present subject matter. In one embodiment, the data of the user's face is essential for identifying the user for the first time when the surveillance drone (101) arrives at the user's location. In a situation wherein, while travelling, the user removes or changes his jacket, which was previously identified by the surveillance drone (101), the processor (201) may identify such changes in the user features.

In one embodiment, the processor (201) may be configured to pass a visual feed (400) to a user identification block (402) and an activity recognition system (406). In one embodiment, the behavioural analysis may only result in identifying that the user feels threatened, while the surveillance drone (101) is unable to detect any threat. The activity recognition system (406) identifies that the user is trying to change clothes, and said activity recognition system (406) may notify the user identification system (402). The user identification system (402) may change the encodings accordingly, wherein the user identification system (402) is aware of the previous encodings, and when the user has changed clothes, the user identification system (402) stores the new encodings while the user walks. The newly extracted features are stored as encodings. A photometric and a geometric transformation may be performed on the newly identified features. Amplification of the features may be performed for better recognition. Further, a similarity estimation may be performed with the stored encodings against said amplified features.

In one embodiment, if the similarity estimation obtained for re-identifying the user is below 70%, the surveillance drone (101) may notify the user on his/her device to send a rescan request, as the surveillance drone (101) is unable to track the user. Thus, the surveillance drone (101) may hover around the user in a circular fashion to capture and store all the features available from different angles. These features are identified by the neural network and stored as encodings for re-identification in the subsequent frames.
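
To make the flow above concrete, here is a small Python sketch of the re-identification decision: freshly extracted features are compared against the stored encodings and, below the 70% similarity figure mentioned above, a rescan request is raised. The similarity callable stands in for the comparison performed by the neural network, and the photometric/geometric augmentation is assumed to happen upstream.

def reidentify_user(new_encoding, stored_encodings, similarity, threshold=0.70):
    """Return 'tracking' if the best similarity against the stored encodings meets
    the 70% figure mentioned above, otherwise 'request_rescan' so that the drone
    can notify the user and hover around them to capture fresh features."""
    best = max((similarity(new_encoding, e) for e in stored_encodings), default=0.0)
    if best >= threshold:
        stored_encodings.append(new_encoding)  # keep encodings current as the user walks
        return "tracking"
    return "request_rescan"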

Referring to Figure 5, a machine learning model (500) is illustrated in accordance with the present subject matter. In one embodiment, the input to the machine learning model may be a current frame (501) in the form of an image from the cameras in any given scenario. In one embodiment, one camera may be dedicated to only tracking a particular user and capturing the emotions of that user. An emotion analysis model (505) may take as inputs the features (502), which are landmarks of the face such as the nose, mouth, cheeks, eyebrows, etc. The features (502) may be used to find whether the user is feeling fear, anxiety, happiness, sadness, disgust, or the like. The emotion analysis model (505) may identify that the user has encountered a personal threat.

In one embodiment, for behaviour analysis (506), the features (503) may include a body posture and the objects a person in the user's vicinity is carrying, wherein the objects may be a gun, knife, etc. A recurrent neural network (RNN) may be used for such analysis, wherein a few previous frames may be taken into account to classify the behaviour of the person in the vicinity of the user.

In one embodiment, for IMU data analysis (507), the neural network may be trained with accelerometer, gyro-meter and magnetometer data (504) to predict whether the user is facing abnormal conditions such as falling or being pushed, and the like.
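
In place of the trained IMU model, which the specification does not detail, a very small heuristic sketch in Python of how accelerometer data can flag a fall or push is given below; the thresholds are assumptions.

import numpy as np

def detect_fall(accel_samples_g: np.ndarray,
                free_fall_g: float = 0.4, impact_g: float = 2.5) -> bool:
    """Heuristic stand-in for the trained IMU model: a fall typically shows a
    near-free-fall dip in total acceleration followed by an impact spike.
    accel_samples_g is an (N, 3) array of accelerometer readings in units of g;
    the two thresholds are illustrative."""
    magnitude = np.linalg.norm(accel_samples_g, axis=1)
    dips = np.where(magnitude < free_fall_g)[0]
    spikes = np.where(magnitude > impact_g)[0]
    return bool(len(dips) and len(spikes) and spikes.max() > dips.min())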

In one embodiment, to avoid false triggering of alarms, at least three parameters are required to predict whether a situation is dangerous for the user, wherein these parameters have a certain weightage in predicting whether the situation may be a threat or not.

In one embodiment, weight computation may depend on various scenarios. Preferred scenarios are mentioned below; a combined sketch of this weight selection follows the threshold note.

Case 1: When the user is visible and encounters a non-personal threat, and the behavioural analysis reports the situation as a threat to the user while the emotion analysis model (505) does not respond with any negative emotions of the user, more weightage is assigned to the prediction of the behavioural analysis (506). In such a case, a prompt may initially be provided to the user on the user's mobile device, and if the user feels that the determined person in the vicinity is a threat, a notification may be transmitted to the third party.

Case 2: When the user is visible and encounters a personal threat, the person causing the threat might not be carrying any objects or showing any postures that constitute abnormal behaviour, but since the user knows the person, the user's emotions will depict whether the person is a threat. Hence, the emotion analysis model (505) may be assigned more weightage. In this case as well, a prompt may initially be provided on the user's mobile device, and if the user feels that the determined person is a threat, a notification may be transmitted to the third party.

Case 3: Although there is a dedicated camera to track the user at all times, if the user goes out of sight of the surveillance drone (101) and, while the surveillance drone (101) is in the process of finding the user, the IMU data analysis (507) indicates that a mishap is happening to the user based on the user's phone IMU data, the drone raises an alarm while locating the user in parallel. In this case, the highest weightage may be assigned to the IMU data analysis (507).

In one embodiment, the threshold depends on the accuracy and performance of each model. The threshold is set so as to have the most accurate system, wherein the threshold may be set a little lower than usual.
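
Tying Cases 1 to 3 and the threshold together, the following Python sketch selects illustrative weights per scenario and applies a threshold to the combined score; all numeric values are assumptions, not figures from the specification.

def choose_weights(user_visible: bool, personal_threat: bool) -> dict[str, float]:
    """Illustrative weight selection following Cases 1-3: behavioural analysis
    dominates for visible non-personal threats, emotion analysis for personal
    threats, and IMU analysis when the user is out of sight."""
    if not user_visible:
        return {"behaviour": 0.1, "emotion": 0.1, "imu": 0.8}  # Case 3
    if personal_threat:
        return {"behaviour": 0.2, "emotion": 0.6, "imu": 0.2}  # Case 2
    return {"behaviour": 0.6, "emotion": 0.2, "imu": 0.2}      # Case 1

def is_threat(scores: dict[str, float], weights: dict[str, float],
              threshold: float = 0.5) -> bool:
    """Combine the parameters and compare against a threshold that, per the
    note above, may be set a little lower than usual to favour recall."""
    return sum(weights[k] * scores[k] for k in weights) >= threshold

# Example: user visible, stranger behaving aggressively (Case 1).
print(is_threat({"behaviour": 0.9, "emotion": 0.2, "imu": 0.0},
                choose_weights(user_visible=True, personal_threat=False)))  # True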

The embodiments, examples and alternatives of the preceding paragraphs or the description and drawings, including any of their various aspects or respective individual features, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.

Although implementations of the unmanned aerial vehicle (101) and the method (300) thereof have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of the unmanned aerial vehicle (101) and the method (300) thereof.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 201921023696-FORM 1 [14-06-2019(online)].pdf 2019-06-14
2 201921023696-STATEMENT OF UNDERTAKING (FORM 3) [14-06-2019(online)].pdf 2019-06-14
3 201921023696-COMPLETE SPECIFICATION [14-06-2019(online)].pdf 2019-06-14
4 201921023696-DRAWINGS [14-06-2019(online)].pdf 2019-06-14
5 201921023696-FIGURE OF ABSTRACT [14-06-2019(online)].pdf 2019-06-14
6 201921023696-FORM 18 [01-07-2019(online)].pdf 2019-07-01
7 201921023696- ORIGINAL UR 6(1A) FORM 1 & FORM 26-080719.pdf 2019-09-25
8 Abstract1.jpg 2019-09-25
9 201921023696-FER_SER_REPLY [30-09-2021(online)].pdf 2021-09-30
10 201921023696-ABSTRACT [30-09-2021(online)].pdf 2021-09-30
11 201921023696-CLAIMS [30-09-2021(online)].pdf 2021-09-30
12 201921023696-OTHERS [30-09-2021(online)].pdf 2021-09-30
13 201921023696-FER.pdf 2021-10-19
14 201921023696-US(14)-HearingNotice-(HearingDate-24-07-2024).pdf 2024-07-04
15 201921023696-Correspondence to notify the Controller [18-07-2024(online)].pdf 2024-07-18
16 201921023696-Written submissions and relevant documents [02-08-2024(online)].pdf 2024-08-02
17 201921023696-MARKED COPIES OF AMENDEMENTS [02-08-2024(online)].pdf 2024-08-02
18 201921023696-AMMENDED DOCUMENTS [02-08-2024(online)].pdf 2024-08-02
19 201921023696-FORM 13 [02-08-2024(online)].pdf 2024-08-02
20 201921023696-PatentCertificate20-08-2024.pdf 2024-08-20
21 201921023696-IntimationOfGrant20-08-2024.pdf 2024-08-20
22 201921023696-RENEWAL OF PATENTS [13-06-2025(online)].pdf 2025-06-13

Search Strategy

1 2021-04-2011-18-25E_21-04-2021.pdf

ERegister / Renewals

3rd: 21 Oct 2024 (From 14/06/2021 - To 14/06/2022)
4th: 21 Oct 2024 (From 14/06/2022 - To 14/06/2023)
5th: 21 Oct 2024 (From 14/06/2023 - To 14/06/2024)
6th: 21 Oct 2024 (From 14/06/2024 - To 14/06/2025)
7th: 13 Jun 2025 (From 14/06/2025 - To 14/06/2026)