Abstract: A system (101) and a method (400) for alerting a user to prevent free fall of an object that is placed on an elevated surface is disclosed. The system (101) may receive a set of surveillance images from a surveillance system that is installed at a geographical location. The system (101) may process a target surveillance image, from the set of received surveillance images, by using a computer vision processing technique for detecting a candidate object within the surveillance image. The system (101) may segment the target surveillance image for extracting the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed. Further, the system (101) may determine a position of the candidate object with respect to the elevated surface based on the centre of mass of the candidate object and transmit a primary alert to a user device (103). [To be published with figure 1]
Claims:
We Claim:
1. A system (101) for alerting a user to prevent free fall of an object, placed on an elevated surface, the system (101) comprising:
a processor (201); and
a memory (205) coupled to the processor (201), wherein the processor (201) is configured to execute programmed instructions (207) stored in the memory (205) for:
receiving a set of surveillance images from a surveillance system at a geographical location, wherein the surveillance system is configured to monitor a set of objects at the geographical location;
processing a target surveillance image, from the set of surveillance images, using a vision processing technique for detecting a candidate object, from the set of objects, in the target surveillance image;
segmenting the target surveillance image to extract the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed;
determining a position, of the candidate object, with respect to the elevated surface, based on centre of mass of the candidate object and overlap between the pixels corresponding to the candidate object and the elevated surface; and
transmitting a primary alert to a user device (103) when the position, of the candidate object, is in a non-equilibrium state.
2. The system (101) as claimed in claim 1, wherein the centre of mass of the candidate object is calculated by using a plurality of image moments of the surveillance images captured by the surveillance system, wherein the centre of mass calculation comprises steps of:
converting the target surveillance image to a grayscale image;
binarization of the grayscale image to convert the pixels into black and white; and
calculating the image moments of the black and white image by using weighted average of image pixel intensities to determine the centre of mass.
3. The system (101) as claimed in claim 1, wherein the segmentation of the target surveillance image is carried out by using a Mask Regional-Convolutional Neural Network (Mask R-CNN) technique.
4. The system (101) as claimed in claim 1, wherein the system is further configured for,
receiving proximity data from the candidate object, wherein the proximity data corresponds to the data associated with a living or non-living object that is approaching the candidate object, wherein the candidate object is equipped with a proximity sensor for collecting the proximity data; and
transmitting a secondary alert to the user device (103), based on the proximity data received from the candidate object.
5. The system (101) as claimed in claim 1, wherein the non-equilibrium state is determined based on an angle between a perpendicular from the centre of mass of the candidate object to the elevated surface and a line joining the centre of mass of the candidate object and the centre point of the overlapping pixels between the candidate object and the elevated surface.
6. A method (400) for alerting a user to prevent free fall of an object, placed on an elevated surface, the method comprising:
receiving, by a processor, a set of surveillance images, wherein the surveillance images are received from a surveillance system at a geographic location, wherein the surveillance system is configured to monitor a set of objects at the geographic location;
processing, by the processor, a target surveillance image from the set of surveillance images using a vision processing technique for detecting a candidate object, from the set of objects, in the target surveillance image;
segmenting, by the processor, the target surveillance image to extract the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed;
determining, by the processor, a position, of the candidate object, with respect to the elevated surface, based on centre of mass of the candidate object and overlap between the pixels corresponding to the candidate object and the elevated surface; and
transmitting, by the processor, a primary alert to a user device (103) when the position, of the candidate object, is in a non-equilibrium state.
7. The method (400) as claimed in claim 6, wherein the centre of mass of the candidate object is calculated by using a plurality of image moments of the surveillance images captured by the surveillance system, wherein the centre of mass calculation comprises steps of:
converting the target surveillance image to a grayscale image;
binarization of the grayscale image to convert the pixels into black and white; and
calculating the image moments of the black and white image by using weighted average of image pixel intensities to determine the centre of mass.
8. The method (400) as claimed in claim 6, wherein the segmentation of the target surveillance image is carried out by using a Mask Regional-Convolutional Neural Network (Mask R-CNN) technique.
9. The method (400) as claimed in claim 6, wherein the method further comprises steps of,
receiving proximity data from the candidate object, wherein the proximity data corresponds to the data associated with a living or non-living object that is approaching the candidate object, wherein the candidate object is equipped with a proximity sensor for collecting the proximity data; and
transmitting a secondary alert to the user device (103), based on the proximity data received from the candidate object.
10. The method (400) as claimed in claim 6, wherein the non-equilibrium state is determined based on an angle between a perpendicular from the centre of mass of the candidate object to the elevated surface and a line joining the centre of mass of the candidate object and the centre point of the overlapping pixels between the candidate object and the elevated surface.
Dated this 20th Day of May 2020
Priyank Gupta
Agent for Applicant
IN-PA-1454
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
TITLE OF INVENTION:
A SYSTEM FOR ALERTING A USER TO PREVENT FREE FALL OF AN OBJECT
APPLICANT:
Zensar Technologies Limited, an Indian Entity,
having address as:
ZENSAR KNOWLEDGE PARK,
PLOT # 4, MIDC, KHARADI, OFF
NAGAR ROAD, PUNE-411014,
MAHARASHTRA, INDIA
The following specification describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present subject matter described herein, in general, relates to a system and method for preventing free fall of an object by using sensor and computer vision ensemble techniques. In particular, the present subject matter relates to a computer vision technique-based system and method for determining the equilibrium of the object, to evaluate the susceptibility of the object to free fall risks.
BACKGROUND
Now-a-days, human life to a large extent is dependent on man-made items such as laptops, watches, smartphones, etc. Life without these smart devices seems unimaginable. On the other hand, to avail the benefits of these devices, one needs to protect them from any physical damage.
Insurance coverage of household goods/objects is not mandatory, yet most people would find it difficult to re-purchase or replace these precious and expensive household goods if they are stolen or damaged due to a free fall. It therefore becomes indispensable to insure these household goods along with one's home insurance. Insurance companies have come up with many customized plans for home product insurance, as these goods are often susceptible to losses. Each item must be individually insured, which increases the cost of premiums. It also means that insurance companies have to pay a hefty amount as compensation when those household goods are damaged by unforeseen events. In events such as a free fall of the household goods from an elevation, the electronic goods can be permanently damaged. Further, the household goods are also susceptible to the potential risk of being damaged unknowingly when exposed to house pets, animals, or even adults. House pets can play with such household goods and render them useless. Hence, the household goods need to be protected from any such potential risks, particularly in scenarios wherein the household goods are under insurance coverage.
In the current scenario, sensor-based fall detection technologies are being used for detecting falls of wearable devices. These wearable devices are essentially worn by elderly people. Hence, the sensor-based technology is used for human fall detection, wherein ambulance services are notified in case of a fall to provide the necessary medical treatment.
Further, existing vision-based technologies also focus on human fall detection by using deep learning models. Hence, these technologies elaborate on the after-effects of a fall and notify the concerned personnel when the fall has already occurred, by continuously analysing and monitoring the frames captured by the wearable device.
However, there are no means to detect a potential fall event of a household object before the fall occurs.
SUMMARY
This summary is provided to introduce concepts related to a system and method for preventing free fall of an object. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
In one embodiment, a system for alerting a user to prevent free fall of an object that is placed on an elevated surface is disclosed. The system comprises a processor and a memory that is coupled to the processor. The processor is configured to execute programmed instructions stored in the memory for performing the task of alerting a user to prevent free fall of the object that is placed on an elevated surface. The system may initially receive a set of surveillance images from a surveillance system that is installed at a geographical location. The surveillance system is configured to monitor a set of objects at the geographical location. The system may process a target surveillance image, from the set of received surveillance images, by using a computer vision processing technique. The computer vision technique enables detection of a candidate object from the set of objects that are present in the target surveillance image. The system may segment the target surveillance image for extracting the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed. The system may determine a position of the candidate object with respect to the elevated surface based on the centre of mass of the candidate object and the overlap between the pixels corresponding to the candidate object and the elevated surface. Lastly, the system may transmit a primary alert to a user device when the position of the candidate object evaluates to a non-equilibrium state.
In another embodiment, a method for alerting a user to prevent free fall of an object that is placed on an elevated surface is disclosed. The method may comprise one or more steps for receiving a set of surveillance images from a surveillance system that is installed at a geographical location. The surveillance system may be configured to monitor a set of objects at the geographical location. The method may comprise one or more steps for processing a target surveillance image, from the set of received surveillance images by using a computer vision processing technique. The computer vision technique is used for detecting a candidate object, from the set of objects that are present in the target surveillance image. The method may comprise one or more steps for segmenting the target surveillance image for extracting the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed. The method may comprise one or more steps for determining a position of the candidate object with respect to the elevated surface based on centre of mass of the candidate object and overlap between the pixels corresponding to the candidate object and the elevated surface. Lastly, the method may comprise one or more steps for transmitting a primary alert to a user device when the position of the candidate object evaluates to a non-equilibrium state.
BRIEF DESCRIPTION OF DRAWINGS
The detailed description is described with reference to the accompanying Figures. In the Figures, the left-most digit(s) of a reference number identifies the Figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
Figure 1 illustrates an implementation (100) of a system (101) for preventing free fall of an object by using sensor and computer vision ensemble techniques, in accordance with an embodiment of the present subject matter.
Figure 2 illustrates a procedural flow followed to achieve the automated risk surveillance of the object to mitigate fall risk, in accordance with an embodiment of the present disclosure.
Figure 3A illustrates the non-equilibrium state of the object that is susceptible to a free fall.
Figure 3B illustrates the equilibrium state of the object that is not so susceptible to a free fall.
Figure 4 illustrates a method (400) for alerting a user to prevent free fall of an object, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” or “in an embodiment” in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Referring to Figure 1, a network implementation (100) of a system (101) for preventing free fall of an object by using sensor and computer vision ensemble techniques is illustrated, in accordance with an embodiment of the present subject matter.
In an embodiment, the system (101) for preventing free fall of an object (hereinafter interchangeably referred to as the system (101)) may be connected to a user device (103) over a network (102). It may be understood that the system (101) may be accessed by multiple users through one or more user devices, collectively referred to as the user device (103). The user device (103) may be any electronic device, communication device, image capturing device, machine, software, automated computer program, robot, or a combination thereof.
In an embodiment, though the present subject matter is explained considering that the system (101) is implemented (as a system (101) for preventing free fall of an object) on a server, it may be understood that the system (101) may also be implemented in a variety of user devices, such as, but not limited to, a portable computer, a personal digital assistant, a handheld device, a mobile phone, a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a mobile device, and the like. In one embodiment, the system (101) may be implemented in a cloud-computing environment. In an embodiment, the network (102) may be a wireless network such as Bluetooth, Wi-Fi, 3G, 4G/LTE and the like, a wired network, or a combination thereof. The network (102) can be accessed by the user device (103) using wired or wireless network connectivity means including updated communications technology.
In one embodiment, the network (102) can be implemented as one of the different types of networks, cellular communication network, local area network (LAN), wide area network (WAN), the internet, and the like. The network (102) may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network (102) may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
Further, referring to Figure 1, various components of the system (101) for preventing free fall of an object are illustrated, in accordance with an embodiment of the present subject matter. As shown, the system (101) may include at least one processor (201), an input/output interface (203), a memory (205), programmed instructions (207) and data (209). In one embodiment, the at least one processor (201) is configured to fetch and execute computer-readable instructions stored in the memory (205).
In one embodiment, the I/O interface (203) may be implemented as a mobile application or a web-based application and may further include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface (203) may allow the system (101) to interact with the user devices (103). Further, the I/O interface (203) may enable the user device (103) to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface (203) can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface (203) may include one or more ports for connecting to another server. In an exemplary embodiment, the I/O interface (203) is an interaction platform which may provide a connection between users and the system (101).
In an implementation, the memory (205) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and memory cards. The memory (205) may include programmed instructions (207) and data (209).
In one embodiment, the programmed instructions (207) may include routines, programmes, objects, components, data structures, etc. which perform particular tasks or functions, or implement particular abstract data types. The data (209) may comprise a data repository (211), a database (213) and other data (215). In one embodiment, the database (213) may comprise customizable image-segment data, pre-trained and customized object data, and segmentation data associated with a plurality of images. The object and segment information may comprise at least domain-specific data, a plurality of features, and text and optical character recognition (OCR) data related to the plurality of images. The other data (215), amongst other things, serves as a repository for storing data processed, received, and generated by one or more components and programmed instructions.
The aforementioned computing devices may support communication over one or more types of networks in accordance with the described embodiments. For example, some computing devices and networks may support communications over a Wide Area Network (WAN), the Internet, a telephone network (e.g., analog, digital, POTS, PSTN, ISDN, xDSL), a mobile telephone network (e.g., CDMA, GSM, NDAC, TDMA, E-TDMA, NAMPS, WCDMA, CDMA-2000, UMTS, 3G, 4G), a radio network, a television network, a cable network, an optical network (e.g., PON), a satellite network (e.g., VSAT), a packet-switched network, a circuit-switched network, a public network, a private network, and/or other wired or wireless communications network configured to carry data. Computing devices and networks also may support wireless wide area network (WWAN) communications services including Internet access such as EV-DO, EV-DV, CDMA/1×RTT, GSM/GPRS, EDGE, HSDPA, HSUPA, and others.
The aforementioned computing devices and networks may support wireless local area network (WLAN) and/or wireless metropolitan area network (WMAN) data communications functionality in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards, protocols, and variants such as IEEE 802.11 (“WiFi”), IEEE 802.16 (“WiMAX”), IEEE 802.20x (“Mobile-Fi”), and others. Computing devices and networks also may support short range communication such as a wireless personal area network (WPAN) communication, Bluetooth® data communication, infrared (IR) communication, near-field communication, electromagnetic induction (EMI) communication, passive or active RFID communication, micro-impulse radar (MIR), ultra-wide band (UWB) communication, automatic identification and data capture (AIDC) communication, and others.
The working of the system (101) in facilitating prevention of free fall of an object will now be described in detail referring to Figures 1, 2, 3A, 3B and 4 as below:
In one embodiment, the processor (201) may be configured to receive a set of surveillance images from a surveillance system that is installed at a geographical location. The surveillance system may be configured to monitor a set of objects at the geographical location. In one example, the surveillance system may comprise one or more cameras installed at the geographical location such as a house or office. The one or more cameras may capture the set of surveillance images in real-time. Each surveillance image may correspond to one or more objects placed at the geographical location. Furthermore, the surveillance system may be configured to transmit the set of surveillance images to the system (101) in real-time.
In one embodiment, the processor (201) at the system (101) may be configured to process a target surveillance image, from the set of received surveillance images, by using a computer vision processing technique. The computer vision technique may be used for detecting a candidate object from a set of objects that are present in the target surveillance image. The candidate object may be determined based on historical data associated with the set of objects.
The processor (201) may be further configured to segment the target surveillance image for extracting the pixels corresponding to the candidate object and an elevated surface on which the candidate object is placed. For the purpose of segmentation, a dynamic image processing algorithm may be used. The image processing algorithm may convert the target surveillance image into a grayscale or black and white image such that the candidate object and the elevated surface are clearly identifiable.
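The grayscale conversion and binarization mentioned above can be illustrated with a minimal NumPy sketch. The fixed threshold and function names here are illustrative assumptions; the specification refers to a dynamic image processing algorithm, which could, for example, select the threshold per image:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale using the
    standard ITU-R BT.601 luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, threshold=128.0):
    """Fixed-threshold binarization into a black (0) and white (255)
    image; a dynamic algorithm could instead choose the threshold per
    image (e.g. by Otsu's method)."""
    return np.where(gray >= threshold, 255, 0)
```

In the binarized image, the white pixels of the candidate object and of the elevated surface can then be compared directly to find their overlap.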
The processor (201) may be further configured to determine a position of the candidate object with respect to the elevated surface based on centre of mass of the candidate object and overlap between the pixels corresponding to the candidate object and the elevated surface.
The processor (201) may be further configured to transmit a primary alert to a user device when the position of the candidate object evaluates to a non-equilibrium state. Hence, the user may be notified at the onset of the free fall of the candidate object. Thereby, the user may take appropriate action based on the alert received on the user device (103) before the candidate object undergoes a free fall.
Now, referring to Figures 2, 3A and 3B, the procedural flow followed to achieve the automated risk surveillance of the object to mitigate fall risk is disclosed in further detail.
In one embodiment, the images from the surveillance system (202) are received by the system (101). The system (101) further detects the insured object by using a vision-based object detection technology. In situations where no object is detected, the system (101) takes no action.
Further, on candidate object identification, the system (101) checks for the equilibrium position of the object and segments the object by using a Mask Regional-Convolutional Neural Network (Mask R-CNN) technique to extract the shape of the object. Here, the equilibrium position of the detected object may be decided based on the centre of mass of the object and the support area on which the object is placed. Further, the centre of mass may be calculated by using a plurality of image moments of the surveillance images captured by the surveillance cameras. The following steps may be involved in the calculation of the centre of mass of the detected object:
- Each surveillance image may be taken one at a time and the surveillance image may be converted to a grayscale image.
- Further, binarization of the grayscale image may be performed to form a black and white image.
- Further to binarization, image moments of the black and white image may be calculated by using weighted average of image pixel intensities to determine the centre of mass.
In one embodiment, the centre of mass of the candidate object may be calculated using equation (1) and (2) as disclosed below:
Mij = Σx Σy x^i y^j I(x, y) ………………….. (1)
Cx = M10 / M00, Cy = M01 / M00 ……………………….. (2)
wherein Cx and Cy are the x and y coordinates of the centroid, I(x, y) is the pixel intensity at coordinates (x, y), and Mij is the image moment of order (i, j).
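The moment equations (1) and (2) can be sketched in a few lines of NumPy. This is an illustration only; the function name and the toy image are not part of the specification:

```python
import numpy as np

def centre_of_mass(binary):
    """Centroid (Cx, Cy) from raw image moments per equation (1):
    Mij = sum_x sum_y x^i * y^j * I(x, y)."""
    ys, xs = np.indices(binary.shape)   # pixel coordinate grids
    intensity = binary.astype(float)    # I(x, y)
    m00 = intensity.sum()               # M00: total "mass"
    m10 = (xs * intensity).sum()        # M10: first moment in x
    m01 = (ys * intensity).sum()        # M01: first moment in y
    return m10 / m00, m01 / m00         # equation (2)

# A 3 x 3 white square on a black background:
img = np.zeros((6, 6))
img[1:4, 2:5] = 255
cx, cy = centre_of_mass(img)  # centroid lands at the middle of the square
```

In practice an off-the-shelf routine such as OpenCV's moments computation could be used instead of the hand-rolled sums above.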
Further, the structure of the object may be captured by using segmentation. Similarly, the surface area may be segmented after segmenting the object. The risk associated with the object on the verge of falling from the edge may be detected based on the overlapping area between the surface on which the object may be lying and the base area of the object (i.e. the identification of pixels common to the segmentations of both the object and the surface area).
Now referring to Figures 3A & 3B, the calculation of the equilibrium position of the object is briefly disclosed. Two lines, AB and AC, may be seen on the image. The following notations are given to points A, B and C:
A - Centre of mass of object
B - Point on surface such that line AB is perpendicular to surface
C - Centre of overlapping pixels.
In Figure 3A, the candidate object is not in an equilibrium state. Here, line AB denotes a perpendicular from the calculated centre of mass of the object to the surface in contact. Further, line AC joins the centre of mass of the object and the centre point of the surface in contact. The angle (alpha) between these two lines is calculated. Based on this angle, the algorithm decides whether the object is in equilibrium or not.
When the object is not in equilibrium, alerts are sent to the user's mobile device. In one embodiment, the alarm may be sent in the form of a 1-minute ringtone or an email alert which describes the object that is at risk. Further, if the object is in equilibrium as seen in Figure 3B, the angle alpha made between the two lines AB & AC may be small, and therefore the system may decide not to send any alert, since the object is in equilibrium. Hence, alerts are not sent to the user on detecting that the object is in equilibrium, and the system (101) takes no further action.
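The angle test between lines AB and AC can be sketched as below. The threshold value is a hypothetical tuning parameter, not one fixed by the disclosure:

```python
import math

ALPHA_THRESHOLD_DEG = 15.0  # hypothetical tuning value, not from the disclosure

def tilt_angle(a, b, c):
    """Angle alpha (degrees) between line AB (perpendicular from the
    centre of mass A to the surface at B) and line AC (from A to the
    centre C of the overlapping pixels)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    acx, acy = c[0] - a[0], c[1] - a[1]
    cos_a = (abx * acx + aby * acy) / (math.hypot(abx, aby) * math.hypot(acx, acy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def is_in_equilibrium(a, b, c):
    """Treat the object as stable while alpha stays at or below the threshold."""
    return tilt_angle(a, b, c) <= ALPHA_THRESHOLD_DEG
```

For example, with the centre of mass A = (5, 0), the foot of the perpendicular B = (5, 10), and the overlap centre C almost directly below A, alpha is near zero and no alert is sent; as C drifts toward an edge of the support, alpha grows past the threshold and the primary alert is triggered.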
In another embodiment, the candidate objects are provided with proximity sensors (204) for detecting other objects, animals, or persons in their vicinity. The proximity sensors (204) continuously monitor and send the proximity data to the system (101). This data may be used by the classification algorithm of the system (101) to predict whether any other object, animal, or person is near the candidate object. Hence, the insured candidate objects monitor the other living or non-living objects that may come close to them and pose a potential risk to the candidate object.
The processor (201) may therefore be configured to receive proximity data from the candidate insured object. The proximity data may correspond to the data associated with a living or non-living object, such as an adult, a pet, an animal, etc., that is approaching the insured object. In another embodiment, the processor (201) may be configured to further transmit a secondary alert to the user device (103) if the proximity data discloses that a potentially risky object, such as a pet, may be approaching the candidate object. Therefore, the candidate object may be secured from any such potential threat.
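As an illustration only, the secondary-alert decision might be sketched as follows; the distance threshold, function names, and message format are assumptions for the example, not part of the disclosure:

```python
PROXIMITY_THRESHOLD_CM = 30.0  # hypothetical cutoff, not fixed by the disclosure

def breach_detected(readings_cm, threshold=PROXIMITY_THRESHOLD_CM):
    """True when any proximity reading shows a living or non-living
    object within the threshold distance of the candidate object."""
    return any(d <= threshold for d in readings_cm)

def secondary_alert(object_name, readings_cm):
    """Build the secondary alert message, or None when nothing is near."""
    if breach_detected(readings_cm):
        return "Secondary alert: movement detected near " + object_name
    return None
```

A real deployment would feed the classifier's per-frame proximity readings into such a check before notifying the user device (103).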
Further, all such risky events may be logged into the application's database. The events may include the times when the insured product might have been at risk and the alerts that were sent to the owner. The logging may also include the action taken by the user or any other member within the given amount of time. These logs may then be used by insurance companies (206) to adjust the next premiums for the given insured object owned by the user.
Now, referring to Figure 4, a method (400) for alerting a user to prevent free fall of an object that is placed on an elevated surface is illustrated in accordance with the embodiments of the present disclosure. The order in which the method (400) is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method (400) or alternate methods. Furthermore, the method (400) can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for the ease of explanation, in the embodiments described below, the method (400) may be implemented in the above-described system (101).
At step (401), the processor (201) may be configured to receive a set of surveillance images from the surveillance system at a geographic location. In another embodiment, the surveillance system is configured to monitor a set of objects at the geographic location.
At step (403), the processor (201) may be configured to process a target surveillance image from the set of surveillance images received from the surveillance system (202) by using a vision processing technique. In another embodiment, the vision processing technique may be used for detecting a candidate object, from the set of objects present in the target surveillance image.
At step (405), the processor (201) may be configured to segment the target surveillance image to extract the pixels that correspond to both the insured object and the elevated surface on which the insured object is placed.
At step (407), the processor (201) may be configured to determine a position of the insured object with respect to the elevated surface. In another embodiment, the determination may be based on centre of mass of the insured object and overlap between the pixels corresponding to the insured object and the elevated surface.
At step (409), the processor (201) may be configured to transmit a primary alert to a user device on determining that the position of the insured object is in a non-equilibrium state.
The system (101) for preventing free fall of the objects as described in the present disclosure may provide multiple advantages, including but not limited to:
identification of onset of free fall of any household object lying on any elevated surface.
identification of equilibrium position of any household object only by using Computer Vision technology.
additional security to the objects potentially susceptible to free fall by using sensor capabilities, wherein the proximity sensors are utilized to detect any unnatural movements around the object and to overcome occlusions.
The embodiments, examples and alternatives of the preceding paragraphs or the description and drawings, including any of their various aspects or respective individual features, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.
Although system (101) implementations for alerting a user to prevent free fall of an object that is placed on an elevated surface and the method (400) thereof have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of system (101) implementations for alerting a user to prevent free fall of an object that is placed on an elevated surface and the method (400) thereof.
The foregoing description shall be interpreted as illustrative and not in any limiting sense. A person of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure. The appended claims should be referred to in order to determine the true scope and content of this disclosure.
| # | Name | Date |
|---|---|---|
| 1 | 202021021207-STATEMENT OF UNDERTAKING (FORM 3) [20-05-2020(online)].pdf | 2020-05-20 |
| 2 | 202021021207-REQUEST FOR EXAMINATION (FORM-18) [20-05-2020(online)].pdf | 2020-05-20 |
| 3 | 202021021207-POWER OF AUTHORITY [20-05-2020(online)].pdf | 2020-05-20 |
| 4 | 202021021207-FORM 18 [20-05-2020(online)].pdf | 2020-05-20 |
| 5 | 202021021207-FORM 1 [20-05-2020(online)].pdf | 2020-05-20 |
| 6 | 202021021207-FIGURE OF ABSTRACT [20-05-2020(online)].pdf | 2020-05-20 |
| 7 | 202021021207-DRAWINGS [20-05-2020(online)].pdf | 2020-05-20 |
| 8 | 202021021207-COMPLETE SPECIFICATION [20-05-2020(online)].pdf | 2020-05-20 |
| 9 | 202021021207-Proof of Right [17-07-2020(online)].pdf | 2020-07-17 |
| 10 | Abstract1.jpg | 2020-08-06 |
| 11 | 202021021207-FER.pdf | 2021-11-30 |
| 12 | 202021021207-OTHERS [18-05-2022(online)].pdf | 2022-05-18 |
| 13 | 202021021207-FER_SER_REPLY [18-05-2022(online)].pdf | 2022-05-18 |
| 14 | 202021021207-CLAIMS [18-05-2022(online)].pdf | 2022-05-18 |
| 15 | 202021021207-US(14)-HearingNotice-(HearingDate-19-08-2024).pdf | 2024-07-25 |
| 16 | 202021021207-Correspondence to notify the Controller [30-07-2024(online)].pdf | 2024-07-30 |
| 1 | SearchStrategyMatrixE_30-11-2021.pdf | |