ABSTRACT
VIDEO SURVEILLANCE SYSTEM AND A METHOD THEREOF
Disclosed is a method (300) and a system (100) for video surveillance comprising a plurality of video cameras (101), a recording module (102), a server (103) with a plurality of AI models (104), user interfaces (106), personal communication devices (107), and an alert notification module (105). The video cameras (101) are deployed at multiple locations on the premises and capture events occurring at the corresponding locations. The live video streams are sent to the server (103) via the recording module (102). The AI models (104) identify occurrences of abnormal events and/or recognize a person, object, or vehicle on the premises, and communicate the same to the alert notification module (105). The system (100) and the method (300) enable the users to receive alerts in real time and act proactively for risk mitigation.
Ref. Fig. 1
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2005
COMPLETE SPECIFICATION
(See section 10, rule 13)
1. TITLE OF THE INVENTION:
VIDEO SURVEILLANCE SYSTEM AND A METHOD THEREOF
2. APPLICANT
(a) Name: Pivotchain Solution Technologies Pvt. Ltd.
(b) Nationality: An Indian Company
(c) Address:
Panchshil Chambers, 3rd floor, Office No 303, Near Ganpati Chowk, Viman Nagar, Pune, Maharashtra – 411014 India
3. PREAMBLE TO THE DESCRIPTION
PROVISIONAL
The following specification describes the invention.
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The present invention relates to surveillance systems and more particularly, the present invention relates to an automated video surveillance system and a method thereof, that leverages artificial intelligence (AI) techniques for video analysis in real-time.
BACKGROUND ART
Security systems for residential, commercial, or industrial facilities deploy video cameras at various locations. These cameras capture events and live-stream them to user terminals, where security personnel or operators have to observe the streams continuously. This approach requires constant monitoring of the user terminal to identify potentially suspicious activities or abnormal events. However, it may lead to fatigue in operators or security personnel, which might result in critical events being missed entirely or responded to with delay. Moreover, as the number of security cameras increases, the operators' workload grows and resources are allocated inefficiently, which might allow a situation to escalate.
To overcome the limitations of manual surveillance, attempts have been made to automate the process. Reference may be made to a related art WO2020235819A1 which discloses an image-based real-time intrusion detection method and surveillance camera using artificial intelligence. The invention discloses sampling of a plurality of frames. An artificial neural network is used to match an object in the frame to a target object. A probability of an intrusion is obtained from the generated travel trajectory using another artificial neural network. As the disclosure evaluates the likelihood of intrusion, the primary focus thereof is probability calculation rather than real-time detection.
Reference may be made to another related art, KR20180134114A, which discloses a real-time video surveillance system and method. The invention deploys a CCTV-type monitoring system based on a wide-angle photographing camera. The system comprises a remote-controlled device connected to master cameras, and slave cameras, each of which is connected to any one of the master cameras and acquires a concentrated monitoring image for a part of the monitoring area. The disclosure focuses on real-time processing and tracking but does not teach face recognition.
Another related art US10289824B2 discloses a security system and facility access control that includes a camera for capturing images of individuals attempting to enter or exit restricted areas. The document teaches the use of a deep learning model for multi-task learning, including liveliness detection and face recognition. It evaluates various physical spoofing materials to prevent face spoofing, ensuring the liveliness of individuals and their identity for secure facility access.
The prior art teaches object detection and face recognition. However, it does not disclose vehicle number plate and/or vehicle color recognition along with identification of potential harmful events. Accordingly, there exists a need for an automated system that can monitor particular premises, and identify potentially abnormal events of multiple types, offering a more versatile, comprehensive, and cohesive real-time surveillance solution that sends alerts to users in real-time for timely action. Moreover, there exists a need for a method that provides a proactive approach to anomaly detection.
OBJECT OF THE INVENTION
An object of the present invention is to provide a video surveillance system.
Another object of the present invention is to provide a video surveillance system that monitors particular premises by deploying a plurality of cameras that send real-time video streams to users for viewing purposes and to trained AI models for further analysis.
Another object of the present invention is to provide a video surveillance system that leverages an AI model to process the live streams received from the plurality of cameras to detect an abnormal event.
Still another object of the present invention is to provide a video surveillance method that leverages video analytics tools using artificial intelligence methodologies.
SUMMARY OF THE INVENTION
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
The present disclosure generally relates to video surveillance systems. More particularly, the present disclosure relates to an automated video surveillance system and a method thereof, that leverages artificial intelligence (AI) techniques for video analysis in real-time.
In an aspect, the video surveillance system includes a plurality of video cameras deployed at one or more locations in a premises. The video surveillance system includes a recording module(s) communicatively coupled to the plurality of cameras and configured to receive and store the live streams from a plurality of cameras. The video surveillance system includes a server communicatively coupled to the recording module(s) via a LAN connection to receive live streams therefrom and configured with a plurality of AI models. The video surveillance system includes the plurality of AI models configured to analyze the live streams and identify an occurrence of an abnormal event(s), and/or recognition of a person, object, or vehicle in the premises. The video surveillance system includes a plurality of user interfaces communicatively coupled to the plurality of AI models. The video surveillance system includes an alert notification module communicatively coupled to the plurality of AI models and a plurality of personal communication devices communicatively coupled to the alert notification module.
In an aspect, a method of video surveillance performed by the video surveillance system includes capturing video streams by a plurality of cameras. The method includes recording the video streams by a recording module(s). The method includes transmitting the video streams to a server. The method includes identifying patterns in the received video streams indicative of abnormal events and/or recognition of a person, object, or vehicle by a plurality of AI models. The method includes generating alerts in response to the detection of abnormal events, and/or recognition of a person, object, or vehicle on the premises. The method includes communicating the alerts to an alert notification module. The method includes communicating the alerts to the users via emails, messages, and smartphone application, to a plurality of personal communication devices, by the alert notification module. The method includes communicating the recorded information by the server to a plurality of modules of the user interface.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects and advantages of the present invention will become apparent when the disclosure is read in conjunction with the following figures, wherein
Figure 1 illustrates the block diagram of a video surveillance system (100) in accordance with an embodiment of the present invention,
Figure 2 represents a block diagram of a user interface (106) in the system (100) in accordance with an embodiment of the present invention,
Figure 3 represents a flow diagram of a method of video surveillance (300) in accordance with an embodiment of the present invention.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present invention. Similarly, it will be appreciated that any flowcharts, flow diagrams, and the like represent various processes that may be substantially represented in computer-readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments herein provide a video surveillance system (hereinafter referred to as “system (100)”) and a method (300) thereof, configured to monitor premises by deploying one or more video cameras at one or more locations therein. Live streams obtained from the cameras are analyzed by artificial intelligence techniques. Alerts are generated upon anomaly detection and communicated to the users to further take risk mitigation actions.
Throughout this application, with respect to all reasonable derivatives of such terms, and unless otherwise specified (and/or unless the particular context clearly dictates otherwise), each usage of:
“a” or “an” is meant to read as “at least one”,
“the” is meant to be read as “the at least one.”
References in the specification to “one embodiment” or “an embodiment” mean that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Hereinafter, embodiments will be described in detail. For clarity of the description, known constructions and functions will be omitted.
Parts of the description may be presented in terms of operations performed by at least one processor, electrical/electronic circuit, a computer system, using terms such as data, state, link, fault, packet, and the like, consistent with the manner commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. As is well understood by those skilled in the art, these quantities take the form of data stored/transferred in the form of non-transitory, computer-readable electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through mechanical and electrical components of the computer system; and the term computer system includes general purpose as well as special purpose data processing machines, switches, and the like, that are standalone, adjunct or embedded. For instance, some embodiments may be implemented by a processing system that executes program instructions so as to cause the processing system to perform operations involved in one or more of the methods described herein. The program instructions may be computer-readable code, such as compiled or non-compiled program logic and/or machine code, stored in a data storage that takes the form of a non-transitory computer-readable medium, such as a magnetic, optical, and/or flash data storage medium. Moreover, such processing systems and/or data storage may be implemented using a single computer system or may be distributed across multiple computer systems (e.g., servers) that are communicatively linked through a network to allow the computer systems to operate in a coordinated manner.
The present invention is illustrated with reference to the accompanying drawings, throughout which reference numbers indicate corresponding parts in the various figures. These reference numbers are shown in brackets in the following description and in the table below.
| Ref No. | Component | Ref No. | Component |
|---|---|---|---|
| 100 | Video surveillance system | 201 | User module |
| 101 | Camera | 202 | Location module |
| 102 | Recording module | 203 | Camera module |
| 103 | Server | 204 | Dashboard |
| 104 | AI models | 205 | Analytics module |
| 105 | Alert notification module | 206 | Live stream module |
| 106 | User interface | 207 | Map module |
| 107 | Personal communication device | 300 | Method of video surveillance |
In one of the exemplary embodiments of the present invention, the system comprises a plurality of video cameras, a recording module(s), a plurality of AI models, one or more user interfaces, and an alert notification module.
In one of the exemplary embodiments of the present invention, the system is configured to provide real-time monitoring of a given premises by deploying one or more video cameras at one or more predetermined locations.
In one of the exemplary embodiments of the present invention, the system is configured to allow one or more users to create/delete their respective accounts or change their log-in credentials via a personal communication device.
In one of the exemplary embodiments of the present invention, the system is configured to allow new users to create their accounts in the system and authenticated users (hereinafter referred to as “users”) to view their account details.
In one of the exemplary embodiments of the present invention, the users may comprise residents, staff members of the facility, security personnel, administrators, support staff, and supervisors.
In one of the exemplary embodiments of the present invention, the system is configured to allow the users to update the location details such as the addition or deletion of a particular location under surveillance. Moreover, the users may be allowed to add the details of a newly deployed camera.
In one of the exemplary embodiments of the present invention, the system is configured to present, to the user, a live stream of recordings from all the cameras on a user interface.
In one of the exemplary embodiments of the present invention, the system is configured to generate alerts in case of abnormal event(s) and communicate, to the user, the occurrence of such event(s) via an alert notification module.
In one of the exemplary embodiments of the present invention, the abnormal events may comprise occurrences of fire, unauthorized entry of a person or a vehicle, incidents of fights, vandalism, robbery, riot, perimeter breach, loitering, explosion, presence of a weapon, camera tampering, overcrowding, unattended station, suspicious activities such as crawling, wall jumping/climbing, or any combination thereof.
In one of the exemplary embodiments of the present invention, the system is configured to generate reports in the form of graphs, charts, or tables to display the statistics of alerts and abnormal events.
In one of the exemplary embodiments of the present invention, the system is configured to allow the users to download the historical data and perform playback of earlier recordings.
In an implementation of the preferred embodiment of the present invention, the operation of the system (100) is explained by referring to Figures 1 and 2. The system (100) comprises a plurality of video cameras (101-a, 101-b, …, 101-n, collectively referred to as "cameras (101)") deployed at one or more locations in a premises. The plurality of cameras (101) is communicatively coupled to a recording module (102). For the addition of every camera (101), the system (100) is configured to accept the exact location of the camera (101). The recording module(s) (102) may be a digital video recorder (DVR) or a network video recorder (NVR). The recording module(s) (102) is configured to receive and store the live streams from a plurality of cameras (101). The recording module(s) (102) is communicatively coupled to a server (103) via a LAN connection and is configured to transfer the received live streams thereto. The server (103) is configured with a plurality of AI models (104-a, 104-b, 104-c, 104-d, 104-e, collectively referred to as "AI models (104)"), each of which is trained for the detection of an abnormal event, person, vehicle, or object. Each of the plurality of AI models (104) is configured to analyze the live streams and identify an occurrence of an abnormal event(s), if any. Each of the plurality of AI models (104) is trained using a proprietary dataset considering a plurality of parameters, thereby enhancing the capacity thereof to discern intricate patterns, thus leading to precise identification of an abnormal event.
The AI model (104-a), that is configured for person identification and tracking has convolution layers in the range of 8-10 to extract features from face images and 2-3 fully connected layers. The AI model (104-a) is designed with activation functions to add nonlinearity to capture complex patterns in the data. Typically, at least one activation function is introduced after each convolutional and dense layer. Moreover, the AI model (104-a) is configured with 2-3 pooling operations to reduce spatial resolution while retaining important features. Additionally, the AI model (104-a) is configured with 1-2 normalization and regularization operations, at least one loss function, and at least one matching process.
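By way of a non-limiting illustration, the matching process of the model (104-a) may be sketched as a cosine-similarity comparison between a probe face embedding and a gallery of enrolled embeddings. The function names, the two-dimensional embeddings in the example, and the 0.6 acceptance threshold are illustrative assumptions only and do not form part of the claimed model:

```python
import numpy as np

def match_face(probe, gallery, threshold=0.6):
    """Return the identity whose enrolled embedding is most similar to
    the probe embedding, or None when no similarity reaches the
    threshold (an assumed value, not taken from the specification)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

In practice the embeddings would be the outputs of the convolutional and fully connected layers described above; the comparison step itself remains the same.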
The AI model (104-b) is configured for event recognition by detecting and locating objects within frames coming from the recording module (102) by generating bounding boxes and classifying detected objects. The model (104-b) is configured with 10-12 convolutional layers, 1-2 neck layers, and 2-3 dense layers. The convolutional layers are configured for extracting object features. The neck layers are configured for combining features at different scales to detect objects of varying sizes. The dense layers are configured to output bounding boxes, confidence scores, and class labels. Moreover, the model (104-b) is designed with 2-3 downsampling operations, at least one activation function after each layer, at least one loss function, and 1-2 dropout operations. Additionally, the model (104-b) is designed with post-processing to eliminate redundant bounding boxes based on overlap thresholds.
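The post-processing step referred to above is commonly realized as non-maximum suppression over intersection-over-union (IoU). The sketch below, with an assumed overlap threshold of 0.5, is illustrative only and not a definitive implementation of the model (104-b):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def non_max_suppression(detections, overlap_threshold=0.5):
    """Keep only the highest-confidence box among overlapping
    detections. Each detection is a (box, confidence) pair."""
    kept = []
    for box, conf in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) < overlap_threshold for k in kept):
            kept.append((box, conf))
    return kept
```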
The AI model (104-c) is configured for recognizing and extracting license plate numbers from vehicle images for automated identification and tracking of vehicles. The model (104-c) is configured with 8-10 convolutional layers, 1-2 segmentation layers, 2-3 optical character recognition layers, and 2-3 dense layers. Moreover, the model (104-c) is designed with 2-3 downsampling operations, at least one activation function after each layer, 1-2 dropout operations, at least one loss function, and at least one post-processing step.
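One conceivable post-processing step for the model (104-c) is normalization of the raw optical character recognition output before any lookup or tracking. The sketch below assumes, purely for illustration, an Indian-style registration pattern and two common OCR character confusions; real deployments would use region-specific rules:

```python
import re

def normalize_plate(raw_text):
    """Clean raw OCR output into a candidate plate string, or return
    None if no plausible plate can be recovered. The pattern assumes
    an illustrative format: two letters, two digits, 1-2 letters,
    four digits (e.g. Indian-style registrations)."""
    text = re.sub(r"[^A-Z0-9]", "", raw_text.upper())
    pattern = re.compile(r"^[A-Z]{2}\d{2}[A-Z]{1,2}\d{4}$")
    # Try common OCR confusions: letter O vs digit 0, letter I vs digit 1.
    candidates = {text,
                  text.replace("O", "0"),
                  text.replace("0", "O"),
                  text.replace("I", "1")}
    for candidate in candidates:
        if pattern.match(candidate):
            return candidate
    return None
```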
The AI model (104-d) is configured for analyzing video data frame by frame to detect and classify actions, enabling an understanding of activities within video content and facilitating a search for a particular video in the received recorded streams. The model (104-d) is configured with 8-10 convolutional layers, 2-4 transformer layers, 2-3 temporal layers, 1-2 encoding layers for converting input text in the form of queries or descriptions into embeddings, 1-2 fusion layers for combining visual and textual features for comprehensive understanding, and 2-3 dense layers. Moreover, the model (104-d) has at least one activation function after each layer, 1-2 dropout operations, and at least one loss function.
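Frame-by-frame action labels are typically noisy. A minimal temporal smoothing step, sketched below with an assumed window of five frames, illustrates how the per-frame classifications produced by the model (104-d) might be stabilized before a search or an alert decision; the window size is an assumption for illustration only:

```python
from collections import Counter

def smooth_actions(frame_labels, window=5):
    """Assign each frame the majority label within a centered sliding
    window, reducing single-frame jitter in per-frame classifications."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed
```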
The AI model (104-e) is configured for generating concise summaries of lengthy reports according to the query asked by the user, capturing relevant information for readable output. The model (104-e) is designed with at least one embedding layer to convert tokenized text into high-dimensional embeddings. Moreover, the model (104-e) is designed with 4-6 self-attention layers, and 4-6 transformer layers. Additionally, the model (104-e) is designed with at least one summarization head to generate summary text in the form of sentences from learned representations. Furthermore, at least one activation function, at least one loss function, and post-processing are embedded in the model (104-e).
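As a minimal, non-transformer stand-in for the query-driven summarization performed by the model (104-e), the sketch below ranks report sentences by word overlap with the user's query and keeps the top few in their original order; the function name and the two-sentence budget are illustrative assumptions only:

```python
def extractive_summary(report_sentences, query, max_sentences=2):
    """Rank sentences by word overlap with the user's query and keep
    the top-scoring few, preserving original document order."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(s.lower().split())), i, s)
              for i, s in enumerate(report_sentences)]
    top = sorted(scored, key=lambda t: (-t[0], t[1]))[:max_sentences]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]
```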
Further, the plurality of AI models (104) is communicatively coupled to one or more user interfaces (106-a, …, 106-n, collectively referred to as "user interface (106)") and to an alert notification module (105). The system (100) is configured such that each of the plurality of AI models (104) can be configured to generate and communicate alerts to the alert notification module (105) for abnormal events that may be selected from, but not limited to, suspicious object/weapon detection, face recognition, vehicle number plate recognition, vehicle color detection, perimeter breach, wall jump detection, overcrowding, vandalism, and camera tampering. The alert notification module (105) is communicatively coupled to a plurality of personal communication devices (107-a, …, 107-n, collectively referred to as "personal communication device (107)"), that belong to each of the users of the system (100). The plurality of personal communication devices (107) can be mobile phones, laptops, or tablets. The alert notification module (105), upon receiving communication from any of the plurality of AI models (104), sends a communication that may be in the form of a text message sent via emails, a messaging application, or a smartphone application to the plurality of personal communication devices (107). Moreover, the alert notification module (105) is configured to send alerts that may be in the form of pop-ups, active video links, or sound notifications that are accessible to the users via their emails, messages, or smartphone applications for instant messaging.
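The fan-out performed by the alert notification module (105) may be sketched as follows; the Alert fields and the callable channel interface are illustrative assumptions, not a definitive implementation:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    event_type: str   # e.g. "perimeter_breach", "weapon_detected"
    location: str
    camera_id: str

class AlertNotificationModule:
    """Fans an alert out to every registered delivery channel
    (email, messaging application, smartphone push, and so on).
    Each channel is modeled as a callable taking an Alert."""
    def __init__(self):
        self.channels = []

    def register(self, channel):
        self.channels.append(channel)

    def notify(self, alert):
        for channel in self.channels:
            channel(alert)
```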
Each of the plurality of user interfaces (106) is configured to receive the data from the server (103). Referring to Figure 2, each of the plurality of user interfaces (106) is configured to encompass one or more modules that are configured to facilitate the user to communicate with the system (100). The communication of the user with the system (100) may be intended to receive inputs or add and/or delete certain facilities provided by the system (100). A user module (201) is configured to enable a user to log in to the system with registered credentials or to sign in as a new user. The user module (201) allows authenticated users to access their respective accounts. Once logged in, the user can add/delete their account, or reset/update the password thereof. A location module (202) is configured to display all the locations under surveillance to the user. Moreover, it enables the user to add new locations under surveillance as well as delete the locations that are no longer under surveillance. A camera module (203) is configured to allow the user to add newly installed camera(s) to the system and allow the server to access the live streams therefrom. A dashboard (204) is configured to display a live stream of one or more cameras (101) to the user and abnormal events at locations captured thereby. An analytics module (205) is configured to provide statistical data related to alerts in graphical and tabular format. A live stream module (206) is configured to provide live streams of one or more cameras (101) to the user, allows the user to retrieve and play back one or more desired streams, and download the historical data. A map module (207) is configured to display, on a map, all the locations under surveillance as well as locations where the occurrence of abnormal situations is identified. Moreover, each of the plurality of user interfaces (106) is configured to provide a summary report to the user for a user-selectable duration in text or voice format.
The summary report may include the number and type of abnormal events detected by the system (100) at a particular location in a given premises. Additionally, each of the plurality of user interfaces (106) is configured to receive a request in voice or text format from the user and direct the same to one or more relevant AI models (104) and provide a live status of the object, person, and event, or any combination thereof requested by the user. The request for a person search can be carried out with the inputs such as a name and/or image of the person received from the user. The request for objects such as a vehicle can be carried out based on the inputs such as vehicle number, color, and/or vehicle image provided by the user. The request for an event search can be carried out with inputs such as a description and location of the event, as received from the user. The system (100) is configured with autoscaling, to cater to the larger number of requests for each of the plurality of AI models (104), thereby enabling the same to handle increased load efficiently.
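The autoscaling behavior mentioned above can be illustrated by a simple replica-count computation; the per-replica capacity and the replica bounds below are assumed values for illustration only, not parameters disclosed by the system (100):

```python
import math

def desired_replicas(pending_requests, per_replica_capacity=20,
                     min_replicas=1, max_replicas=10):
    """Compute how many model-serving replicas the current request
    backlog requires, clamped to assumed lower and upper bounds."""
    needed = math.ceil(pending_requests / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))
```

An orchestrator would call such a function periodically per AI model (104) and scale the serving instances up or down accordingly.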
In an implementation of one of the preferred embodiments of the present invention, the method (300) of video surveillance performed by the system (100) comprises steps such as capturing (301) video streams by a plurality of cameras (101), recording the video streams (302) by a recording module (102), transmitting (303) the video streams to a server (103), identifying patterns (304) in the received video streams indicative of abnormal events by a plurality of AI models (104), generating (305) alerts in response to the detection of abnormal events, and/or recognition of a person, object, or vehicle in the premises, and communicating (306) the alerts to an alert notification module (105), communicating (307) the alerts to the users via emails, messages, smartphone application, to a plurality of personal communication devices (107), by the alert notification module (105).
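The steps (304) to (307) above can be sketched, for a single frame, as follows; the model and notifier interfaces are illustrative assumptions rather than the claimed implementation:

```python
def surveillance_step(frame, models, notifier):
    """One pass of the method (300) over a single frame: run every
    AI model on the frame and forward any alerts that are raised."""
    alerts = []
    for model in models:
        result = model(frame)      # step 304: identify patterns
        if result is not None:     # step 305: an abnormal event found
            alerts.append(result)
    for alert in alerts:
        notifier(alert)            # steps 306-307: communicate alerts
    return alerts
```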
ADVANTAGES OF THE INVENTION
1. The system leverages AI techniques to analyze live streams from cameras to identify occurrences of abnormal events at a particular location and communicate the same to the users, allowing them to act proactively.
2. The system identifies and addresses diverse anomalies and security concerns.
3. The system enables the users to take timely actions and empowers them to work for mitigation of risks.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the precise forms disclosed, and obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present invention and its practical application, to thereby enable others skilled in the art to best utilize the present invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omission and substitutions of equivalents are contemplated as circumstance may suggest or render expedient, but such are intended to cover the application or implementation without departing from the scope of the present invention.
CLAIMS
We claim:
1. Video surveillance system (100), the system (100) comprising:
a plurality of video cameras (101), the plurality of video cameras (101) deployed at one or more locations in a premises;
a recording module(s) (102), the recording module(s) (102) communicatively coupled to the plurality of cameras (101) and configured to receive and store the live streams from a plurality of cameras (101);
a server (103), the server (103) communicatively coupled to the recording module(s) (102) via a LAN connection to receive live streams therefrom and configured with a plurality of AI models (104);
the plurality of AI models (104) configured to analyze the live streams and identify an occurrence of an abnormal event(s), and/or recognition of a person, object, or vehicle in the premises;
a plurality of user interfaces (106), the plurality of user interfaces (106) communicatively coupled to the plurality of AI models (104);
an alert notification module (105), the alert notification module (105) communicatively coupled to the plurality of AI models (104); and
a plurality of personal communication devices (107), the plurality of personal communication devices (107) communicatively coupled to the alert notification module (105).
2. The video surveillance system (100) as claimed in claim 1, wherein, the recording module (102) is a digital video recorder or a network video recorder.
3. The video surveillance system (100) as claimed in claim 1, wherein, the plurality of AI models comprises:
a model (104-a) configured for person identification and tracking;
a model (104-b) configured for event recognition by detecting and locating objects within frames coming from the recording module (102) by generating bounding boxes and classifying detected objects;
a model (104-c) configured for recognizing and extracting license plate numbers from vehicle images for automated identification and tracking of vehicles;
a model (104-d) configured for analyzing video data frame by frame to detect and classify actions, enabling an understanding of activities within video content and facilitating a search for a particular video in the received recorded streams; and
a model (104-e) configured for generating concise summaries of reports according to the query asked by the user, capturing relevant information for readable output.
4. The video surveillance system (100) as claimed in claim 1, wherein, the alert notification module (105) upon receiving communication from any of the plurality of AI models (104), sends a communication that may be in the form of a text message sent via emails, a messaging application, or a smartphone application or alerts in the form of pop-ups, active video links, or sound notifications to the plurality of personal communication devices (107) in real time.
5. The video surveillance system (100) as claimed in claim 1, wherein, each of the plurality of user interfaces (106) is configured to provide a summary report to the user for a user-selectable duration in text or voice format that may include the number and type of abnormal events detected by the system (100) at a particular location in a given premises.
6. The video surveillance system (100) as claimed in claim 1, wherein, each of the plurality of user interfaces (106) is configured with
a user module (201), the user module (201) configured to enable a user to log in to the system with registered credentials or to sign in as a new user, add/delete his account, or reset/update the password thereof;
a location module (202), the location module (202) configured to display all the locations under surveillance to the user and facilitate the user to add new locations under surveillance as well as delete the locations that are no longer under surveillance;
a camera module (203), the camera module (203) configured to allow the user to add newly installed camera(s) to the system and allow the server to access the live streams therefrom;
a dashboard (204), the dashboard (204) configured to display a live stream of a plurality of cameras (101) to the user and abnormal events at locations captured thereby;
an analytics module (205), the analytics module (205) configured to provide statistical data related to alerts in graphical and tabular format;
a live stream module (206), the live stream module (206) configured to provide live streams of one or more cameras (101) to the user, allowing the user to retrieve and play back one or more desired streams, and download the historical data; and
a map module (207), the map module (207) configured to display, on a map, all the locations under surveillance as well as locations where the occurrence of abnormal situations is identified.
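The seven user-interface modules of claim 6 can be summarised in a small sketch. All class, method, and parameter names below are hypothetical illustrations of the claimed composition, not the actual implementation.

```python
# Illustrative sketch of the user-interface composition in claim 6.
# Only two of the seven modules (202, 203) are given behaviour here;
# names and signatures are assumptions for illustration.
class UserInterface:
    # Reference numerals from claim 6 mapped to module names.
    MODULES = {
        201: "user module",
        202: "location module",
        203: "camera module",
        204: "dashboard",
        205: "analytics module",
        206: "live stream module",
        207: "map module",
    }

    def __init__(self):
        self.locations = set()
        self.cameras = {}

    def add_location(self, name: str):            # location module (202)
        self.locations.add(name)

    def remove_location(self, name: str):         # location module (202)
        self.locations.discard(name)

    def add_camera(self, cam_id: str, url: str):  # camera module (203)
        self.cameras[cam_id] = url


ui = UserInterface()
ui.add_location("main gate")
ui.add_camera("cam-01", "rtsp://example/stream1")
```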
7. The video surveillance system (100) as claimed in claim 1, wherein the system (100) is configured with autoscaling to cater to an increased number of requests for each of the plurality of AI models (104).
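One way to realise the autoscaling of claim 7 is a proportional rule that sizes each AI model's replica pool to the incoming request rate. The function below is a hedged sketch; the capacity, minimum, and maximum values are illustrative assumptions, not claimed parameters.

```python
# Hypothetical autoscaling rule for the AI models (104): scale the
# number of model replicas with the incoming request rate. The
# per-replica capacity and replica bounds are illustrative defaults.
import math


def desired_replicas(requests_per_sec: float,
                     capacity_per_replica: float = 10.0,
                     min_replicas: int = 1,
                     max_replicas: int = 8) -> int:
    """Return how many replicas of an AI model should be running."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    # Clamp to the configured bounds so at least one replica stays warm
    # and the cluster is never oversubscribed.
    return max(min_replicas, min(max_replicas, needed))
```

In practice, such a rule would typically be delegated to the hosting platform's autoscaler rather than hand-rolled, but the clamped proportional form above captures the behaviour the claim describes.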
8. A method of video surveillance (300), the method (300) being performed by the system (100) and comprising:
capturing (301) video streams by a plurality of cameras (101);
recording (302) the video streams by a recording module (102);
transmitting (303) the video streams to a server (103);
identifying (304) patterns in the received video streams indicative of abnormal events by a plurality of AI models (104);
generating (305) alerts in response to the detection of abnormal events, and/or recognition of a person, object, or vehicle in the premises;
communicating (306) the alerts to an alert notification module (105);
communicating (307) the alerts to the users, via emails, messages, or a smartphone application, on a plurality of personal communication devices (107), by the alert notification module (105); and
communicating the recorded information by the server (103) to a plurality of modules of the user interface (106).
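The claimed method steps (301) through (307) form a linear pipeline, which can be sketched end to end as follows. The detector here is a stand-in that flags frames labelled "abnormal"; in the claimed system, the AI models (104) would perform this identification on decoded video frames. All function names are illustrative assumptions.

```python
# End-to-end sketch of the claimed method (300): capture -> record ->
# transmit -> identify -> alert. Function names and the string-based
# frames are illustrative assumptions, not the actual implementation.
def surveillance_pipeline(frames, detector, notify):
    recorded = list(frames)              # recording (302) the streams
    alerts = []
    for frame in recorded:               # transmitting (303) to server
        if detector(frame):              # identifying (304) patterns
            alerts.append(frame)         # generating (305) alerts
            notify(frame)                # communicating (306)/(307)
    return recorded, alerts


sent = []
recorded, alerts = surveillance_pipeline(
    ["normal", "abnormal", "normal"],
    detector=lambda f: f == "abnormal",
    notify=sent.append,
)
```

The sketch keeps the step numbering of claim 8 in comments so each stage of the pipeline can be traced back to the corresponding claimed step.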
Dated this 20th day of December, 2024
Prafulla Wange
(Agent for Applicant)
(IN/PA: 2058)