Abstract: The present disclosure relates to a system for detecting anomalies using a visual sensor and edge analysis in an Area of Interest (AoI). The system (100) includes an image acquisition unit (102) configured to acquire a set of visual data of the Area of Interest (AoI), and a computing edge device (104) coupled to the image acquisition unit (102). The computing edge device (104) is configured to receive, by a detection component (106), the acquired visual data to detect the presence of objects pertaining to human activities within the area of interest; analyse the detected data to establish a pre-defined threshold, distinguishing between anomaly patterns and non-anomaly patterns of the human activities; generate an alert signal upon detection of the anomaly patterns of the human activities; and transmit the generated alert signal to a decision support system (DSS) (122) to take appropriate actions.
Description:
TECHNICAL FIELD
[0001] The present disclosure relates generally to the technical field of surveillance systems. In particular, it pertains to a simple and efficient system and method for detecting anomalies using edge analysis on visual sensors.
BACKGROUND
[0002] The conventional surveillance systems available in the market heavily rely on CCTV cameras to monitor and record activities in several corporations and residential houses. However, the effectiveness of these systems is constrained by the need for continuous human monitoring to detect suspicious or illegal activities captured in the video feed. Human operators tasked with monitoring the feed face inherent limitations such as fatigue, attention lapses, and the inability to analyze hours of footage continuously and efficiently in real-time. Moreover, the lack of automated mechanisms for identifying and flagging suspicious behavior or illegal actions significantly burdens security personnel, leading to a reliance on increased staffing levels, which escalates operational costs and may result in missed or delayed responses to critical events due to human error or oversight.
[0003] In view of the above problem, efforts have been made to provide a solution to the above limitations. For instance, a patent application US7999848B2 describes a system and method for automated scanning for identifying foreign objects or irregularities in the vicinity of rail tracks by capturing and analyzing images to extract pertinent information. The above disclosure aims to offer early detection of foreign objects or abnormalities, like explosives or potential threats, near or on rail tracks, providing timely alerts for appropriate action. The system involves scanning devices to inspect the rail track and detection devices to pinpoint the presence and location of any object or abnormality based on the scan data. Additionally, continuous images of the track, timestamped and annotated with location data, are captured and stitched together, allowing decision-makers to review the images for accurate inspection. Measures are implemented to reduce false alarms caused by normal foreign objects, by categorizing the identified object as either normal or abnormal by cross-referencing it with entries in the object reference database subsystem.
[0004] However, the above disclosure, which relies on scanning and image-processing techniques to detect objects or abnormalities on or near rail tracks, is limited to detecting only suspicious activities pertaining to railways, and does not incorporate any customized learning method for continuously learning from historical data such that current events can be compared or analysed against the available historical data for detecting any suspicious or illegal activity from the live feed. In addition, the accuracy and reliability of the detection system are impacted by factors such as lighting conditions, image quality, or the type of objects or abnormalities present in the vicinity of the rail track. These inaccuracies could potentially limit the overall effectiveness and safety of the system in preventing rail-related accidents.
[0005] Another patent application WO2052039323A1 describes a system and method involving a camera that rapidly zooms and focuses on a moving object, ensuring high-quality images. Using accurate distance, altitude, and direction data from a position sensor alongside real-time footage, a control unit predicts the object's future positions. This arrangement allows for swift and precise object recognition, enabling fast tracking, prediction, and consistent delivery of top-notch images without the need for manual intervention in analyzing images. However, the above disclosure is restricted to the particular task of detecting fast-moving objects within a relatively smaller area, making its functioning dependent on a high-quality video feed.
[0006] However, the above patent disclosures are not capable of efficiently performing surveillance to detect a customized anomaly by statistical analysis of the captured feed and generating real-time alerts and insights for operators. There is, therefore, a requirement in the art to overcome the above-mentioned problems by providing a simple, compact, and efficient system and method for detecting anomalies directly on the visual edge sensors.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] A general object of the present disclosure is to overcome the problems associated with the technical field of surveillance systems, by providing a simple, compact, and efficient system and method for detecting anomalies on the visual edge sensors.
[0008] Another object of the present disclosure is to generate a dataset on anomalies upon analysing the captured feed.
[0009] Another object of the present disclosure is to minimize the need to transmit large amounts of raw video footage over networks using edge analysis methods.
[0010] Another object of the present disclosure is to provide real-time information extraction from the source for instant responses to detected anomalies, enhancing situational awareness and security.
[0011] Another object of the present disclosure is to provide autonomous scanning of visual sensors that allows for the examination of areas of interest, resulting in improved object detection accuracy.
[0012] Yet another object of the present disclosure is to group alerts of humans detected, thereby allowing for the identification of unusual patterns or emerging trends in alert occurrences over time, enabling early detection of potential anomalies or security threats.
SUMMARY
[0013] Aspects of the present disclosure relate generally to the field of surveillance systems. In particular, it pertains to a simple and efficient surveillance system and method for detecting anomalies using edge analysis on visual sensors.
[0014] According to an aspect, a system for detecting anomalies in an Area of Interest (AoI) includes an image acquisition unit. The image acquisition unit is configured to acquire a set of visual data of the area of interest. A computing edge device is coupled to the image acquisition unit. The computing edge device is configured to receive the acquired visual data to detect the presence of objects by a detection component. The detected presence of an object pertains to human activities within the area of interest.
[0015] In addition, the computing edge device is configured to analyse the detected data to establish a pre-defined threshold by a statistical component, distinguishing between anomaly patterns and non-anomaly patterns of the human activities by comparing the analyzed data with a reference dataset. The computing edge device is configured to generate an alert signal upon detection of the anomaly patterns of the human activities by an alert generator component. Further, the computing edge device is configured to transmit the generated alert signal to a decision support system (DSS) operatively coupled to the computing edge device to take appropriate actions, facilitating timely and informed decision-making in response to potential security threats in the Area of Interest.
[0016] In an embodiment, the image acquisition unit may include a pan-tilt-zoom (PTZ) camera. The image acquisition unit, in communication with the autonomous scanner, may control the pan, tilt, and zoom (PTZ) values of the image acquisition unit. The pan, tilt, and zoom (PTZ) values are pre-recorded.
[0017] In an embodiment, the reference dataset may contain historical data. The historical data may pertain to a number of human detections, corresponding timestamps, and other relevant details.
[0018] In an embodiment, the computing edge device may be operatively coupled to the decision support system (DSS) through an ethernet connection.
[0019] In an embodiment, the statistical component may include a data recorder sub-component, a clustering sub-component and a threshold estimator sub-component. The statistical component may be configured to receive the detected data for statistical analysis by the data recorder sub-component. In addition, the statistical component may be configured to analyse temporal clusters of the detection data. The statistical component may group data points that align within the same time domain to extract the patterns for the anomaly detection by the clustering sub-component. Further, the statistical component may be configured to determine the pre-defined threshold by utilizing core data points and temporal neighbouring data points. The statistical component may extract outliers for distinguishing between the anomaly patterns and the non-anomaly patterns by the threshold estimator sub-component.
[0020] In an embodiment, the computing edge device in the training phase may be configured to assign the PTZ values to the image acquisition unit by the autonomous scanner. In addition, the detection component may be configured to scan the set of visual data. The detection component may be configured to conduct detection of the human activities within the scanned visual data with corresponding timestamps. Further, the statistical component may be configured to evaluate the historical data to establish the predefined threshold.
[0021] In an embodiment, the computing edge device in the inference phase may be configured to determine the PTZ values for the image acquisition unit by the autonomous scanner. The detection component may be configured to scan the set of visual data. In addition, the detection component may be configured to conduct detection of the human activities within the scanned visual data. The statistical component may be configured to detect anomalies when determined human activities deviate from the reference dataset. The statistical component may be configured to compare the detected anomalies to the predefined threshold. Further, the statistical component may be configured to accumulate the anomalies when the number of detected anomalies is less than the pre-defined threshold. The alert generator component may be configured to generate the alert signal when the number of accumulated anomalies exceeds the pre-defined threshold.
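The inference-phase accumulation described above can be sketched in a few lines. This is a minimal illustration only: the class name, the `expected`/`tolerance` parameters, and the reset-after-alert behaviour are assumptions for clarity, not the claimed implementation.

```python
# Minimal sketch of the inference-phase logic: detections deviating
# from the reference dataset are accumulated, and an alert signal is
# generated only once their count exceeds the pre-defined threshold.
# All names and the deviation rule are illustrative assumptions.

class AnomalyAccumulator:
    def __init__(self, threshold: int):
        self.threshold = threshold      # pre-defined threshold
        self.accumulated = 0            # anomalies observed so far

    def observe(self, detections: int, expected: int, tolerance: int) -> bool:
        """Return True when an alert signal should be generated."""
        # A count is treated as anomalous when it deviates from the
        # reference (expected) value by more than the tolerance.
        if abs(detections - expected) > tolerance:
            self.accumulated += 1
        # Alert only after the accumulated anomalies exceed the threshold.
        if self.accumulated > self.threshold:
            self.accumulated = 0        # reset after alerting
            return True
        return False
```

In this sketch three consecutive deviating counts against a threshold of 2 would trigger one alert on the third observation.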
[0022] In an embodiment, the system may be configured to generate the alert signal in an offline mode.
[0023] According to another aspect, a method for detecting anomalies in an Area of Interest (AoI) is configured to perform the steps of receiving the acquired visual data to detect the presence of objects pertaining to human activities within the area of interest using a detection component. Another step is analysing the detected data to establish a pre-defined threshold using a statistical component. The statistical component distinguishes between anomaly patterns and non-anomaly patterns of human activities by comparing the analyzed data with a reference dataset. Yet another step is generating an alert signal upon detection of the anomaly patterns of the human activities using an alert generator component. Further, another step is transmitting the generated alert signal to a decision support system (DSS) operatively coupled to the computing edge device to take appropriate actions using the computing edge device. The computing edge device may facilitate timely and informed decision-making in response to potential security threats in the Area of Interest.
[0024] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF DRAWINGS
[0025] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only and are thus not a limitation of the present disclosure.
[0026] FIG. 1 illustrates an exemplary network diagram of the visual edge anomaly detection system, in accordance with an embodiment.
[0027] FIG. 2 illustrates an exemplary block diagram of the proposed system along with the components, in accordance with an embodiment.
[0028] FIG. 3 illustrates an exemplary block diagram of the statistical component of the proposed system along with the components therewith, in accordance with an embodiment.
[0029] FIG. 4 illustrates an exemplary flow diagram of the training phase and the inference phase of the proposed system along with the components, in accordance with an embodiment.
[0030] FIG. 5 illustrates an exemplary flow chart of the proposed method performing steps, in accordance with an embodiment.
DETAILED DESCRIPTION
[0031] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0032] The present disclosure relates to the fusion of PTZ cameras with cutting-edge object detection and edge computing technologies within the visual edge anomaly detection system, introducing a transformation in the realm of surveillance and security operations. The system enables PTZ cameras to independently perform scanning, object detection, and real-time pattern analysis, thus minimizing bandwidth utilization and latency, all the while equipping command and control operators with actionable insights and alert capabilities. The system's various components, such as the autonomous scanner, human detection, and statistical modules, collaborate seamlessly to provide comprehensive monitoring of the area of interest and the identification of irregularities in human activity patterns. Through integration with a decision support system, this data-centric approach empowers end-users to make well-informed decisions when responding to potential security threats, signifying a significant leap in the domain of AI-driven surveillance systems applicable to critical infrastructure, public safety, and private sector security.
[0033] The proposed system represents a sophisticated and multifaceted solution, named the visual edge anomaly detection system, integrating distinct sub-components, each with a specific role and functionality. These sub-components, namely the autonomous scanner component, the human detection component, and the statistical component, along with an integrated camera unit, work in unison to provide a comprehensive approach to surveillance and anomaly detection.
[0034] In an aspect, the system designs an autonomous scanner component for dynamically adjusting the focus of the camera, allowing thorough monitoring of the given area of interest along with generating a database by human detection for statistical analysis. The generated dataset contains the historical evidence that is extremely useful for anomaly detection or any security application in varied domains. The system analyses the temporal patterns of alerts and employs an adaptive clustering algorithm that validates the alerts based on proximity analysis.
[0035] The proposed methodology employs a multi-step approach for anomaly detection that involves scanning, object detection, historical evidence and pattern analysis, and utilization of clustering techniques that surpass the benefits of performing anomaly detection solely through scanning and object detection. By analysing historical data and patterns, the system can distinguish between transient anomalies and recurring, potentially significant deviations. The incorporation of clustering techniques further refines the anomaly detection process by grouping similar anomalies and facilitating a more nuanced understanding of potential threats. This comprehensive approach enables a more intelligent, precise, and proactive anomaly detection system compared to relying solely on scanning and object detection.
[0036] Deploying object detection solutions in edge devices reduces latency, improves bandwidth efficiency and increases cost efficiency because data is analyzed locally, allowing for real-time or near-real-time responses to detected objects. The proposed methodology in the present disclosure utilizes PTZ cameras for anomaly detection, offering a solution that surpasses the limitations posed by fixed cameras employed in the existing systems. Fixed cameras inherently cover a relatively smaller area, compelling the deployment of multiple cameras to effectively monitor larger areas. In contrast, the PTZ camera-based system provides the flexibility to adjust the camera's orientation and focus by utilizing the autonomous scanner module of the proposed system, allowing for comprehensive surveillance of large monitoring areas. This approach not only optimizes cost-efficiency but also streamlines the surveillance process, making it a superior choice for enhanced security and monitoring capabilities. The present disclosure can be described in enabling detail in the following examples, which may represent more than one embodiment of the present disclosure.
[0037] FIG. 1 illustrates an exemplary network diagram of the visual edge anomaly detection system, in accordance with an embodiment.
[0038] Referring to FIG. 1, a visual edge anomaly detection system 100 (also referred to as system 100, herein) for detecting anomalies in an area of interest (AoI) is disclosed. The system 100 can include an image acquisition unit 102. The image acquisition unit 102 can be configured to acquire a set of visual data of the area of interest. The image acquisition unit 102 may include a pan-tilt-zoom (PTZ) camera.
[0039] The system 100 can include a computing edge device 104. The computing edge device 104 can be coupled to the image acquisition unit 102. The computing edge device is equipped with components such as an autonomous scanner 112, a detection component 106, a statistical component 108 and an alert generator component 110, as depicted in FIG. 2. The computing edge device 104 can be configured to receive the acquired visual data to detect the presence of objects by the detection component 106. The detected presence of an object pertains to human activities within the area of interest.
[0040] In addition, the computing edge device 104 can be configured to analyse the detected data to establish a pre-defined threshold by the statistical component 108, distinguishing between anomaly patterns and non-anomaly patterns of the human activities by comparing the analyzed data with a reference dataset. The reference dataset may contain historical data. The historical data may pertain to a number of human detections, corresponding timestamps, and other relevant details.
[0041] Further, the computing edge device 104 may be configured to generate an alert signal upon detection of the anomaly patterns of the human activities by the alert generator component 110. Furthermore, the computing edge device 104 may be configured to transmit the generated alert signal to a decision support system (DSS) 122. The decision support system (DSS) 122 can be operatively coupled to the computing edge device 104 to take appropriate actions. The appropriate actions can facilitate timely and informed decision-making in response to potential security threats in the Area of Interest. In an embodiment, the computing edge device 104 may be operatively coupled to the decision support system (DSS) 122 through an ethernet connection.
[0042] In an embodiment, the image acquisition unit 102 and the computing edge device 104 are seamlessly connected through an ethernet interface. The corresponding connection may enable the real-time transmission of video streams using streaming protocols. This arrangement may facilitate the continuous monitoring and capture of video data. The computing edge device 104 may not be an isolated component. The computing edge device 104 can be in communication with a centralized control center or the Decision Support System (DSS) 122 through the ethernet connection. The corresponding connection streamlines the process of transmitting one or more anomalies. The transmitted anomaly can generate an alert from the edge device to the DSS 122. This may significantly alleviate the load on the DSS 122, especially in situations where multiple image acquisition units may be deployed throughout a geographical area, such as a boundary or perimeter.
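One way the alert transmission over the ethernet link could be sketched is as a small JSON message pushed to the DSS over a short-lived TCP connection. The payload fields, the helper names, and the newline-delimited framing are assumptions for illustration only, not a wire format mandated by the disclosure.

```python
# Illustrative sketch: package a generated alert and push it from the
# edge device (104) to the DSS (122) over the ethernet connection.
# Payload fields and framing are illustrative assumptions.

import json
import socket
import time

def build_alert(camera_id: str, count: int, threshold: int) -> bytes:
    """Serialize one alert as a newline-delimited JSON message."""
    payload = {
        "camera_id": camera_id,     # which image acquisition unit fired
        "detections": count,        # accumulated anomalous detections
        "threshold": threshold,     # pre-defined threshold in force
        "timestamp": int(time.time()),
    }
    return (json.dumps(payload) + "\n").encode("utf-8")

def send_alert(dss_host: str, dss_port: int, message: bytes) -> None:
    """Open a short-lived TCP connection to the DSS and send one alert."""
    with socket.create_connection((dss_host, dss_port), timeout=5) as sock:
        sock.sendall(message)
```

Sending only compact alert messages, rather than raw video, is what keeps the load on the DSS low when many acquisition units are deployed along a perimeter.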
[0043] The image acquisition unit 102 may be in communication with the autonomous scanner 112, as shown in FIG. 2. The image acquisition unit 102, with the autonomous scanner 112, may control the pan, tilt, and zoom (PTZ) values of the image acquisition unit 102. The pan, tilt, and zoom (PTZ) values are pre-recorded. The system 100 collects the voluminous visual data across the perimeter utilizing the integrated image acquisition unit 102 managed by the autonomous scanner 112. Subsequently, a lightweight detection component 106 processes the data for object detection based on the MobileNet convolutional neural network (CNN) architecture, while a statistical component 108 examines the outputs from the detection component 106. The primary objective is to discern trends and patterns in the identified human activity.
[0044] In an embodiment, the statistical component 108, as shown in FIG. 3, may include a data recorder sub-component 114, a clustering sub-component 116 and a threshold estimator sub-component 118. The statistical component 108 may be configured to receive the detected data for statistical analysis by the data recorder sub-component 114. The statistical component 108 may be configured to analyse temporal clusters of the detection data. The statistical component 108 may group data points that align within the same time domain to extract the patterns for the anomaly detection by the clustering sub-component 116. Further, the statistical component 108 may be configured to determine the pre-defined threshold by utilizing core data points and temporal neighboring data points. The statistical component 108 may extract outliers for distinguishing between the anomaly patterns and the non-anomaly patterns by the threshold estimator sub-component 118.
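The grouping of temporally adjacent detections and the extraction of outliers described above can be sketched with a simplified, density-based rule over detection timestamps: points within an `eps`-second gap of each other form a cluster, and clusters smaller than `min_pts` are treated as outliers. The function and parameter names are illustrative assumptions; the disclosure's tailored clustering algorithm is not reproduced here.

```python
# Simplified, DBSCAN-like temporal clustering sketch: detection
# timestamps within eps seconds of a neighbour join one cluster;
# clusters with fewer than min_pts points are extracted as outliers.
# All names are illustrative, not the claimed algorithm.

def temporal_clusters(timestamps, eps, min_pts):
    """Return (clusters, outliers) for a list of detection timestamps."""
    ordered = sorted(timestamps)
    groups, current = [], [ordered[0]] if ordered else []
    for t in ordered[1:]:
        if t - current[-1] <= eps:      # temporally neighbouring point
            current.append(t)
        else:                           # gap too large: close the cluster
            groups.append(current)
            current = [t]
    if current:
        groups.append(current)
    clusters = [g for g in groups if len(g) >= min_pts]   # core patterns
    outliers = [t for g in groups if len(g) < min_pts for t in g]
    return clusters, outliers
```

A threshold estimator could then, for example, derive the pre-defined threshold from the sizes of the core clusters observed during the training phase.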
[0045] The system 100 can be configured to integrate the detector component, along with the application of statistical analysis and clustering techniques. This may empower the system to operate as an autonomous entity. As a result, it can deliver outstanding and impactful results in the field of surveillance, particularly in identifying anomalies and potential security threats. The proposed system may not only enhance security but may also optimize resource utilization and minimize response times.
[0046] FIG. 2 illustrates an exemplary block diagram of the proposed system along with the components therewith, in accordance with an embodiment.
[0047] Referring to FIG. 2, the proposed system 100 can include the computing edge device 104, equipped with a robust 8 GB or higher RAM and a quad-core processor (1.5 GHz), and the image acquisition unit 102. In an exemplary embodiment, the image acquisition unit 102 can be a pan-tilt-zoom (PTZ) camera. The computing edge device 104 may include the autonomous scanner 112, the detector component 106, the statistical component 108, and the alert generator component 110.
[0048] In an embodiment, the image acquisition unit 102 may be a Pan-Tilt-Zoom (PTZ) camera. The PTZ camera may be a security camera. The PTZ camera can be controlled remotely to move in three directions: pan (left and right), tilt (up and down), and zoom (in and out). The PTZ camera may be used for surveillance applications to track moving objects, inspect specific areas, or deter crime.
[0049] In an embodiment, the autonomous scanner 112 may autonomously control PTZ cameras through pre-recorded PTZ values. The pre-recorded PTZ values may allow PTZ cameras to move to specific positions without the need for human intervention. This may be done by recording the PTZ values of the PTZ camera upon moving through the surveillance area. These recorded values can be used to program the PTZ camera to move to the same positions automatically.
[0050] The autonomous control of PTZ cameras through pre-recorded PTZ values has several advantages. It can lessen the load on security personnel, allowing them to focus on other tasks, such as monitoring multiple cameras or responding to incidents. In addition, the autonomous control of PTZ cameras can ensure that the surveillance area is monitored consistently and effectively. Further, the autonomous control of PTZ cameras can be used to create custom surveillance patterns that are tailored to the specific needs of the site. The PTZ cameras can be configured to inspect specific areas of the surveillance area at regular intervals.
[0051] To obtain the complete PTZ sequence for the monitoring of specific areas, the following steps can be followed. In the first step, the start and end pan angles for surveillance of the area may be assigned. In the second step, human operators may manually validate the object detector at different zoom levels in the marked areas. In the third step, the PTZ values can be recorded, including the pan angle, tilt angle, and zoom level. In the fourth step, the tilt and zoom may be adjusted to the adjacent area, either above or below the current area. Further, steps two to four may be repeated until all the PTZ values are recorded for the surveillance of the area.
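The four steps above can be sketched as a simple waypoint generator: for each tilt/zoom row between the start and end pan angles, pan-tilt-zoom triples are emitted at a fixed pan step. The step sizes, value ranges, and function name are illustrative assumptions; in practice the values would be recorded by an operator, not computed.

```python
# Illustrative sketch of building the pre-recorded PTZ sequence:
# sweep each tilt/zoom row across the assigned pan range (steps 1-3),
# then move to the adjacent row (step 4). All parameters are assumed.

def record_ptz_sequence(pan_start, pan_end, tilt_rows, zoom_per_row, pan_step):
    """Return the pre-recorded list of (pan, tilt, zoom) positions."""
    sequence = []
    for tilt, zoom in zip(tilt_rows, zoom_per_row):     # step 4: next row
        pan = pan_start
        while pan <= pan_end:                           # steps 1-2: sweep row
            sequence.append((pan, tilt, zoom))          # step 3: record values
            pan += pan_step
    return sequence
```

The autonomous scanner 112 could then replay such a sequence to cover the whole surveillance area without human intervention.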
[0052] This pivotal component is responsible for orchestrating the camera's operations in a highly systematic and precise manner. It employs a combination of pan, tilt, and zoom functionalities to ensure comprehensive surveillance coverage of the entire geographical area of interest. The camera scanner dynamically adjusts its focus by sending precise pan, tilt, and zoom values to the camera. This dynamic adjustment enables deep and thorough monitoring, allowing for a panoramic view. It is important to note that the scanning process is not limited to horizontal movements but also encompasses vertical shifts, ensuring that no aspect of the area goes unexamined. The autonomous scanner 112 plays a foundational role in capturing data and visual information, making it accessible for further analysis.
[0053] In an exemplary embodiment, the detection component 106 is tasked with the challenging but crucial role of detecting the presence of humans within the defined area of interest. Its functionality extends to identifying and tracking human subjects in the monitored space. The detection component 106 (also referred to as human detection component 106) leverages advanced image processing and computer vision techniques to discern human figures amidst the captured data. The accurate detection of humans is pivotal for subsequent analysis and alert generation.
[0054] In an exemplary embodiment, the detection component 106 may be deployed on edge devices that perform real-time object detection. The edge devices can be renowned for their resource-constrained nature, characterized by limited processing power and memory capacity. These inherent constraints may pose a challenge when it comes to deploying traditional object detection models that may be computationally intensive and resource-demanding.
[0055] Unlike its resource-intensive counterparts, the detection component 106 can be specifically engineered to be highly efficient, making it exceptionally well-suited for deployment on devices with limited resources. It may operate effectively in resource-constrained environments. The underlying models can be carefully optimised to deliver superior performance while operating within the confines of limited computational power and memory.
[0056] In an exemplary embodiment, the synchronisation of the autonomous scanner 112 with the detection component 106 can enable a predefined stop-time of 2 + 1 seconds to be set, which has proven to be a significant advancement in the realm of surveillance technology. This coordinated approach has yielded superior surveillance results. The decision to extend the stopping time by one additional second carries important implications for the quality and effectiveness of the system. The combination of the autonomous scanner 112 with the detection component 106 may capitalise on the strengths of both components. The autonomous scanner 112 may provide continuous and systematic coverage of an area, ensuring that no part of the surveillance zone is overlooked. The detection component 106 may overcome the challenge of motion blur, particularly in the context of PTZ cameras, by pausing the scanning module for the predefined stop-time of 2 seconds. The system 100 can then focus its attention on identifying and analysing objects or subjects within the field of view.
[0057] In addition, there may be a concern about extending the stopping time by an additional second, giving the 2 + 1 seconds. The motion blur effect in PTZ cameras can significantly impact the clarity and accuracy of the surveillance footage, particularly when tracking fast-moving objects or individuals. By allocating that extra second for stabilization, the system 100 can effectively mitigate the motion blur issue. This additional time may allow the camera to settle into a steadier position, resulting in crisper and clearer images.
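The scan/detect synchronisation with the 2 + 1 second stop-time can be sketched as follows. The camera and detector callables here are assumed interfaces, not a real camera API, and the loop structure is a simplification of the coordinated behaviour described above.

```python
# Sketch of one stop in the synchronised scan: after each PTZ move the
# camera settles for one extra second (to suppress motion blur), then
# holds for a two-second detection window, giving the 2 + 1 second
# stop-time. move_camera/run_detector are assumed interfaces.

import time

def scan_step(move_camera, run_detector, position,
              settle_s=1.0, detect_s=2.0):
    """Move to one PTZ position and return the detections for that stop."""
    move_camera(position)               # send pan/tilt/zoom values
    time.sleep(settle_s)                # extra second: let motion blur settle
    detections = list(run_detector())   # at least one detection pass
    end = time.time() + detect_s
    while time.time() < end:            # fill the 2-second detection window
        detections.extend(run_detector())
    return detections
```

Iterating `scan_step` over a pre-recorded PTZ sequence would reproduce the stop-and-detect sweep of the surveillance area.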
[0058] The data analysis may be concerned with observing deviations in data patterns upon monitoring a PTZ camera's detections over a period of time. Various clustering elements may be instrumental in this process, including k-means clustering, hierarchical clustering, and density-based clustering, each offering unique advantages and use cases. The initial step in this deviation analysis may involve the collection of data concerning the number of detections around the PTZ camera during specific time intervals. This data collection can be automated to occur on an hourly basis.
[0059] In an embodiment, the statistical component 108 takes on the role of analysing the data generated by the autonomous scanner 112 and the detection component 106. Its primary function is to identify trends and patterns within the detected human activity. It monitors these patterns in real-time and compares them against established norms. If an abnormal pattern is detected by the statistical component 108, such as unusual behavior or unexpected changes in human activity, then alerts are generated promptly by the alert generator component 110. These alerts serve as a vital early warning system, enabling timely response to potential security threats.
[0060] The statistical component 108 may be configured to receive, from the detection component, the autonomously tracked and recorded number of detections within a designated area as time progresses. The collected data may be subsequently subjected to clustering on an hourly basis. This temporal granularity may provide a structured view of how detection patterns evolve over time. Any anomaly or threat detected by the system is further sent to the decision support system 122, which aids in better decision making. Thus, the entire system may help the end user to take an appropriate decision on any kind of malicious activity across the boundary using a data-driven approach.
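A minimal sketch of the hourly clustering step described above, assuming detections arrive as plain timestamps (the data layout is an illustrative assumption, not specified by the disclosure):

```python
from collections import defaultdict
from datetime import datetime


def cluster_by_hour(detections):
    """Group detection timestamps into hourly clusters.

    `detections` is a list of datetime objects, one per detected human.
    Returns {hour_start: count}, giving the statistical component a
    structured view of how detection volume evolves over time.
    """
    counts = defaultdict(int)
    for ts in detections:
        # Truncate each timestamp to the start of its hour.
        counts[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return dict(counts)
```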
[0061] FIG. 3 illustrates an exemplary block diagram of the statistical component of the proposed system along with its components, in accordance with an embodiment.
[0062] Referring to FIG. 3, the statistical component 108 can include a data recorder sub-component 114, a clustering sub-component 116, and a threshold estimator sub-component 118.
[0063] The data recorder sub-component 114 may receive the detected objects for statistical analysis. To further refine the analysis, the system 100 designates a predefined area for scrutiny, emphasizing time-domain analysis.
[0064] The clustering sub-component 116 employs time-domain clusters and closely examines adjacent time intervals for pattern extraction and anomaly detection. The clustering algorithm operates by grouping data points that align within the same time domain, creating clusters that reflect the patterns and dynamics within each temporal segment. Once the data is clustered, the clusters can be rigorously analysed to identify any deviations from the anticipated or expected patterns. It is important to note that existing clustering algorithms may have limitations that make them not ideally suited to the specific needs of this context. Hence, the present disclosure provides a clustering algorithm tailored to the unique requirements of the analysis. This tailored algorithm is poised to enhance the precision and effectiveness of deviation detection, thereby making it an invaluable tool for the monitoring and analysis of PTZ camera detections over time.
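The examination of adjacent time intervals may be sketched as follows, under the assumption that clusters are keyed by hour of day (0-23) and that hours wrap around at midnight; neither detail is fixed by the disclosure.

```python
def cluster_with_neighbours(hourly, hour):
    """Return the data points of the given hourly cluster together with
    its two adjacent time clusters (hour - 1 and hour + 1).

    `hourly` maps an hour index (0-23) to a list of detection counts
    observed in that hour across days; missing hours yield no points.
    The midnight wrap-around is an illustrative assumption.
    """
    return [p
            for h in ((hour - 1) % 24, hour, (hour + 1) % 24)
            for p in hourly.get(h, [])]
```

This pooled view of a cluster and its temporal neighbours is what the deviation analysis of paragraph [0068] operates on.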
[0065] FIG. 4 illustrates an exemplary flow diagram of the training phase and the inference phase of the proposed system along with the components therewith, in accordance with an embodiment.
[0066] Referring to FIG. 4, the training phase is described wherein the autonomous scanner 112 may assign PTZ values and provide the input for the scan. The detection component 106 can conduct human detection within the scanned image. If no human presence is detected, this process can iterate until a detection is recorded along with its corresponding timestamp. Subsequently, the historical data may be evaluated to establish a detection threshold.
[0067] In the inference phase as referred to in FIG. 4, the PTZ values may again be determined by the autonomous scanner 112 and the image may be scanned for human presence. The system 100 may tally the detected anomalies and compare them to the predefined threshold, such that if the count falls below the threshold, the counts may be accumulated. The system 100 may be configured to generate alerts upon the count exceeding the threshold.
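A simplified sketch of the inference-phase tallying logic described above; resetting the accumulator after an alert is an assumption on our part, not something the description states.

```python
def run_inference(anomaly_counts, threshold):
    """Accumulate per-scan anomaly counts and flag an alert once the
    running total exceeds the predefined threshold.

    `anomaly_counts` is an iterable of anomaly tallies, one per scan.
    Returns the totals at which alerts fired (the alert generator
    component would be invoked at those points).
    """
    accumulated = 0
    alerts = []
    for count in anomaly_counts:
        accumulated += count
        if accumulated > threshold:
            alerts.append(accumulated)  # alert generator invoked here
            accumulated = 0             # reset after alerting (assumption)
    return alerts
```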
[0068] The clustering method used for deviation analysis employs a meticulous process that involves calculating the standard deviation (s) for each cluster. This standard deviation calculation may extend to encompass the data points within both the current cluster and its two adjacent time clusters. The next step is determining the radius of a circle, denoted as epsilon (e), which is a critical parameter for anomaly detection. The epsilon may be calculated by multiplying the cluster's standard deviation (s) by a predefined hyperparameter G, which serves as a multiplication factor for the standard deviation and enables fine-tuning of the detection process. Moreover, the circle may be positioned precisely, centred with respect to the farthest core point within the cluster of interest. A core point is a data point that exclusively belongs to the current time domain and does not extend to the adjacent time domain. This distinction may ensure that the circle's position accurately reflects the temporal context of the analysis.
[0069] In another embodiment, the significant criterion in this process may be the number of data points contained within the circle. A circle enclosing fewer than four data points may categorically be declared an outlier point; that is, the fundamental requirement for a data point to be considered a non-anomaly may be the presence of at least four data points inside the circle. This threshold may be determined by the threshold estimator sub-component 118 through a series of extensive experiments and comparisons on various datasets, ensuring that it strikes the right balance between sensitivity and specificity.
[0070] In an exemplary embodiment, the hyperparameter G may be set to a value of 1.5, and the minimum number of points required inside the circle for a non-anomaly designation may be four. These specific parameter values can be fine-tuned based on rigorous experimentation and analysis of results across a variety of datasets, ensuring the robustness and accuracy of the anomaly detection process.
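Combining paragraphs [0068] to [0070], the epsilon-circle test may be sketched as follows, simplified to one-dimensional detection counts; the dimensionality of the data points and the use of the population standard deviation are assumptions of this sketch.

```python
import statistics

G = 1.5          # hyperparameter multiplying the standard deviation ([0070])
MIN_POINTS = 4   # minimum points inside the circle for a non-anomaly ([0069])


def is_outlier(core_point, current_cluster, prev_cluster, next_cluster):
    """Epsilon-circle outlier test, 1-D simplification.

    The standard deviation s is computed over the current cluster and
    its two adjacent time clusters, epsilon = G * s, and the core point
    is declared an outlier when fewer than MIN_POINTS data points fall
    within epsilon of it.
    """
    pooled = list(prev_cluster) + list(current_cluster) + list(next_cluster)
    s = statistics.pstdev(pooled)          # std over current + adjacent clusters
    epsilon = G * s                        # circle radius
    inside = sum(1 for p in pooled if abs(p - core_point) <= epsilon)
    return inside < MIN_POINTS
```

For example, a sudden spike of 50 detections in an hour that usually sees around 10 would be isolated within its epsilon-circle and flagged as an outlier, while a typical count would be surrounded by enough neighbouring points to pass.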
[0071] In an embodiment, the alert generator component 110 may promptly generate alerts on the detection of any anomaly or abnormality. The detected anomaly or abnormality may be further transmitted to the decision support system for taking appropriate actions.
[0072] The integration of autonomous scanning, object detection, and pattern analysis in PTZ cameras, coupled with deployment on edge devices, represents a groundbreaking advancement in surveillance technology. The approach empowers cameras to operate with greater autonomy and intelligence, enabling them to identify and analyse objects and detect anomalies in real-time. By doing so, it significantly enhances the capabilities of command-and-control operators, allowing them to respond more effectively to security threats and incidents. As technology continues to evolve, this solution showcases the immense potential of AI-driven surveillance systems in creating safer and more secure environments for various applications, including critical infrastructure protection, public safety, and private sector security.
[0073] FIG. 5 illustrates an exemplary flow chart of steps performed by the proposed method, in accordance with an embodiment.
[0074] Referring to FIG. 5, the method 500 for anomaly detection includes, at block 502, acquiring a set of visual data of the Area of Interest (AoI) by the image acquisition unit, wherein a computing edge device 104 is coupled to the image acquisition unit 102.
[0075] At block 504, receiving the acquired visual data by the detection component 106 of the computing edge device to detect the presence of objects pertaining to human activities within the area of interest.
[0076] At block 506, analyzing the detected data by the statistical component to establish a pre-defined threshold, distinguishing between anomaly patterns and non-anomaly patterns of the human activities by comparing the analyzed data with a reference dataset.
[0077] At block 508, generating an alert signal by an alert generator component, upon detection of the anomaly patterns of the human activities and, at block 510, the generated alert signal is transmitted to a decision support system (DSS) 122 operatively coupled to the computing edge device 104 to take appropriate actions, facilitating timely and informed decision-making in response to potential security threats in the area of interest.
[0078] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[0079] The present invention implements a solution for edge devices, reducing the necessity to transmit large amounts of raw video footage over networks. This minimizes the strain on network bandwidth, enhancing efficiency and cost-effectiveness.
[0080] The present invention enables real-time data processing near the source, significantly reducing latency. This facilitates quicker responses to detected anomalies, thereby improving situational awareness and security.
[0081] The present invention integrates autonomous scanning, ensuring precise camera control for the meticulous examination of areas of interest. This results in improved object detection accuracy.
[0082] The present invention incorporates clustering techniques that help group alerts of detected humans based on their temporal relationships. This allows for the identification of unusual patterns or emerging trends in alert occurrences over time, enabling the early detection of potential anomalies or security threats.
Claims:1. A system (100) for detecting anomaly in an area of interest (AoI), the system (100) comprising:
an image acquisition unit (102) configured to acquire a set of visual data of the AoI; and
a computing edge device (104) coupled to the image acquisition unit (102), the computing edge device (104) configured to:
receive, by a detection component (106), the acquired visual data to detect presence of objects pertaining to human activities within the area of interest;
analyse, by a statistical component (108), the detected data to establish a pre-defined threshold, distinguishing between anomaly patterns and non-anomaly patterns of the human activities by comparing the analyzed data with a reference dataset;
generate, by an alert generator component (110), an alert signal, upon detection of the anomaly patterns of the human activities; and
transmit the generated alert signal to a decision support system (DSS) (122) operatively coupled to the computing edge device (104) to take appropriate actions, facilitating timely and informed decision-making in response to potential security threats in the AoI.
2. The system (100) as claimed in claim 1, wherein the image acquisition unit (102) comprises a pan-tilt-zoom (PTZ) camera, wherein the image acquisition unit (102), by communicating with an autonomous scanner (112), controls pan, tilt, and zoom (PTZ) values of the image acquisition unit (102) through pre-recorded values.
3. The system (100) as claimed in claim 1, wherein the reference dataset contains historical data pertaining to a number of human detections, corresponding timestamps, and other relevant details.
4. The system (100) as claimed in claim 1, wherein the computing edge device (104) is operatively coupled to the decision support system (DSS) (122) through an Ethernet connection.
5. The system (100) as claimed in claim 1, wherein the statistical component (108) comprises a data recorder sub-component (114), a clustering sub-component (116) and a threshold estimator sub-component (118), the statistical component (108) configured to:
receive, by the data recorder sub-component (114), the detected data for statistical analysis;
analyse, by the clustering sub-component (116), temporal clusters of the detection data by grouping data points that align within same time domain to extract the patterns for the anomaly detection; and
determine, by the threshold estimator sub-component (118), the pre-defined threshold by utilizing core data points and temporal neighboring data points to extract outliers for distinguishing between the anomaly patterns and the non-anomaly patterns.
6. The system (100) as claimed in claim 2, wherein at the training phase, the computing edge device (104) is configured to:
assign, by the autonomous scanner (112), the PTZ values to the image acquisition unit (102);
scan the set of visual data;
conduct detection of the human activities within the scanned visual data with corresponding timestamps; and
evaluate the historical data to establish the predefined threshold.
7. The system (100) as claimed in claim 6, wherein at the inference phase, the computing edge device (104) is configured to:
assign, by the autonomous scanner (112), the PTZ values to the image acquisition unit (102);
scan the set of visual data;
conduct detection of the human activities within the scanned visual data;
detect anomalies when determined human activities deviate from the reference dataset;
compare the detected anomalies to the predefined threshold;
accumulate the anomalies when the number of the detected anomalies is less than the pre-defined threshold; and
generate the alert signal, when the number of the accumulated anomalies is more than the pre-defined threshold.
8. The system (100) as claimed in claim 1, wherein the system (100) is configured to generate the alert signal in an offline mode.
9. A method (500) for detecting anomaly in an Area of Interest (AoI), the method (500) comprising:
acquiring (502), by an image acquisition unit, a set of visual data of the AoI, wherein a computing edge device (104) is coupled to the image acquisition unit (102);
receiving (504), at a detection component (106) of the computing edge device (104), the acquired visual data to detect presence of objects pertaining to human activities within the area of interest;
analyzing (506), at a statistical component (108), the detected data to establish a pre-defined threshold, distinguishing between anomaly patterns and non-anomaly patterns of the human activities by comparing the analyzed data with a reference dataset;
generating (508), by an alert generator component (110), an alert signal, upon detection of the anomaly patterns of the human activities; and
transmitting (510) the generated alert signal to a decision support system (DSS) (122) operatively coupled to the computing edge device (104) to take appropriate actions, facilitating timely and informed decision-making in response to potential security threats in the AoI.