Abstract: The present disclosure provides a method and system for image segmentation in a fiber optic sensing based intrusion detection system, wherein the method and system implement data classification and identification for image patterns. The method includes identifying, by an anomaly detection unit (210), an intrusion event by implementing a machine learning model in image patterns. Further, the method includes implementing, by a semantic segmentation unit (224), semantic segmentation in the image patterns where an anomaly is detected, wherein implementing the semantic segmentation includes extracting, by a neural network unit (306), pixels from the image patterns, labelling, by the neural network unit (306), the pixels obtained from the image patterns, and predicting, by a conditional random field (CRF) unit (308), an intrusion event and type. The neural network unit (306) is a fully convolutional neural network, and the CRF unit implements a CRF for structured prediction from the labelled pixels.
[0001] The present disclosure relates to a fiber optic sensing technology, and more specifically relates to a method and a system for image segmentation and detection in a fiber optic sensing based intrusion detection system.
BACKGROUND OF INVENTION
[0002] Perimeter Intrusion Detection System (PIDS) refers to a suite of products that address physical security requirements through various solutions combining hardware and software technologies. PIDS solutions are used in industrial applications such as defense installation fence/perimeter security, border security, early detection of digging close to telecom cable installations (data centers), industrial facility fence/perimeter security, power plants, and manufacturing plants. Various technologies are available for a full-fledged Perimeter Intrusion Detection System (PIDS), such as Y Beams, motion cameras, infrared sensors, optical cameras, and fiber optic sensing (FOS).
[0003] Of late, fiber-optic distributed acoustic sensing (DAS) based intrusion detection systems are being used that provide efficient solutions for several infrastructures such as border, airport, pipeline security, etc. In general, the DAS systems detect and classify acoustic vibrations using standard telecommunication fibers buried under the ground or deployed over a fence. These acoustic vibrations help identify events that may pose a threat to the area where the DAS system has been deployed.
[0004] Sometimes, activities captured by the DAS system may not pose any type of threat depending upon the time and location of the activity, such as a ground digging activity related to an agricultural event, or may arise from environmental factors like rain, wind, or random noise.
[0005] The existing solutions suffer from several shortcomings such as a high false alarm rate due to nuisance events and environmental factors like rain, wind, random noise, etc.; high response time, since due to high data volumes, response times for detection and classification of intrusion events can be lengthy; low adaptability to various types of events; high cost for new deployments; and signal fading. Further, for novel event types, environments, soil types, and applications, the system needs to be re-engineered, which involves experts' art and craft and leads to high turnaround time. Various analytical models have been developed and utilized to solve these shortcomings; however, the analytical models are frozen once trained and deployed.
[0006] The intrusion detection solution should have a low false positive rate while also returning alerts on intrusions within seconds (<2 seconds is a common requirement) in order to be viable. These requirements strain system resources in opposing directions: in order to have a low false positive rate, more suspected events need to be analyzed, which is computationally expensive and increases response time.
[0007] In addition to this, intrusions are rare events with a low occurrence rate - meaning most of the signal from the DAS will be background noise. An additional complication with the DAS is that it is not possible to ascribe a parametric value to an amplitude difference associated with a particular intrusion event - that is, it is not possible to quantify exactly how much variation in amplitude of the DAS signal will be caused by a “Human Walking” vs. a “Digging” intrusion event.
[0008] Recently, there has been a huge surge in the use of machine learning and deep neural network algorithms for the purposes of event detection in various applications like data network security and credit card fraud. For example, machine learning techniques such as support vector machines are being used for parametric feature classification, providing better classification of events and better event detection. Further, emphasis is given to reducing or removing false alarms completely.
[0009] The PIDS systems are deployed in remote locations and the cost of false alarms is high. The current machine learning based DAS systems perform intrusion detection by implementing an object detection process on captured images whose contents are often amorphous background regions rather than objects with well-defined shapes, and this leads to false alarms. The conventional object detection process fails to avoid the false alarms as the object detection includes a lot of noisy and irrelevant data and fails to find exact boundaries of the object that needs to be detected to identify intrusion. Thus, there exists a need for a better approach that can identify objects with well-defined shapes and thus avoid false alarms.
[0010] The present invention seeks to ameliorate one or more of the aforementioned disadvantages or at least provide a useful alternative.
OBJECT OF INVENTION
[0011] The principal object of the present disclosure is to provide a method and a system for image segmentation and detection in a fiber optic sensing (FOS) enabled intrusion detection system.
[0012] Another object of the present disclosure is to provide a method for data classification and identification for intrusion detection using image patterns.
[0013] Another object of the present disclosure is to provide a method for usage of computer vision technology and associated deep learning methods in a fiber optic sensing (FOS) enabled intrusion detection system.
[0014] Another object of the present disclosure is to utilize a neural network for labelling image segments in a fiber optic sensing (FOS) enabled intrusion detection system.
[0015] Another object of the present disclosure is to utilize a conditional random field (CRF) for structured prediction from the labels in a fiber optic sensing (FOS) enabled intrusion detection system.
SUMMARY
[0016] The present disclosure provides a method and a system for intrusion detection using distributed acoustic sensing (DAS) on a fiber optic cable. The method includes obtaining a two-dimensional (2D) distributed acoustic sensing (DAS) signal and interpreting the 2D DAS signal as an image pattern. Further, the method includes receiving the image pattern and identifying, by an anomaly detection unit, an intrusion event by implementing a machine learning model in the image patterns. Further, the method includes implementing, by a semantic segmentation unit, semantic segmentation in the image patterns where an anomaly is detected, wherein implementing the semantic segmentation comprises: extracting, by a neural network unit, pixels from the image patterns; labelling, by the neural network unit, the pixels obtained from the image patterns, wherein the neural network unit is a fully convolutional neural network; and predicting, by a conditional random field unit, an intrusion event and type, wherein the conditional random field unit implements a conditional random field for structured prediction from the labelled pixels.
[0017] The obtained 2D DAS signal has a time dimension and a sensor dimension. For semantic segmentation, a patch of the 2D signal with a reasonable dimension is considered as an image, wherein the reasonable dimension is 520 pulses × 40 sensor locations.
[0018] For anomaly detection, a one-class support vector machine (ν-SVM) is used, wherein the one-class SVM is fit in every time slot to find outliers across all locations.
[0019] The semantic segmentation unit partitions an image into multiple segments (regions), wherein the multiple segments are sets of pixels of arbitrary shape and each shape is given multiple labels belonging to one or more class labels, wherein the one or more class labels may be a person, a bike, or digging, and one special class is "no event", representing no activity.
[0020] For each image pixel, a probability is predicted for each class label such that all the class probabilities need not sum to 1. If all the class probabilities are low, the image pixel is categorized in the "no event" class, and the event is not of interest and can be filtered out.
[0021] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0022] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0023] FIG. 1 illustrates a block diagram of a conventional intrusion detection system;
[0024] FIG. 2 illustrates a block diagram of a system including a two-tier machine learning model for identifying intrusion detection;
[0025] FIG. 3 illustrates a block diagram of an inference pipeline using a conditional random field;
[0026] FIG. 4 is a flow chart depicting a method for data classification and identification for intrusion detection; and
[0027] FIG. 5 is a flow chart depicting a method for intrusion detection using a distributed acoustic sensing (DAS) on a fiber optic cable.
DETAILED DESCRIPTION OF INVENTION
[0028] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced with or without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0029] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0030] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0031] The present disclosure provides a method for data classification and identification for intrusion detection using image patterns. Further, the present disclosure provides a method for intrusion detection using distributed acoustic sensing (DAS) on a fiber optic cable. The method includes identifying, by an anomaly detection unit, an intrusion event by implementing a machine learning model in image patterns. Further, the method includes implementing, by a semantic segmentation unit, semantic segmentation in the image patterns where an anomaly is detected, wherein implementing the semantic segmentation includes: extracting, by a neural network unit, pixels from the image patterns; labelling, by the neural network unit, the pixels obtained from the image patterns, wherein the neural network unit is a fully convolutional neural network; and predicting, by a conditional random field unit, an intrusion event and type, wherein the conditional random field unit implements a conditional random field for structured prediction from the labelled pixels.
[0032] Unlike conventional methods and systems, the present disclosure provides a method and a system for image segmentation in a fiber optic sensing (FOS) based intrusion detection system. The image segmentation finds exact boundaries of objects to avoid false alarms. Further, the present disclosure provides a method for data classification and identification for intrusion detection using image patterns. The proposed image segmentation methodology enables the FOS based intrusion detection system to implement learning paradigms to automatically learn and improve from experience without being explicitly programmed.
[0033] The learning method uses networks capable of learning in a supervised fashion from labelled data as well as in an unsupervised fashion from data that is unstructured or unlabelled. The deep learning method employs multiple layers of neural networks that enable the system of the present disclosure to teach itself through inference and pattern recognition, rather than through development of procedural code or explicitly coded software algorithms.
[0034] The neural networks enhance their learning capability by varying their uniquely weighted paths based on the received input. The successive layers within the neural network incorporate a learning capability by modifying their weighted coefficients based on their received input patterns. The training of the neural networks is very similar to teaching a human brain to recognize an object. The neural network is repetitively trained from a base data set, where results from the output layer are successively compared to the correct classification of the image.
[0035] In an alternate representation, any machine learning paradigm can be used instead of neural networks in the training and learning process. Machine learning algorithms are a class of algorithms that have proven very successful in identifying signal in noisy environments, and they can be retrained on adjacent datasets with relative ease. The key idea of the current solution to the problem of intrusion detection with a distributed acoustic sensor (DAS) is to treat intrusion events as image patterns. The DAS signals are treated as image patterns and are semantically segmented and labelled. In the present disclosure, the image segmentation approach integrates neural networks and conditional random fields.
[0036] Referring now to the drawings, and more particularly to FIGS. 1 through 5, there are shown preferred embodiments.
[0037] FIG. 1 illustrates a block diagram of a conventional intrusion detection system. The conventional intrusion detection system (100) comprises a sensor cable (102), a sensor interrogator (104), a workstation and digitizer (106), an intelligence layer (108) and a user interface (110).
[0038] In general, the sensor cable (102) may be a fence sensor cable, a buried sensor cable, or the like. The sensor cable (102) is a cable designed with a specific configuration to measure pressure, temperature, strain, vibration, or the like for optimal performance of signal or data transmission and to locate a disturbance along the length of the sensor cable. The sensor cable (102) provides intrusion data. The sensed data from the sensor cable (102) is fed to the sensor interrogator (104). The sensor interrogator (104) measures a large sensing network by acquiring data from the sensor cable (102) simultaneously and at different sampling rates. The sensor interrogator (104) measures the wavelength associated with the light reflected by the sensor cable (102) and sends it to the workstation and digitizer (106). The workstation and digitizer (106) receives analog information from the sensor interrogator (104) and records it digitally. The analog information may be in the form of sound or light. Usually, the information is stored in a file on a computing device.
[0039] The digitized information from the workstation and digitizer (106) is fed to the intelligence layer (108). The intelligence layer (108) is a machine learning based layer that is trained to detect an event, classify an event, or the like. In an embodiment, the learning can be supervised or unsupervised. In an embodiment, the intelligence layer (108) may incorporate neural networks, such as, but not limited to, a feedforward neural network, radial basis function neural network, Kohonen self-organizing neural network, recurrent neural network, convolutional neural network, or modular neural network, in the training and learning process.
[0040] The intelligence layer (108) predicts intrusion events and displays output via the user interface (110). The user interface (110) displays an intrusion activity. However, the conventional intrusion detection system results in false alarms due to noisy and irrelevant data capture, such as images or videos. FIG. 2 provides a solution to the problem of noisy and irrelevant data capture.
[0041] FIG. 2 illustrates a block diagram of a system including a two-tier machine learning model for identifying intrusion detection. The two-tier machine learning model (200) detects events from a distributed acoustic sensor (DAS) such as a fiber optic sensor (FOS) and helps reduce the false alarm rate due to nuisance events such as environmental factors like rain, wind, and random noise in a perimeter intrusion detection system (PIDS). The system (200) acts as a means for data classification and identification for intrusion detection using image patterns.
[0042] The system (200) comprises a fiber optic sensor (202) deployed on a premise, an on-premise server with intrusion detection system (200a), an offline system (200b) and a semantic segmentation unit (224). The on-premise server with intrusion detection system (200a) includes an interrogator (204), a data ingestion and transform unit (208), an anomaly detection unit (210), an event prediction unit (212), a push/pull interface to database (DB) (214) and an SCMV (Software Configuration Management and Visualization) unit (216). The offline system (200b) includes labelled data from production (218), a prediction model (220), and a trained model (222). Alternatively, the whole system (200) may be referred to as an intrusion detection system (200).
[0043] The fiber optic sensor (FOS) (202) measures pressure, temperature, strain, vibration, or the like for optimal performance of signal or data transmission and to locate a disturbance around the FOS (202) in order to identify any type of intrusion. The FOS (202) may be a combination of a fiber optic cable and one or more sensors. The fiber optic cable may be a sensor cable. The sensor cable may be a fence sensor cable, a buried sensor cable, or the like. The sensed data from the FOS (202) is fed to the interrogator (204) connected to it. The FOS (202) acts as both a sensing element and a transmission medium to and from the interrogator (204). The interrogator (204) launches optical pulses into one or more optical fibers of the FOS (202). These optical pulses propagate along a core of the one or more optical fibers and interact with the core material. In this process, a small fraction of the optical power is scattered, propagating back towards the interrogator. The interrogator (204) then analyses the backscattered signal as a function of time and, depending on configuration, further distinguishes temperature, pressure, strain or acoustic signals as a function of distance along the one or more optical fibers.
[0044] The FOS may comprise one or more types of sensor elements, and their individual return signals are distinguished through the use of different optical wavelength bands, similar to radio channels.
[0045] The output of the interrogator (204) is fed to the data ingestion and transform unit (208). The output may be audio signals, video signals, image patterns, or the like. The output is a two-dimensional distributed acoustic sensor (2D DAS) signal. The 2D signal consists of spatio-temporal acoustic signals; in other words, the 2D signal is the output of the distributed acoustic sensor (DAS). One of the dimensions is a finite spatial dimension defined by the length of the fibre, with a discrete value approximately every 1 meter. The other dimension is an unbounded, temporal dimension: the train of values at 1 meter intervals is obtained approximately every 2.5 ms. The values of 1 meter and 2.5 ms depend on the settings of the DAS and have been held constant to ensure consistency of measurements.
[0046] The 2D distributed acoustic sensor signal is converted into image patterns. For example, the DAS signal is segmented into image blocks: the sensor dimension, for example 40 locations on the FOS, is taken as the image width, and the time dimension, for example 520 pulses, is taken as the image height. The data ingestion and transform unit (208) acts as a data repository and data transformation unit. The 2D signals obtained from the interrogator (204) are loaded into the data ingestion and transform unit (208) and transformed into the required format. For example, the 2D signal is transformed into an image having a sensor dimension and a time dimension.
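By way of illustration only, the following Python sketch shows how such a 2D DAS buffer might be cut into 520-pulse × 40-sensor image blocks as described above; the array names, shapes, and placeholder data are assumptions for the example and not part of the disclosed system.

```python
import numpy as np

PULSES = 520    # image height: pulses along the time dimension (~1.3 s)
SENSORS = 40    # image width: sensor locations along the fiber (~1 m apart)

def das_to_image_blocks(das: np.ndarray) -> np.ndarray:
    """Cut a 2D DAS buffer (time x sensor) into PULSES x SENSORS image blocks.

    `das` holds amplitudes sampled roughly every 2.5 ms at ~1 m intervals.
    Returns an array of shape (blocks_t, blocks_s, PULSES, SENSORS).
    """
    n_t = das.shape[0] // PULSES
    n_s = das.shape[1] // SENSORS
    das = das[: n_t * PULSES, : n_s * SENSORS]      # drop ragged edges
    return das.reshape(n_t, PULSES, n_s, SENSORS).swapaxes(1, 2)

# Example: ~10 s of signal from a 400 m stretch of fiber (placeholder data)
blocks = das_to_image_blocks(np.random.randn(4000, 400))
print(blocks.shape)    # (7, 10, 520, 40)
```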
[0047] Alternatively, the data ingestion and transform unit (208) acts as a converter that converts the 2D DAS signals received from the interrogator (204) into image patterns. The data, i.e., the image patterns, from the data ingestion and transform unit (208) are fed to the anomaly detection unit (210).
[0048] The anomaly detection unit (210) acts as a first tier of the system (200). The anomaly detection unit (210) is a coarse filter for intrusion events; that is, it has a very high detection rate but also a high false positive rate. In order to resolve the high false positive rate of intrusion events, a deep learning model is added in the second tier (224). The key idea of the anomaly detection unit (210) is to filter the high volume of image patterns received from the data ingestion and transform unit (208) and pass only a subset of the image patterns that have a high probability of containing events to the second tier (224). This is a key step in improving the response time of the FOS system.
[0049] The machine learning model may implement supervised learning or unsupervised learning. During supervised learning, the model learns on a labelled dataset and maps an input to an output based on the labelled dataset. In contrast, during unsupervised learning, the model learns on an unlabelled dataset and extracts features and patterns to draw an inference or output. The anomaly detection unit (210) uses an unsupervised learning model.
[0050] The unsupervised machine learning model may be, but is not limited to, density-based techniques, one-class support vector machines, Bayesian networks, hidden Markov models (HMMs), cluster analysis-based outlier detection, fuzzy logic-based outlier detection, ensemble techniques, or the like.
[0051] The anomaly detection unit (210) may use a one-class support vector machine as the choice of algorithm. Generally, one-class support vector machines allow the anomaly detection unit (210) to learn a decision function for novelty detection. Support vector machines (SVMs) belong to the class of algorithms that classify datasets where the decision boundary between the classes is non-linear. SVMs can create a non-linear decision boundary between classes by projecting the features through a non-linear function to a higher dimensional hyperplane. The SVM identifies an optimal non-linear decision boundary by structuring it as an optimization problem which maximizes the separation between the classes. An implementation of the generic support vector machine is the Support Vector Method for Novelty Detection by Schölkopf et al., which identifies an optimal hyperplane that separates all the data points from the origin in a transformed feature space. This hyperplane is then utilized to identify outliers.
[0052] The stream of data that is received from the data ingestion and transform unit (208) is in the form of image blocks, i.e., amplitudes within a time slot at a set of locations or sensors. The one-class support vector machine (ν-SVM) takes as an input every image block and finds outliers, i.e., image blocks that are different from the majority of the image blocks. As one of its features, the one-class support vector machine (ν-SVM) uses the median of the signal amplitudes within a time slot at a set of locations. The ν-SVM framework also supports defining custom features by extending the class representing features.
[0053] The parameters defined in the one-class support vector machine (ν-SVM) are listed in the table below (TABLE A). The parameter ν is an upper bound on the fraction of outliers and a lower bound on the fraction of support vectors.
| Parameter | Description |
|---|---|
| Timeslot | 520 pulses of 2.508 ms, i.e., ~1.3 sec |
| ν | One-class SVM parameter, set to 0.1 |
| outlier_threshold | Decides the boundary for outliers, currently set to 2.0 |
| max_signal_amplitude | Dummy anomaly value, currently set to 128.0 |

TABLE A
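A minimal sketch of this first-tier outlier detection, assuming scikit-learn's OneClassSVM and the per-timeslot median-amplitude feature described above; the kernel choice, helper names, and placeholder data are illustrative, with ν taken from TABLE A.

```python
import numpy as np
from sklearn.svm import OneClassSVM

NU = 0.1    # ν: upper bound on the fraction of outliers (TABLE A)

def find_outlier_locations(timeslot: np.ndarray) -> np.ndarray:
    """Fit a one-class ν-SVM on one time slot (520 pulses x n sensors) and
    return the indices of sensor locations flagged as outliers."""
    # One feature per location: the median signal amplitude within the slot
    features = np.median(np.abs(timeslot), axis=0).reshape(-1, 1)
    svm = OneClassSVM(nu=NU, kernel="rbf", gamma="scale").fit(features)
    labels = svm.predict(features)          # +1 for inliers, -1 for outliers
    return np.where(labels == -1)[0]

slot = np.random.randn(520, 256)            # one 1.3 s slot across 256 sensors
print(find_outlier_locations(slot))         # locations passed to the second tier
```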
[0054] Upon receiving image patterns from the data ingestion and transform unit (208), the anomaly detection unit (210) filters out image blocks that have a high probability of containing intrusion events. Only these image blocks are then passed on to the semantic segmentation unit (224), thus vastly reducing the computational requirements of the proposed FOS system.
[0055] The semantic segmentation unit (224) acts as a second tier of the system (200). The semantic segmentation unit (224) partitions the image into multiple segments (regions). These segments are sets of pixels of arbitrary shape where all the pixels have the same assigned label. Semantic segmentation is a method in which labelling of image pixels is carried out in the image pattern. A dense CRF (conditional random field) mechanism is used for modelling the semantic segmentation. The conditional random field (CRF) is used for drawing inference (prediction) after image segmentation by a convolutional/fully convolutional neural network (CNN/FCN).
[0056] The DAS signal across the FOS (fiber optic sensor) is a 2D signal having a first, bounded sensor dimension and a second, unbounded time dimension. A patch of the 2D signal with dimensions of, for example, 520 pulses × 40 sensor locations is considered as an image. The semantic segmentation unit (224) classifies the image at the pixel level. For example, the classification (class labels) comprises categories of intrusion like person, bike, and digging, and a special class of "no event" representing "no activity". The semantic segmentation unit (224) finds exact boundaries of objects, as against usual object detection that localizes objects with bounding boxes (rectangles).
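For concreteness, a minimal fully convolutional network of the kind the segmentation unit could use, sketched in PyTorch; the layer widths are assumptions. Because every layer is convolutional, the per-pixel class map keeps the input's spatial size, and a sigmoid head yields an independent probability per class (so, as described below, the per-pixel class probabilities need not sum to 1).

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Illustrative fully convolutional segmenter for 520 x 40 DAS images.

    Outputs one independent probability per class (e.g., person, bike,
    digging) at every pixel of the input image block.
    """

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),    # per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))               # (B, C, 520, 40)

probs = TinyFCN()(torch.randn(1, 1, 520, 40))
print(probs.shape)    # torch.Size([1, 3, 520, 40])
```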
[0057] In an example, a patch of the 2D signal with reasonable dimensions of 520 pulses × 40 sensor locations is considered as an image. In basic terms, the idea is to convert the sensor signals to an image and then apply a deep neural network to recognize the events in the image. For every pixel of the image, the labelled data from production (218) is forwarded to the prediction model (220). The prediction model (220) predicts a probability for every class label; these class probabilities need not sum to 1. The prediction for every pixel is output and quantified as a vector of probability values, and the CRF is used to draw these predictions or inferences. For example, if all class label probabilities are low, it indicates an "unknown event" that is not of interest and can be filtered out. In another example, a value close to 0 indicates that the corresponding class is not likely to be present, while a value close to 1 indicates that the corresponding class is likely to be present.
[0058] Further, the trained model (222) is trained to identify the classes based upon their probability values. The trained model (222) is also trained for identifying and handling nuisance events.
[0059] Examples of the classes considered are, but are not limited to, PERSON, BIKE, DIGGING (or any other event type) and a special class NO_EVENT (the stationary noise output of the sensor, in the absence of any physical event, in an underground deployment).
[0060] An UNKNOWN EVENT class is then returned by the trained model (222) for all events that the trained model (222) has not been trained for, which comprise nuisance events. A threshold is determined for the probability values corresponding to all classes; this threshold is determined through a grid search of likely values to obtain the best classification accuracy across all classes. The threshold of the probability values is a hyperparameter for the CRF inference pipeline. The hyperparameter can be calibrated separately for each class; however, in the current implementation, a single threshold value is used for all classes. If the probability returned by the model for a specific location is below the threshold for all the classes, the location is considered to be undergoing an UNKNOWN EVENT.
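A sketch of the thresholding rule just described; the class list, the vector encoding, and the numeric threshold are assumptions for illustration (the disclosure determines the actual threshold by grid search).

```python
import numpy as np

CLASSES = ["PERSON", "BIKE", "DIGGING"]
THRESHOLD = 0.5    # assumed value; in practice found by grid search

def classify_location(probs: np.ndarray) -> str:
    """Map one location's per-class probability vector to a label; if no
    class clears the threshold, the location is undergoing an UNKNOWN
    EVENT (e.g., a nuisance event the model was not trained for)."""
    if probs.max() < THRESHOLD:
        return "UNKNOWN_EVENT"
    return CLASSES[int(probs.argmax())]

print(classify_location(np.array([0.10, 0.20, 0.15])))    # UNKNOWN_EVENT
print(classify_location(np.array([0.90, 0.20, 0.10])))    # PERSON
```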
[0061] All the information is stored in a database or on a server and can be accessed via the push/pull interface to database (DB) (214). Further, the SCMV (Software Configuration Management and Visualization) unit (216) systematically manages, organizes, and controls the changes in the system (200), more specifically in the deep learning model.
[0062] FIG. 3 illustrates a block diagram of an inference pipeline using a conditional random field (CRF). The inference pipeline (300) includes a data ingestion unit (302), an anomaly detection unit (304), a neural network unit (306), a conditional random field unit (308) and a user interface (310).
[0063] The inference pipeline works in three stages. The first stage is anomaly detection. An input from the data ingestion unit (302) is fed to the anomaly detection unit (304). The input is raw data such as image patterns. The anomaly detection unit (304) is a coarse filter for intrusion events implemented with a one-class support vector machine (ν-SVM). The anomaly detection unit (304) filters and decides which blocks should be sent to the segmentation unit (224) (shown in FIG. 2). These blocks are 1.3 sec × 256 sensors. A forward pass through the two-tier model provides the classification output, i.e., class labels on the image blocks. The number of such blocks formed in the training data is in the multi-millions and requires optimization. The vast majority of the blocks do not contain any event, as the intrusion events to be detected are infrequent. In an example, the model is trained with blocks in which 50 percent or more of the pixels lie in event regions marked with ground truth labels referring to a class other than "NO EVENT".
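The training-set optimization described above can be sketched as a simple filter over ground-truth label masks; the mask encoding (0 for NO_EVENT) and the example mask are assumptions for illustration.

```python
import numpy as np

NO_EVENT = 0    # assumed ground-truth code for "NO EVENT" pixels

def keep_for_training(label_mask: np.ndarray,
                      min_event_fraction: float = 0.5) -> bool:
    """Keep a block for training only if at least `min_event_fraction` of
    its pixels carry a ground-truth label other than NO_EVENT."""
    return np.mean(label_mask != NO_EVENT) >= min_event_fraction

mask = np.zeros((520, 40), dtype=int)
mask[:300, :] = 2    # e.g., DIGGING pixels covering ~58% of the block
print(keep_for_training(mask))    # True
```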
[0064] An important shortcoming of this optimization concerns events which span more than one block. For such events, the outlying edges (corners, beginnings and ends) of the events get truncated and may not be transmitted to the intelligence layer. To mitigate this issue, overlapping blocks are considered. These blocks overlap in the time as well as the sensor dimension. The image block creation process can now be considered a "rolling window", as opposed to discrete chunks of the 2D DAS signal. A new block at every 6th sensor location (a configurable parameter) is passed to the segmentation unit. The combined output from the blocks is created by taking the middle portion of all output images.
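A sketch of the rolling-window block creation with the configurable sensor stride of 6, and of stitching per-block outputs by keeping only the middle columns; the array shapes and exact stitching margins are assumptions, and the margins at the fiber ends are left unhandled here for brevity.

```python
import numpy as np

PULSES, SENSORS, STRIDE = 520, 40, 6   # block size; configurable sensor stride

def rolling_sensor_blocks(das: np.ndarray):
    """Yield overlapping PULSES x SENSORS blocks, starting a new block at
    every STRIDE-th sensor location, so events spanning block edges are
    seen whole in at least one block."""
    for s in range(0, das.shape[1] - SENSORS + 1, STRIDE):
        yield s, das[:PULSES, s : s + SENSORS]

def combine_outputs(outputs: dict) -> np.ndarray:
    """Stitch per-block label maps by keeping only the middle STRIDE columns
    of each block's output, so truncated block edges fall in the discarded
    margins. `outputs` maps sensor offset -> (PULSES, SENSORS) label map."""
    combined = np.zeros((PULSES, max(outputs) + SENSORS), dtype=int)
    mid = (SENSORS - STRIDE) // 2              # 17 for a 40-wide block
    for s, out in outputs.items():
        combined[:, s + mid : s + mid + STRIDE] = out[:, mid : mid + STRIDE]
    return combined
```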
[0065] Simultaneously, the input from the data ingestion unit (302) is fed to the neural network unit (306). The neural network unit may implement a feedforward neural network, radial basis function neural network, Kohonen self-organizing neural network, recurrent neural network, convolutional neural network, fully convolutional neural network, modular neural network, etc.
[0066] In an embodiment, the neural network unit (306) is a fully convolutional neural network (FCNN)/deep learning neural network (DNN), which acts as a second stage. Apart from the input received from the data ingestion unit (302), the output of the anomaly detection unit (304) is also fed to the neural network unit (306). The neural network unit (306) generates an output as unary potentials that are fed to the CRF/dense CRF unit (308), which acts as a third stage.
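One possible realization of this third stage, using the open-source pydensecrf package as a stand-in dense-CRF implementation (the disclosure does not name a specific library): the network's per-pixel class probabilities become unary potentials, and a pairwise term encourages spatially smooth labels during mean-field inference. The sketch assumes the probabilities have been normalized so that the unary potentials can be taken as their negative log.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

H, W, N_CLASSES = 520, 40, 4    # block size; classes including NO_EVENT

def crf_refine(probs: np.ndarray, n_iters: int = 5) -> np.ndarray:
    """Refine per-pixel class probabilities of shape (N_CLASSES, H, W)
    with a dense CRF and return a per-pixel label map of shape (H, W)."""
    d = dcrf.DenseCRF2D(W, H, N_CLASSES)
    d.setUnaryEnergy(unary_from_softmax(probs))   # unary = -log(probability)
    d.addPairwiseGaussian(sxy=3, compat=3)        # spatial smoothness term
    q = d.inference(n_iters)                      # mean-field inference
    return np.argmax(np.array(q), axis=0).reshape(H, W)
```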
[0067] The output of the CRF/Dense CRF (308) is displayed on the user interface (310). The user interface (310) may be a smart phone, a computer, a laptop, or other suitable computing device.
[0068] FIG. 4 is a flow chart (400) depicting a method for data classification and identification for intrusion detection. At steps (402) and (404), the method includes receiving image patterns and identifying, by the anomaly detection unit (210), an intrusion event by implementing a machine learning model in the image patterns. The data ingestion and transform unit (208) converts the 2D DAS signals received from the interrogator (204) into the image patterns and forwards the image patterns to the anomaly detection unit, and intrusion events are identified in the image patterns.
[0069] The anomaly detection unit (210) is a coarse filter for intrusion events; that is, it has a very high detection rate but also a high false positive rate. In order to resolve the high false positive rate of intrusion events, the anomaly detection unit (210) uses a machine learning model. The anomaly detection unit (210) may use one-class support vector machine learning. Generally, the one-class support vector machine learning method allows the anomaly detection unit (210) to learn a decision function for novelty detection or a probability distribution. The one-class support vector machine learning method classifies a new data set as similar to or different from a training set.
[0070] At steps (406) and (408), the method includes extracting, by the neural network unit (306), pixels from the image patterns and labelling, by the neural network unit (306), the pixels obtained from the image patterns, wherein the neural network unit (306) is a fully convolutional neural network.
[0071] Again, at steps (406) and (408), image semantic segmentation is carried out, by the semantic segmentation unit (224), in the image patterns where an anomaly is detected, and the extraction of the pixels of the image is carried out. The semantic segmentation unit (224) partitions the image into multiple segments (regions). These segments are sets of pixels of arbitrary shape where all the pixels have the same assigned label. The fully convolutional neural network/deep learning neural network is used for labelling the pixels obtained from the image patterns.
[0072] At step (410), the method includes predicting, by the conditional random field unit (308), an intrusion event and type, wherein the conditional random field unit (308) implements a conditional random field for structured prediction from the labelled pixels. The intrusion events are predicted using a conditional random field. The conditional random field (CRF) is used for drawing structured inference (prediction) from the labelled pixels received from the fully convolutional neural network.
[0073] The detailed functionality of all the blocks/units is already explained in conjunction with FIGS. 2-3.
[0074] FIG. 5 is a flow chart (500) depicting a method for intrusion detection using a distributed acoustic sensing (DAS) on a fiber optic cable. At step (502), the method includes obtaining, by the data ingestion unit (302), a two-dimensional (2D) distributed acoustic sensing (DAS) signal and interpreting the 2D DAS signal as an image pattern. At step (504), the method includes receiving, by the anomaly detection unit (210), the image pattern. At step (506), the method includes identifying, by the anomaly detection unit (210), an intrusion event by implementing a machine learning model in the image patterns. At steps (508), (510) and (512), the method includes implementing, by the semantic segmentation unit (224), semantic segmentation in the image patterns where an anomaly is detected. Specifically, at step (508), the method includes extracting, by the neural network unit (306), pixels from the image patterns.
[0075] At step (510), the method includes labelling, by the neural network unit (306), the pixels obtained from the image patterns, wherein the neural network unit (306) is a fully convolutional neural network.
[0076] Lastly, at step (512), the method includes predicting, by the conditional random field unit (308), intrusion event and type, wherein the conditional random field unit (308) implements a conditional random field for structured prediction from the labelled pixels.
[0077] The detailed functionality of all the blocks/units is already explained in conjunction with FIGS. 2-3.
[0078] The various actions, acts, blocks, steps, or the like in the flow chart (400) and (500) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0079] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0080] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
[0081] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[0082] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0083] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0084] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0085] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0086] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0087] Conditional language used herein, such as, among others, "can," "may," "might," “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
[0088] While the detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
CLAIMS
We claim:
1. A system (200) for data classification and identification for intrusion detection using image patterns, comprising:
an anomaly detection unit (210) configured to identify an intrusion event by implementing a machine learning model in the image patterns;
a semantic segmentation unit (224) configured to implement semantic segmentation in the image patterns where an anomaly is detected, wherein the semantic segmentation unit (224) comprises:
a neural network unit (306) configured to extract pixels from the image patterns and label the pixels obtained from the image patterns, wherein the neural network unit (306) is a fully convolutional neural network; and
a conditional random field unit (308) configured to predict an intrusion event and type, wherein the conditional random field unit (308) implements a conditional random field for structured prediction from the labelled pixels.
2. The system as claimed in claim 1, wherein a one-class support vector machine (ν-SVM) is used for anomaly detection.
3. The system as claimed in claim 2, wherein the one-class SVM is fit in every time slot to find outliers across all locations.
4. The system as claimed in claim 1, wherein the semantic segmentation unit (224) partitions an image into multiple segments (regions), wherein the multiple segments are sets of pixels of arbitrary shape and each shape is given multiple labels belonging to one or more class labels.
5. The system as claimed in claim 4, wherein the one or more class labels may be a person, a bike, or digging.
6. The system as claimed in claim 4, wherein one special class is no event, representing no activity.
7. The system as claimed in claim 4, wherein for each image pixel, a probability is predicted for each class label such that all the class probabilities need not sum up to 1.
8. The system as claimed in claim 4, wherein if all the class probabilities are low, the image pixel is categorized in the "no event" class, and the event is not of interest and can be filtered out.
9. A method for data classification and identification for intrusion detection using image patterns, comprising:
identifying, by an anomaly detection unit (210), an intrusion event by implementing a machine learning model in image patterns;
implementing, by a semantic segmentation unit (224), semantic segmentation in the image patterns where an anomaly is detected, wherein implementing the semantic segmentation comprises:
extracting, by a neural network unit (306), pixels from the image patterns and labelling, by the neural network unit (306), the pixels obtained from the image patterns, wherein the neural network unit (306) is a fully convolutional neural network; and
predicting, by a conditional random field unit (308), an intrusion event and type, wherein the conditional random field unit (308) implements a conditional random field for structured prediction from the labelled pixels.
10. The method as claimed in claim 9, wherein a one-class support vector machine (ν-SVM) is used for anomaly detection.
11. The method as claimed in claim 10, wherein the one-class SVM is fit in every time slot to find outliers across all locations.
12. The method as claimed in claim 9, wherein the semantic segmentation unit (224) partitions an image into multiple segments (regions), wherein the multiple segments are sets of pixels of arbitrary shape and each shape is given multiple labels belonging to one or more class labels.
13. The method as claimed in claim 12, wherein the one or more class labels may be a person, a bike, or digging.
14. The method as claimed in claim 12, wherein one special class is no event, representing no activity.
15. The method as claimed in claim 12, wherein for each image pixel, a probability is predicted for each class label such that all the class probabilities need not sum up to 1.
16. The method as claimed in claim 12, wherein if all the class probabilities are low, the image pixel is categorized in the "no event" class, and the event is not of interest and can be filtered out.
17. A method for intrusion detection using a distributed acoustic sensing (DAS) on a fiber optic cable, comprising:
obtaining, by a data ingestion unit (302), a two-dimensional (2D) distributed acoustic sensing (DAS) signal and interpreting the 2D DAS signal as an image pattern;
receiving, by an anomaly detection unit (210), the image pattern;
identifying, by the anomaly detection unit (210), an intrusion event by implementing a machine learning model in the image patterns;
implementing, by a semantic segmentation unit (224), semantic segmentation in the image patterns where an anomaly is detected; wherein implementing the semantic segmentation comprises:
extracting, by a neural network unit (306), pixels from the image patterns and labelling, by the neural network unit (306), the pixels obtained from the image patterns, wherein the neural network unit (306) is a fully convolutional neural network; and
predicting, by a conditional random field unit (308), an intrusion event and type, wherein the conditional random field unit (308) implements a conditional random field for structured prediction from the labelled pixels.
18. The method as claimed in claim 17, wherein the obtained 2D DAS signal has a time dimension and a sensor dimension.
19. The method as claimed in claim 17, wherein a patch of the 2D signal with a reasonable dimension is considered for semantic segmentation.
20. The method as claimed in claim 19, wherein the reasonable dimension is 520 pulses×40 sensor locations as an image.
21. The method as claimed in claim 17, wherein a one-class support vector machine (ν-SVM) is used for anomaly detection.
22. The method as claimed in claim 21, wherein the one-class SVM is fit in every time slot to find outliers across all locations.
23. The method as claimed in claim 17, wherein the semantic segmentation unit (224) partitions an image into multiple segments (regions), wherein the multiple segments are sets of pixels of arbitrary shape and each shape is given multiple labels belonging to one or more class labels.
24. The method as claimed in claim 23, wherein the one or more class labels may be a person, a bike, or digging.
25. The method as claimed in claim 23, wherein one special class is no event, representing no activity.
26. The method as claimed in claim 23, wherein for each image pixel, a probability is predicted for each class label such that all the class probabilities need not sum up to 1.
27. The method as claimed in claim 23, wherein if all the class probabilities are low, the image pixel is categorized in the "no event" class, and the event is not of interest and can be filtered out.