Abstract: The present disclosure provides a sampling method and system used for detecting intrusion events in a fiber optic sensing (FOS) based intrusion detection system (100). The sampling method (200) includes sampling, by a sampling unit (108), a two-dimensional signal transmitted from a distributed acoustic sensor (100a) into a block at a first sampling rate. Further, the method includes defining, by the sampling unit (108), a minimum percentage of pixels in the block that have an event. Furthermore, the method includes determining, by the sampling unit (108), whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels.
[0001] The present disclosure relates to a fiber optic sensing technology, and more specifically relates to a method and a system for sampling data for detecting intrusion events in a fiber optic sensing (FOS) environment.
BACKGROUND OF INVENTION
[0002] Perimeter Intrusion Detection System (PIDS) refers to a suite of products that address physical security requirements through various solutions combining hardware and software technologies. PIDS solutions are used in industrial applications such as defense installation fence/perimeter security, border security, early detection of digging close to telecom cable installations (data centers), industrial facility fence/perimeter security, power plants, and manufacturing plants. Various technologies are available for a full-fledged Perimeter Intrusion Detection System (PIDS), such as Y beams, motion cameras, infrared sensors, optical cameras, and fiber optic sensing (FOS).
[0003] Of late, fiber-optic distributed acoustic sensing (DAS) based intrusion detection systems are being used that provide efficient solutions for several infrastructures such as border, airport, pipeline security, etc. In general, the DAS systems detect and classify acoustic vibrations using standard telecommunication fibers buried under the ground or deployed over a fence. These acoustic vibrations help identify events that may pose a threat to the area where the DAS system has been deployed.
[0004] Sometimes, activities of interest that are being captured by the DAS system may not pose any type of threat depending upon time and location of the activity, such as any ground digging activity related to an agricultural event or any other environmental factors like rain, wind, random noise.
[0005] The existing solutions suffer from several shortcomings, such as a high false alarm rate due to nuisance events and environmental factors like rain, wind, and random noise; high response time, since high data volumes can make detection and classification of intrusion events lengthy; low adaptability to various types of events; high cost for new deployments; and signal fading. Further, for novel event types, environments, soil types, and applications, the system needs to be re-engineered, which involves experts' art and craft and leads to high turnaround time. Various analytical models have been developed and utilized to address these shortcomings; however, the analytical models are frozen once trained and deployed.
[0006] The intrusion detection solution should have a low level of false positive rate while also returning alerts on intrusion within seconds (<2 seconds is a common requirement) in order to be viable. These requirements strain system resources in opposing directions. In order to have a low false positive rate, more suspected events need to be analyzed and this is computationally expensive and increases response time.
[0007] In addition to this, intrusions are rare events with a low occurrence rate - meaning most of the signal from the DAS will be background noise. An additional complication with the DAS is that it is not possible to ascribe a parametric value to an amplitude difference associated with a particular intrusion event - that is, it is not possible to quantify exactly how much variation in amplitude of the DAS signal will be caused by a “Human Walking” vs. a “Digging” intrusion event.
[0008] Moreover, current DAS systems perform intrusion detection by localizing objects with bounding boxes (rectangles). Such DAS intrusion patterns are more likely to be amorphous background regions than objects with well-defined shapes, which leads to false alarms.
[0009] Recently, there has been a huge surge in the use of machine learning and deep neural network algorithms for event detection in various applications like data network security and credit card fraud. For example, for parametric feature classification, machine learning, support vector machines, and the like are being used to provide better classification of events and better event detection. Further, emphasis is given to reducing or removing false alarms completely.
[0010] For a machine learning model based intrusion detection system, one of the biggest challenges is large data sets and their distribution. Further, in an intrusion detection system, intrusion events are rare. As intrusion events are rare, there is an extreme imbalance between the portion of a signal that has events vs. the portion that has no events. For example, if an event is viewed as an image, more than 98 percent of the pixels of the image have no event. Thus, any machine learning model trained on such an imbalanced dataset will be biased towards the most frequent class and will have low performance. The major problem such systems face is how to get a balanced dataset of normal events and events with anomalies, given that intrusion events are rare. In other words, if samples of images/blocks are drawn at random during intrusion detection, we would get a disproportionately high number of images where there is no event. This would bias a classifier to output only "no event". Thus, there exists a need for a sampling strategy that adjusts such biasing in data while improving the classification accuracy of the machine learning model.
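The imbalance described above can be illustrated with a minimal numerical sketch (all array sizes, the event region, and variable names are our own illustrative assumptions, not taken from the disclosure): when a rare, localized event occupies about 2% of a label mask, randomly drawn blocks almost never exceed even a modest event-pixel threshold.

```python
import numpy as np

# Hypothetical illustration: a label mask where ~2% of pixels belong to
# a single localized "event" region, as a real intrusion would produce.
rng = np.random.default_rng(0)
mask = np.zeros((1000, 400), dtype=np.uint8)  # 1 = event pixel, 0 = background
mask[100:180, 50:150] = 1                     # 8000 of 400000 pixels = 2%

def block_event_fraction(mask, top, left, h=520, w=40):
    """Fraction of event pixels in one h x w block of the label mask."""
    return mask[top:top + h, left:left + w].mean()

# Draw 200 random blocks: every one of them falls below a 25% threshold,
# so a classifier trained on them would mostly see "no event".
fractions = [
    block_event_fraction(mask,
                         rng.integers(0, mask.shape[0] - 520),
                         rng.integers(0, mask.shape[1] - 40))
    for _ in range(200)
]
below_threshold_share = np.mean([f < 0.25 for f in fractions])
print(f"share of random blocks below a 25% event-pixel threshold: {below_threshold_share:.2f}")
```

This is why a purely random draw biases the training set towards the "no event" class and why an event-aware sampling criterion is needed.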
[0011] The present invention seeks to ameliorate one or more of the aforementioned disadvantages.
OBJECT OF INVENTION
[0012] The principal object of the present disclosure is to provide a sampling method and system to be used for detecting intrusion events using a fiber optic sensor (FOS) based intrusion detection system such that the detection has low false positives and can be performed within seconds.
[0013] Another object of the present disclosure is to create an image block from a distributed acoustic sensor (DAS) signal and to apply a sub-sampling strategy over all such image blocks from a DAS signal in order to build a performant intrusion detection system.
[0014] Another object of the present disclosure is to provide a method and a system for sampling events to create a balance between a portion of the DAS signal that has an event vs. a portion of the DAS signal that has no event, in order to build a performant intrusion detection system.
[0015] Another object of the present disclosure is to provide an overlap of a time dimension and a sensor dimension of an image block for ensuring higher accuracy in detecting intrusion events.
SUMMARY
[0016] The present disclosure provides a sampling method and system used for detecting intrusion events using fiber optic sensing (FOS). The fiber optic sensing (FOS) based intrusion detection system comprises a sampling unit. The sampling unit is configured to sample a two-dimensional signal transmitted from a distributed acoustic sensor into a block at a first sampling rate. Further, the sampling unit is configured to define a minimum percentage of pixels in the block that have an event. Furthermore, the sampling unit is configured to determine whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels.
[0017] The fiber optic sensing (FOS) based intrusion detection system further comprises a neural network unit configured to implement a convolutional neural network and to classify the block into event types based on a sampled output received from the sampling unit, and a conditional random field unit configured to implement a conditional random field mechanism to model the classified block and to provide structured predictions.
[0018] The sampling method for fiber optic sensing based intrusion detection comprises sampling, by a sampling unit, a two-dimensional signal transmitted from a distributed acoustic sensor into a block at a first sampling rate. The method further includes defining, by the sampling unit, a minimum percentage of pixels in the block that have an event; and determining, by the sampling unit, whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels.
[0019] The sampling method further includes implementing, by a neural network unit, a convolutional neural network to classify the block into event types based on a sampled output received from the sampling unit; and implementing, by a conditional random field unit, a conditional random field mechanism to model the classified block and to provide structured predictions.
[0020] The block has a sensor dimension and a time dimension. A rolling window is used to provide a small overlap in the time and sensor dimension for consecutive blocks. The events are class labels defined to detect an intrusion event.
[0021] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0022] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0023] FIG. 1 illustrates a block diagram of a fiber optic sensing (FOS) based intrusion detection system including a sampling unit; and
[0024] FIG. 2 is a flow chart depicting a method for sampling data in the FOS based intrusion detection system.
DETAILED DESCRIPTION OF INVENTION
[0025] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced with or without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0026] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art, without departing from the scope of the invention.
[0027] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0028] The present disclosure provides a sampling method and system used for detecting intrusion events in a fiber optic sensing (FOS) based intrusion detection system. The sampling method includes sampling, by a sampling unit, a two-dimensional signal transmitted from a distributed acoustic sensor into a block at a first sampling rate. Further, the method includes defining, by the sampling unit, a minimum percentage of pixels in the block that have an event. Furthermore, the method includes determining, by the sampling unit, whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels.
[0029] The key idea of the current solution to the problem of intrusion detection with a distributed acoustic sensor is to create an image block from a distributed acoustic sensor (DAS) signal and to define a minimum pixel percentage criterion for event pixels in an image. Further, the key idea is to provide a method and a system for sampling events to create a balance between a portion of the DAS signal that has an event vs. a portion of the DAS signal that has no event.
[0030] Referring now to the drawings, and more particularly to FIGS. 1 through 2, there are shown preferred embodiments.
[0031] FIG. 1 illustrates a block diagram of a fiber optic sensing (FOS) based intrusion detection system including a sampling unit. The FOS based intrusion detection system (100) detects events and helps in reducing the false alarm rate due to nuisance events and environmental factors like rain, wind, and random noise in a perimeter intrusion detection system (PIDS). The FOS based intrusion detection system (100) comprises a sensor cable (102), a sensor interrogator (104), an FOS data ingestion and transform unit (106), a sampling unit (108), a neural network unit (110), a conditional random field unit (112) and an FOS user interface (114).
[0032] The sensor cable (102) may be a fence sensor cable, a buried sensor cable, or the like. The sensor cable (102) is a cable designed with a specific configuration to measure pressure, temperature, strain, vibration, or the like for optimal performance of signal or data transmission and to locate a disturbance along a length of the sensor cable. The sensor cable (102) provides intrusion data. The sensed data from the sensor cable (102) is fed to the sensor interrogator (104). The sensor interrogator (104) measures a large sensing network by acquiring data from the sensor cable (102) simultaneously and at different sampling rates. The sensor interrogator (104) measures the wavelength associated with the light reflected by the sensor cable (102) and sends it to a workstation and digitizer (not shown in the figure). The workstation and digitizer unit receives analog information from the sensor interrogator (104) and records it digitally. The analog information may be in the form of sound or light. Usually, the information is stored in a file on a computing device.
[0033] The sensor cable (102) is a fiber optic sensor (FOS) that measures pressure, temperature, strain, vibration, or the like for optimal performance of signal or data transmission and to locate a disturbance around the sensor cable (102) in order to identify any type of intrusion. The sensor cable (102) may be a combination of a fiber optic cable and one or more sensors. The sensor cable (102) may be a fiber optic cable. The sensor cable (102) may be a fence sensor cable, a buried sensor cable, or the like. The sensed data from the sensor cable (102) is fed to the interrogator (104) connected to it. The sensor cable (102) acts as both a sensing element and a transmission medium to and from the interrogator (104). The interrogator (104) launches optical pulses into one or more optical fibers of the sensor cable (102). These optical pulses propagate along a core of the one or more optical fibers and interact with the core material. In this process, a small fraction of optical power is scattered, propagating back towards the interrogator. The interrogator (104) then analyses the backscattered signal as a function of time and, depending on configuration, further distinguishes a temperature, pressure, strain or acoustic signal as a function of distance along the one or more optical fibers.
[0034] The sensor cable (102) may comprise one or more types of sensor elements, and their individual return signals are distinguished through the use of different optical wavelength bands, similar to radio channels.
[0035] The sensor cable (102) and the sensor interrogator (104) may jointly be called a distributed acoustic sensor (100a), and an output of the distributed acoustic sensor (100a) can be called a distributed acoustic sensor (DAS) signal, which may be 2D in nature.
[0036] The output of the interrogator (104) is fed to the FOS data ingestion and transform unit (106). The output may be audio signals, video signals, image patterns, or the like. The output is a two-dimensional distributed acoustic sensor (2D DAS) signal. One of the dimensions is a finite spatial dimension defined by the length of the fiber. In an example, discrete values are spaced at approximately 1-meter intervals. The other dimension is unbounded, i.e., a temporal dimension. In an example, a train of values at 1-meter intervals is obtained approximately every 2.5 ms. The values of 1 meter and 2.5 ms depend on the settings of the sensor cable (102). They have been held constant to ensure consistency of measurements.
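The geometry of the 2D DAS signal described above can be sketched as follows (the fiber length and observation window are illustrative assumptions; the 1 m spacing and 2.5 ms pulse period follow the example in the paragraph):

```python
# Sketch of the 2D DAS signal geometry: one spatial sample per ~1 m of
# fiber, one temporal sample (pulse) every ~2.5 ms. A 1.3 s window over
# a 1 km fiber yields a 520 x 1000 array of signal values.
fiber_length_m = 1000        # assumed fiber length
spatial_step_m = 1.0         # ~1 m per spatial sample
pulse_period_s = 0.0025      # ~2.5 ms per pulse
duration_s = 1.3             # assumed observation window

n_locations = round(fiber_length_m / spatial_step_m)  # finite spatial dimension
n_pulses = round(duration_s / pulse_period_s)         # temporal dimension (grows without bound)
print((n_pulses, n_locations))  # (520, 1000)
```

Note that 520 pulses matches the block height used in the example below, so one block corresponds to roughly 1.3 seconds of signal.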
[0037] The FOS data ingestion and transform unit (106) works in tandem with the sampling unit (108).
[0038] The sampling unit (108) converts the 2D distributed acoustic sensor signal received from the sensor interrogator (104) into blocks. The terms image, block, patch, ROI, box, array, tensor, and matrix may be used interchangeably. In an example, the proposed invention has a CxHxW matrix, where C = number of channels, H = height of 520 pixels, and W = width of 40 pixels. One pixel corresponds to a discretized signal value spanning approximately 2.508 ms and 1.02 m. Other CxHxW dimensions are also possible.
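A minimal sketch of this conversion, assuming the CxHxW example above with H = 520 pulses and W = 40 locations (the dummy signal size and variable names are our own):

```python
import numpy as np

# Cut a 2D DAS signal (time x sensor) into C x H x W blocks with
# H = 520 and W = 40, adding a channel axis so each block is 1 x 520 x 40.
H, W = 520, 40
signal = np.zeros((1040, 120), dtype=np.float32)  # dummy 2D DAS signal

blocks = [
    signal[t:t + H, s:s + W][np.newaxis, :, :]    # channel axis first
    for t in range(0, signal.shape[0] - H + 1, H)
    for s in range(0, signal.shape[1] - W + 1, W)
]
print(len(blocks), blocks[0].shape)  # 6 (1, 520, 40)
```

Here the dummy signal covers two block heights and three block widths, so six non-overlapping blocks result; the rolling-window variant described later adds a small overlap between consecutive blocks.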
[0039] The blocks are one or more images that are converted from the 2D DAS signal. During the sampling process, a random sample from the 2D DAS signal is drawn, wherein the 2D DAS signals are spatio-temporal acoustic signals. The drawn samples are forwarded to the neural network unit (110). The drawn samples are fixed-size images (blocks). An event in every image or block is determined by the pixels in it, by defining a minimum percentage criterion for event pixels that helps categorize the nature of the event. For example, a minimum percentage criterion of 50% may be defined for a Human Walking event.
[0040] To identify or classify the nature of an event, a determination of the minimum percentage of pixels in the image is carried out. If the sampled block does not have the minimum percentage of pixels that have an event, the block is sampled at a lower sampling rate than the first sampling rate.
[0041] In an example, for sampling, every selected image should have at least 40 percent of its pixels classified as an event. For short-duration events like digging, the number is reduced to 25 percent. It is possible for a block/image to have multiple event labels - for example, human walking and digging could occur simultaneously. An example is shown in TABLE A below. The sampling strategy ensures that there is no class imbalance in the selection of images.
TABLE A
Parameter            Description
sensor_dimension     40 locations; image width along the sensor dimension
time_dim             520 pulses; image height along the time dimension
min_pixel_per_cent   Block should have more than min_pixel_per_cent of event pixels.
                     Default values are:
                     Human walking: 50 percent
                     Bike: 50 percent
                     Digging: 25 percent
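The TABLE A criterion can be sketched as a simple per-event check (the function name is ours; the thresholds follow the defaults listed above, and the 520 x 40 = 20800 pixel count follows the earlier block example):

```python
# A block is kept for training only if strictly more than
# min_pixel_per_cent of its pixels carry the event label.
MIN_PIXEL_PER_CENT = {"human_walking": 50, "bike": 50, "digging": 25}

def block_selected(event_pixels, total_pixels, event_type):
    """Return True if the block exceeds the per-event pixel threshold."""
    percent = 100.0 * event_pixels / total_pixels
    return percent > MIN_PIXEL_PER_CENT[event_type]

# A 520 x 40 block has 20800 pixels.
print(block_selected(11000, 20800, "human_walking"))  # ~52.9% > 50
print(block_selected(6000, 20800, "digging"))         # ~28.8% > 25
print(block_selected(6000, 20800, "bike"))            # ~28.8%, below 50
```

The lower threshold for digging reflects its short duration: the event fills fewer time-dimension pixels of a block than a sustained event like walking.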
[0042] Further, due to the time series nature of the 2D DAS signal and the physical layout of the FOS, a rolling window technique is applied in the sampling methodology. The rolling window means applying the sampling method repeatedly to sub-data sets or sub-series in a full data set or series. Consecutive blocks are slightly overlapped along both dimensions of the image data, that is, the sensor dimension and the time dimension. This ensures that "corners" of images are preserved, avoiding the misclassification that would otherwise arise, since blocks containing relatively whole sections of the image are included.
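The rolling window can be sketched as follows; the specific overlap amounts are our own illustrative assumptions, since the disclosure only states that the overlap is small:

```python
# Overlapping block origins: consecutive blocks share a small strip in
# both the time and sensor dimensions, so pixels near a block's edge
# also appear away from the edge of a neighbouring block.
H, W = 520, 40                 # block height (pulses) and width (locations)
overlap_t, overlap_s = 40, 8   # assumed overlaps in time / sensor pixels

def window_origins(n_time, n_sensor):
    """Top-left corners of overlapping blocks covering an n_time x n_sensor signal."""
    step_t, step_s = H - overlap_t, W - overlap_s
    return [(t, s)
            for t in range(0, n_time - H + 1, step_t)
            for s in range(0, n_sensor - W + 1, step_s)]

origins = window_origins(1000, 104)
print(origins)  # 2 time steps x 3 sensor steps = 6 overlapping blocks
```

With strides of H - overlap and W - overlap rather than H and W, no event straddling a block boundary is seen only at the corners of the blocks.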
[0043] The neural network unit (110) outputs a classification or semantic segmentation of the images/blocks into event types based on the sampled output received from the sampling unit (108). The neural network unit (110) may implement, but is not limited to, a feed forward neural network, a radial basis function neural network, a Kohonen self-organizing neural network, a recurrent neural network, a convolutional neural network, a fully convolutional neural network, or a modular neural network. The neural network unit (110) may include an anomaly detection unit and a segmentation unit. The neural network unit (110) acts as a classifier. The neural network unit (110) implements a convolutional neural network model that receives the drawn samples from the sampling unit (108).
[0044] The neural network unit (110) may partition the images into multiple segments (regions). These segments are sets of pixels of arbitrary shape where all the pixels have the same assigned label. The neural network unit (110) labels the image pixels in the image pattern. In other words, the neural network unit (110) classifies the image at the pixel level. For example, the classification (class labels) comprises categories of intrusion like person, bike, and digging, and a special class of "no event" representing "no activity". The neural network unit (110) finds exact boundaries of objects, as against usual object detection that localizes objects with bounding boxes (rectangles).
[0045] The classified output of the neural network unit (110) is forwarded to the conditional random field unit (112). The conditional random field (CRF) unit (112) utilizes a dense CRF (Conditional Random Field) mechanism to model the classified output of the neural network unit (110). The conditional random field (CRF) may be used for drawing inference (prediction) after image segmentation by a convolutional neural network/fully convolutional neural network (CNN/FCN).
[0046] For example, if all class label probabilities are low, it indicates an unknown event that is not of interest and can be filtered out. In another example, a value close to 0 indicates that the corresponding class is not likely to be present, while a value close to 1 indicates that the corresponding class is likely to be present.
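A minimal sketch of this filtering rule (the 0.5 decision threshold and the function name are our assumptions; the class names follow the next paragraph):

```python
# If every class probability is low, the block is treated as an unknown
# event and filtered out; otherwise the most probable class is reported.
CLASSES = ["PERSON", "BIKE", "DIGGING", "NO_EVENT"]

def interpret(probs, threshold=0.5):
    """Map one block's per-class probabilities to a decision."""
    if max(probs) < threshold:
        return "UNKNOWN (filtered out)"
    return CLASSES[probs.index(max(probs))]

print(interpret([0.1, 0.05, 0.1, 0.2]))    # all low: unknown, filtered out
print(interpret([0.9, 0.02, 0.03, 0.05]))  # PERSON likely present
```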
[0047] Examples of the classes considered are, but are not limited to, PERSON, BIKE, DIGGING (or any other event type) and a special class NO_EVENT (the stationary noise output of the sensor, in the absence of any physical event, in an underground deployment).
[0048] The results of the CRF unit (112) are provided at the FOS user interface (114). The FOS user interface (114) displays an alarm/signal on a map. The FOS user interface (114) is associated with a memory and a processor that respectively store and process the events, alarm history, or the like in order to display them via the FOS user interface (114). The user interface may be a computer, a smart phone, a laptop, a handheld device, or the like.
[0049] FIG. 2 is a flow chart (200) depicting a method for sampling data for the FOS based intrusion detection system. The method includes receiving a two-dimensional signal from the distributed acoustic sensor (100a) at step (202). At step (204), the method includes sampling, by the sampling unit (108), the two-dimensional signal transmitted from the distributed acoustic sensor (100a) into a block at a first sampling rate.
[0050] At (206), the method includes defining, by the sampling unit (108), a minimum percentage of pixels in the block that has an event.
[0051] At (208), the method includes determining, by the sampling unit (108), whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels, at (210).
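Steps (204) through (210) can be sketched as a simple two-rate selection loop. The concrete keep-probabilities, the 40 percent threshold, and all names below are our own illustrative assumptions; the disclosure specifies only that event-poor blocks are sampled at a lower rate than the first rate.

```python
import random

# Blocks meeting the minimum event-pixel percentage are kept at the
# first sampling rate; blocks that do not are kept at a lower rate,
# which counteracts the "no event" class imbalance.
FIRST_RATE, LOWER_RATE = 1.0, 0.05   # assumed keep-probabilities
MIN_PIXEL_PERCENT = 40.0             # example threshold from paragraph [0041]

def sample_blocks(blocks, rng):
    """blocks: (event_pixel_percent, block_id) pairs; returns kept block ids."""
    kept = []
    for event_percent, block_id in blocks:
        rate = FIRST_RATE if event_percent >= MIN_PIXEL_PERCENT else LOWER_RATE
        if rng.random() < rate:
            kept.append(block_id)
    return kept

blocks = [(55.0, "b1"), (2.0, "b2"), (43.0, "b3"), (0.0, "b4")]
print(sample_blocks(blocks, random.Random(0)))  # event-rich b1, b3 always kept
```

Event-rich blocks survive every draw, while near-empty blocks are kept only occasionally, yielding a more balanced training set.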
[0052] The detailed functionality of all the blocks/units is already explained in conjunction with FIG. 1.
[0053] The various actions, acts, blocks, steps, or the like in the flow chart (200) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0054] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[0055] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
[0056] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[0057] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[0058] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[0059] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[0060] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[0061] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[0062] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list.
[0063] While the detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
CLAIMS
We claim:
1. A fiber optic sensing (FOS) based intrusion detection system (100) comprising:
a sampling unit (108) configured to:
sample a two-dimensional signal transmitted from a distributed acoustic sensor (100a) into a block at a first sampling rate;
define a minimum percentage of pixels in the block that have an event;
determine whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels having the event.
2. The system (100) as claimed in claim 1, wherein the block has a sensor dimension and a time dimension.
3. The system (100) as claimed in claim 2, wherein a rolling window is used to provide a small overlap in the time and sensor dimensions for consecutive blocks.
4. The system (100) as claimed in claim 1, wherein the events are class labels defined to detect an intrusion event.
5. The system (100) as claimed in claim 1, further comprising:
a neural network unit (110) configured to implement a convolutional neural network and to classify the block into event types based on a sampled output received from the sampling unit (108); and
a conditional random field unit (112) configured to implement a conditional random field mechanism to model the classified block and to provide structured predictions.
6. A sampling method to be implemented in a fiber optic sensing based intrusion detection system (100), the method comprising:
sampling, by a sampling unit (108), a two-dimensional signal transmitted from a distributed acoustic sensor (100a) into a block at a first sampling rate;
defining, by the sampling unit (108), a minimum percentage of pixels in the block that have an event;
determining, by the sampling unit (108), whether the block has the minimum percentage of pixels having the event, wherein the block is sampled at a lower sampling rate than the first sampling rate when the sampled block does not have the minimum percentage of pixels having the event.
7. The method as claimed in claim 6, wherein the block has a sensor dimension and a time dimension.
8. The method as claimed in claim 7, wherein a rolling window is used to provide a small overlap in the time and sensor dimensions for consecutive blocks.
9. The method as claimed in claim 6, wherein the events are class labels defined to detect an intrusion event.
10. The method as claimed in claim 6, further comprising:
implementing, by a neural network unit (110), a convolutional neural network to classify the block into event types based on a sampled output received from the sampling unit (108); and
implementing, by a conditional random field unit (112), a conditional random field mechanism to model the classified block and to provide structured predictions.
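For illustration only, the adaptive sampling recited in claims 1 and 6 can be sketched in Python. This is a minimal, non-limiting sketch under assumed parameters: the 2-D DAS signal is a NumPy array with a sensor dimension and a time dimension, an `event_mask` marks pixels having an event, the block size, overlap, and event-percentage threshold are hypothetical values, and the lower sampling rate is realized here as simple stride-based subsampling. None of these specific names or values appear in the claims.

```python
import numpy as np

def adaptive_block_sampling(signal, event_mask, block_shape=(64, 64),
                            overlap=8, min_event_pct=0.05,
                            first_stride=1, reduced_stride=4):
    """Illustrative sketch: split a 2-D signal (sensor x time) into
    blocks using a rolling window with a small overlap; blocks whose
    fraction of event pixels falls below min_event_pct are subsampled
    more coarsely (reduced_stride) than event-rich blocks (first_stride)."""
    bs, bt = block_shape
    step_s, step_t = bs - overlap, bt - overlap  # rolling window step
    blocks = []
    for i in range(0, signal.shape[0] - bs + 1, step_s):
        for j in range(0, signal.shape[1] - bt + 1, step_t):
            block = signal[i:i + bs, j:j + bt]
            mask = event_mask[i:i + bs, j:j + bt]
            event_pct = mask.mean()  # fraction of pixels having an event
            stride = first_stride if event_pct >= min_event_pct else reduced_stride
            blocks.append(block[::stride, ::stride])  # sample at chosen rate
    return blocks
```

In this sketch, a quiet block with no event pixels is reduced by a factor of `reduced_stride` in each dimension, while a block meeting the threshold is kept at the first sampling rate; the sampled blocks could then be passed to a classifier such as the convolutional neural network of claims 5 and 10.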
| # | Name | Date |
|---|---|---|
| 1 | 202011045046-STATEMENT OF UNDERTAKING (FORM 3) [16-10-2020(online)].pdf | 2020-10-16 |
| 2 | 202011045046-POWER OF AUTHORITY [16-10-2020(online)].pdf | 2020-10-16 |
| 3 | 202011045046-FORM 1 [16-10-2020(online)].pdf | 2020-10-16 |
| 4 | 202011045046-DRAWINGS [16-10-2020(online)].pdf | 2020-10-16 |
| 5 | 202011045046-DECLARATION OF INVENTORSHIP (FORM 5) [16-10-2020(online)].pdf | 2020-10-16 |
| 6 | 202011045046-COMPLETE SPECIFICATION [16-10-2020(online)].pdf | 2020-10-16 |
| 7 | 202011045046-POA [04-10-2024(online)].pdf | 2024-10-04 |
| 8 | 202011045046-FORM 18 [04-10-2024(online)].pdf | 2024-10-04 |
| 9 | 202011045046-FORM 13 [04-10-2024(online)].pdf | 2024-10-04 |