Abstract: The present disclosure relates to a system and a method for deploying a deep learning model in a fiber optic sensing (FOS) enabled intrusion detection system (100). The system includes a labelling means (300) to label image pixels obtained from image patterns received from a distributed acoustic sensing sensor (304). Further, the system includes a training means (400) to train a convolution neural network using a minibatch stochastic gradient descent training procedure. The training means (400) trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed. The system further includes a testing means (500) to test performance of the intrusion detection system (100) against a holdout set of training data. Furthermore, the system includes an inference means (600) for a structured prediction of occurrence of an intrusion event from the image pixels.
[0001] The present disclosure relates to a fiber optic sensing technology, and more specifically relates to a system and a method for deploying a deep learning model in fiber optic sensing based intrusion detection.
BACKGROUND OF INVENTION
[0002] Perimeter Intrusion Detection System (PIDS) refers to a suite of products that address physical security requirements through various solutions combining hardware and software technologies. PIDS solutions are used in industrial applications such as defense installations fence/perimeter security, border security, early detection of digging close to telecom cable installations (data centers), industrial facilities fence/perimeter security, power plants, and manufacturing plants. Various technologies are available for a full-fledged Perimeter Intrusion Detection System (PIDS), such as Y-Beams, motion cameras, infrared sensors, optical cameras, and fiber optic sensing (FOS).
[0003] Of late, fiber-optic distributed acoustic sensing (DAS) based intrusion detection systems are being used that provide efficient solutions for several infrastructures such as border, airport, pipeline security, etc. In general, the DAS systems detect and classify acoustic vibrations using standard telecommunication fibers buried under the ground or deployed over a fence. These acoustic vibrations help identify events that may pose a threat to the area where the DAS system has been deployed.
[0004] Sometimes, activities of interest that are being captured by the DAS system may not pose any type of threat depending upon time and location of the activity, such as any ground digging activity related to an agricultural event or any other environmental factors like rain, wind, random noise.
[0005] The existing solutions suffer from several shortcomings: a high false alarm rate due to nuisance events and environmental factors like rain, wind, random noise, etc.; high response time, as response times for detection and classification of intrusion events can be lengthy due to high data volumes; low adaptability to various types of events; high cost for new deployments; and signal fading. Further, for novel event types, environments, soil types, and applications, the system needs to be re-engineered, which involves experts' art and craft, and that leads to high turnaround time. Various analytical models have been developed and utilized to address these shortcomings; however, the analytical models are frozen once trained and deployed.
[0006] To be viable, an intrusion detection solution should have a low false positive rate while also returning alerts on intrusion within seconds (<2 seconds is a common requirement). These requirements strain system resources in opposing directions: in order to have a low false positive rate, more suspected events need to be analyzed, which is computationally expensive and increases response time.
[0007] In addition to this, intrusions are rare events with a low occurrence rate - meaning most of the signal from the DAS will be background noise. An additional complication with the DAS is that it is not possible to ascribe a parametric value to an amplitude difference associated with a particular intrusion event - that is, it is not possible to quantify exactly how much variation in amplitude of the DAS signal will be caused by a “Human Walking” vs. a “Digging” intrusion event.
[0008] Recently, there has been a huge surge in the use of machine learning and deep neural network algorithms for the purposes of event detection in various applications like data network security and credit card fraud. For example, for the purpose of parametric feature classification, machine learning, support vector machines, and the like are being used to provide better classification of events and better event detection. Further, emphasis is given to reducing or removing false alarms completely.
[0009] The PIDS systems are deployed in remote locations and the cost of false alarms is high. The classic machine learning models implemented for intrusion detection systems extract raw data, followed by feature extraction, event probability evaluation, and finally classification of events. With such systems, there is a typical problem of rare event detection that often results in a high false alarm rate. Also, the type of events that would be encountered in different deployments of a PIDS can vary vastly - industrial perimeter applications can see different events from border security deployments. In addition, environmental factors like soil type can vary even within the same deployment, and they impact the signature of the event in the DAS signal. The existing machine learning models are application specific and fail to provide a framework that can be extended to various event types as per the need of an application. Thus, there exists a need for a dynamic platform that can address the aforesaid gaps.
[0010] The present invention seeks to ameliorate one or more of the aforementioned disadvantages or at least provide a useful alternative.
OBJECT OF INVENTION
[0011] The principal object of the present disclosure is to provide a system and a method for deploying a deep learning model in a fiber optic sensing (FOS) based intrusion detection environment.
[0012] Another object of the present disclosure is to provide a framework that can be extended to various event types as per need of an application.
[0013] Another object of the present disclosure is to provide an adaptive intrusion detection system using the FOS.
[0014] Another object of the present disclosure is to provide a mechanism to continuously learn from deployments and improve application performance.
SUMMARY
[0015] The present disclosure provides a system for deploying a deep learning model in a fiber optic sensing (FOS) enabled intrusion detection system. The system comprises a labelling means to label image pixels obtained from image patterns, wherein the image patterns are a patch of a two-dimensional signal obtained from a distributed acoustic sensing (DAS) sensor. The system further comprises a training means to train a convolution neural network using a minibatch stochastic gradient descent training procedure, wherein the training means trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed. Further, the system comprises a testing means to test performance of the intrusion detection system against a holdout set of training data. Furthermore, the system includes an inference means to make a structured prediction of occurrence of an intrusion event from the image pixels.
[0016] The convolution neural network is a fully convolution neural network and the pre-defined time can be 150 minutes.
[0017] The labelling means is configured to sense and capture, by the DAS sensor and an image capturing unit, an on-field event and a real-time feed, wherein the on-field event and the real-time feed are an input image. Further, the labelling means is configured to receive and store, in a storage, the input image to prepare for labelling. Furthermore, a labelling unit is configured to receive the input image prepared for labelling and output labelled regions that are further forwarded to the storage and accessed by the intrusion detection system.
[0018] The training means runs an iterative process for an adaptable intrusion detection and is configured to store, by a dataset unit, one or more sampled blocks received from the DAS sensor and maintain an index for each of the one or more sampled blocks, wherein the one or more sampled blocks are training data. A sampler selects one or more sampling blocks as per a predefined sampling criteria, and a data loader unit implements the minibatch stochastic gradient descent to batch the one or more sampling blocks received from the sampler. Further, a modeler performs an anomaly detection and a semantic segmentation. Furthermore, an evaluator determines a confusion matrix showing accuracy of model predictions against a ground truth in the training data, plots learning curves, and passes the confusion matrix and the learning curves to the dataset unit for correction.
[0019] The predefined sampling criteria represents the one or more sampling blocks having more than minimum event pixels.
[0020] The inference means is configured to identify, by an anomaly detection unit, one or more blocks to be sent to a semantic segmentation unit, wherein the one or more blocks include image patterns; segment and label, by a neural network, the image patterns; output, by a conditional random field unit, a structured prediction of the labelled image patterns; and display, by a user interface, the structured prediction of the labelled image patterns.
[0021] The conditional random field unit implements a dense conditional random field and the neural network is the convolution neural network.
[0022] The inference means includes a data ingestion unit that forwards the one or more blocks to the anomaly detection unit and the neural network unit.
[0023] The testing means comprises an evaluation unit configured to receive a ground truth and predictions and utilize precision-recall metrics to identify the fraction of relevant instances among retrieved instances (precision) and the fraction of the total amount of relevant instances actually retrieved (recall).
[0024] Further, the present disclosure provides a method for deploying a deep learning model in a fiber optic sensing (FOS) enabled intrusion detection system. The method includes labelling, by a labelling means, image pixels obtained from image patterns, wherein the image patterns are a patch of a two-dimensional signal obtained from a distributed acoustic sensing (DAS) sensor. The method further includes training, by a training means, a convolution neural network using a minibatch stochastic gradient descent training procedure, wherein the training means trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed. Further, the method includes performing a test, by a testing means, of the intrusion detection system against a holdout set of training data. Furthermore, the method includes predicting, by an inference means, occurrence of an intrusion event from the image pixels.
[0025] The labelling means includes sensing and capturing, by the DAS sensor and an image capturing unit, an on-field event and a real-time feed, wherein the on-field event and the real-time feed are an input image. Further, the labelling means includes receiving and storing, by a storage, the input image to prepare for labelling. Furthermore, the labelling means includes receiving, by a labelling unit, the input image prepared for labelling and outputting labelled regions that are further forwarded to the storage and accessed by the intrusion detection system.
[0026] The training means runs an iterative process for an adaptable intrusion detection and includes storing, by a dataset unit, one or more sampled blocks received from the DAS sensor and maintaining an index for each of the one or more sampled blocks, wherein the one or more sampled blocks are training data. The training means further includes selecting, by a sampler, one or more sampling blocks as per a predefined sampling criteria, and implementing, by a data loader unit, the minibatch stochastic gradient descent to batch the one or more sampling blocks received from the sampler. Further, the training means includes performing, by a modeler, an anomaly detection and a semantic segmentation. Additionally, the training means includes determining, by an evaluator, a confusion matrix showing accuracy of model predictions against a ground truth in the training data, plotting learning curves, and passing on, by the evaluator, the confusion matrix and the learning curves to the dataset unit for correction.
[0027] The inference means includes identifying, by an anomaly detection unit, one or more blocks to be sent to a semantic segmentation unit, wherein the one or more blocks include image patterns; segmenting and labelling, by a neural network, the image patterns; outputting, by a conditional random field unit, a structured prediction of the labelled image patterns; and displaying, by a user interface, the structured prediction of the labelled image patterns.
[0028] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[0029] The method and system are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[0030] FIG. 1 illustrates a block diagram of a fiber optic sensing (FOS) enabled intrusion detection system including a deep learning model;
[0031] FIG. 2 illustrates a block diagram of the deep learning model;
[0032] FIG. 3 is a block diagram depicting a labelling means in the FOS enabled intrusion detection system;
[0033] FIG. 4 is a block diagram depicting a training means in the FOS enabled intrusion detection system;
[0034] FIG. 5 is a block diagram depicting a testing means of the FOS enabled intrusion detection system;
[0035] FIG. 6 is a block diagram depicting an inference means of the FOS enabled intrusion detection system to predict intrusion;
[0036] FIG. 7 illustrates a flow chart for deploying the deep learning model in fiber optic sensing (FOS) enabled intrusion detection system; and
[0037] FIG. 8 is an example setup of the intrusion detection system.
DETAILED DESCRIPTION OF INVENTION
[0038] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. However, it will be obvious to a person skilled in the art that the embodiments of the invention may be practiced with or without these specific details. In other instances, well known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention.
[0039] Furthermore, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art without departing from the scope of the invention.
[0040] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[0041] Accordingly, embodiments herein disclose a method for intrusion detection using a distributed acoustic sensor (DAS) on an optical fibre cable. The method includes obtaining a two-dimensional (2D) distributed acoustic sensing signal and interpreting a patch of the 2D signal as an image pattern. Further, the method includes identifying and labelling such image patterns using a two-step method, wherein an intrusion event is identified using anomaly detection by implementing a machine learning model to detect anomalies in the image patterns. Furthermore, the method includes implementing semantic segmentation on the image patterns where the anomaly is detected, wherein a fully convolutional neural network extracts pixels from the image patterns and labels the pixels obtained from the image patterns. The convolutional neural network provides structured prediction of intrusion event and type from the labelled pixels.
[0042] The present disclosure relates to a system and a method for deploying a deep learning model in a fiber optic sensing (FOS) enabled intrusion detection system. The system includes a labelling means to label image pixels obtained from image patterns received from a distributed acoustic sensing sensor. Further, the system includes a training means to train a convolution neural network using a minibatch stochastic gradient descent training procedure. The training means trains the convolution neural network for a pre-defined time with a labelled data from a new test bed and a labelled data from a source test bed. The system further includes a testing means to test performance of the intrusion detection system against a holdout set of training data. Furthermore, the system includes an inference means for a structured prediction of occurrence of an intrusion event from the image pixels.
[0043] The terms “image”, “block”, “patch”, “box”, “array”, “tensor”, and “matrix” may interchangeably be used throughout the present disclosure. The terms “means” and “system” may interchangeably be used throughout the present disclosure.
[0044] Unlike the conventional methods and systems, the proposed invention provides a framework that can be extended to various event types as per the need of an application and provides an adaptive intrusion detection system using fiber optic sensing (FOS). The proposed invention uses a deep learning model that enables the FOS to automatically learn and improve from experience without being explicitly programmed.
[0045] The learning method uses networks capable of learning in a supervised fashion from data that is labelled, as well as in an unsupervised fashion from data that is unstructured or unlabelled. The deep learning method employs multiple layers of neural networks that enable the system of the present disclosure to teach itself through inference and pattern recognition, rather than through development of procedural code or explicitly coded software algorithms.
[0046] The neural networks enhance their learning capability by varying the uniquely weighted paths based on their received input. The successive layers within the neural network incorporate a learning capability by modifying their weighted coefficients based on their received input patterns. The training of the neural networks is very similar to teaching a human brain to recognize an object. The neural network is repetitively trained from a base data set, where results from the output layer are successively compared to the correct classification of the image.
[0047] In an alternate representation, any machine learning paradigm instead of neural networks can be used in the training and learning process.
[0048] Machine learning algorithms are a class of algorithms that have proven to be very successful in identifying signals from noisy environments. They can be retrained on adjacent datasets with relative ease. The key idea of the current solution to the problem of FOS based intrusion detection is to treat intrusion events as image patterns. Further key ideas are to provide a framework that can be extended to various event types as per the need of an application, to provide an adaptive intrusion detection system using the FOS, to provide a mechanism to continuously learn from new deployments and improve application performance, and to provide a mechanism to detect and classify intrusion events with a low rate of false positives.
[0049] Referring now to the drawings, and more particularly to FIGS. 1 through 8, there are shown preferred embodiments.
[0050] FIG. 1 illustrates a block diagram of a fiber optic sensing (FOS) enabled intrusion detection system including a deep learning model. The fiber optic sensing (FOS) enabled intrusion detection system (100) comprises a sensor cable (102), a sensor interrogator (104), a workstation and digitizer (106), a deep learning model (108) and a user interface (110).
[0051] The fiber optic sensing (FOS) enabled intrusion detection system (100) implements a deep learning model. The fiber optic sensing (FOS) enabled intrusion detection system (100) may include a two-tier machine learning model that detects events from the sensor cable (102) and helps in reducing the false alarm rate due to nuisance events and environmental factors such as rain, wind, and random noise in a perimeter intrusion detection system (PIDS). The deep learning model proposed in the present disclosure is adaptable to a new deployment site and is advantageously trained for a small amount of time with labelled data from a new test bed in addition to labelled data from a source test bed. The small amount of time can be 150 minutes. The small amount of time can be pre-defined by a user.
[0052] In general, the sensor cable (102) may be a fence sensor cable, buried sensor cable, or the like. The sensor cable (102) is a cable designed with a specific configuration to measure pressure, temperature, strain, vibration, or the like for optimal performance of signal or data transmission and to locate a disturbance along a length of the sensor cable. The sensor cable (102) may be a combination of a fiber optic cable and one or more sensors. The sensor cable (102) provides intrusion data. The sensed data from the sensor cable (102) is fed to the sensor interrogator (104) communicatively coupled to it. The sensor cable (102) acts as both a sensing element and a transmission medium to and from the sensor interrogator (104).
[0053] The sensor interrogator (104) measures a large sensing network by acquiring data from the sensor cable (102) simultaneously and at different sampling rates. The sensor interrogator (104) launches optical pulses into one or more optical fibers of the sensor cable (102). These optical pulses propagate along a core of the one or more optical fibers and interact with the core material. In this process, a small fraction of the optical power is scattered, propagating back towards the interrogator. The sensor interrogator (104) then analyses the backscattered signal as a function of time and, depending on configuration, further distinguishes temperature, pressure, strain or acoustic signal as a function of distance along the one or more optical fibers.
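By way of a non-limiting illustration of this time-to-distance mapping (a standard optical time-domain relationship rather than a feature specific to the present disclosure), the position $z$ along the fiber corresponding to a backscatter round-trip delay $t$ is

$$z = \frac{c\,t}{2 n_g}$$

where $c$ is the speed of light in vacuum and $n_g$ is the group refractive index of the fiber core; the factor of two accounts for the pulse travelling out to position $z$ and the backscatter returning over the same length.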
[0054] The sensor cable (102) may comprise one or more types of sensor elements, whose individual return signals are distinguished through the use of different optical wavelength bands, similar to radio channels.
[0055] In short, the sensor interrogator (104) measures wavelength associated with the light reflected by the sensor cable (102) and sends it to the workstation and digitizer (106). The workstation and digitizer (106) receives analog information from the sensor interrogator (104) and records it digitally. The analog information may be in the form of sound or light. Usually, the information is stored in a file on a computing device.
[0056] Alternatively, an input to the workstation and digitizer (106) may be audio signals, video signals, image patterns, or the like. In the present disclosure, the input is a two-dimensional distributed acoustic sensor (2D DAS) signal consisting of spatio-temporal acoustic signals; in other words, the 2D signal is the output of the Distributed Acoustic Sensor (DAS). One of the dimensions is a finite spatial dimension defined by the length of the fibre, with a discrete value at approximately every 1 meter. The other dimension is an unbounded temporal dimension: the train of values at 1 meter intervals is obtained approximately every 2.5 ms. The values of 1 meter and 2.5 ms depend on the settings of the DAS and have been held constant to ensure consistency of measurements.
[0057] The 2D distributed acoustic sensor signal is converted into image patterns. For example, the DAS signal is segmented into image blocks. One dimension of the 2D DAS signal is the sensor dimension; in an example, 40 locations on the sensor cable (102) form the image width along the sensor dimension. The second dimension of the 2D DAS signal, for example 520 pulses, is taken as the image height along the time dimension (520 pulses at 2.5 ms intervals correspond to a block duration of approximately 1.3 seconds).
[0058] The workstation and digitizer (106) acts as a data repository and data transformation unit. The 2D signals obtained from the sensor interrogator (104) are loaded into the workstation and digitizer (106) and is transformed into required format. For example, the 2D signal is transformed into an image pattern having a sensor dimension and time dimension.
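In an example, this segmentation into image patterns may be sketched as follows. The fragment below is a minimal Python/NumPy illustration, assuming the DAS output is buffered as a 2D array with time (pulses) along the first axis and sensor locations along the second; the function name, buffer sizes and block sizes are assumptions for illustration and not the deployed implementation:

    import numpy as np

    BLOCK_HEIGHT = 520  # pulses along the time dimension (~1.3 s at 2.5 ms/pulse)
    BLOCK_WIDTH = 40    # sensor locations along the fiber (~1 m spacing)

    def segment_das_signal(das_2d: np.ndarray) -> np.ndarray:
        """Cut a 2D DAS buffer (time x sensors) into non-overlapping image blocks."""
        n_pulses, n_sensors = das_2d.shape
        rows = n_pulses // BLOCK_HEIGHT
        cols = n_sensors // BLOCK_WIDTH
        trimmed = das_2d[:rows * BLOCK_HEIGHT, :cols * BLOCK_WIDTH]
        blocks = trimmed.reshape(rows, BLOCK_HEIGHT, cols, BLOCK_WIDTH).swapaxes(1, 2)
        return blocks.reshape(-1, BLOCK_HEIGHT, BLOCK_WIDTH)

    # Example: a synthetic buffer of 4000 pulses over 400 sensor locations.
    signal = np.random.randn(4000, 400)
    patterns = segment_das_signal(signal)  # -> 70 blocks of shape (520, 40)

Each resulting block is the "image pattern" on which the subsequent labelling, training and inference stages operate.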
[0059] The digitized information from the workstation and digitizer (106) is fed to the deep learning model (108). The deep learning model (108) is trained to detect an event, classify an event, or the like. In an embodiment, the learning can be supervised or unsupervised. In an embodiment, the deep learning model (108) may incorporate neural networks, such as, but not limited to, a feedforward neural network, radial basis function neural network, Kohonen self-organizing neural network, recurrent neural network, convolutional neural network, or modular neural network, in the training and learning process.
[0060] During supervised learning, the model learns on a labelled dataset and maps an input to an output based on the labelled dataset. In contrast, during unsupervised learning, the model learns on an unlabelled dataset and extracts features and patterns to draw an inference or output.
[0061] The unsupervised machine learning model may be, but is not limited to, density-based techniques, one-class support vector machines, Bayesian networks, hidden Markov models (HMMs), cluster analysis-based outlier detection, fuzzy logic-based outlier detection, ensemble techniques, or the like.
[0062] The deep learning model (108) may act as a standalone module to generate results related to any type of event and intrusion upon learning the event type. The deep learning model (108) categorizes the type of events and intrusions and provides the results via the user interface (110). The user interface (110) displays an alarm/signal on a map. The user interface (110) is associated with a memory and a processor that store the events, alarm history, or the like and process the same in order to display them via the user interface (110). The user interface may be a computer, a smart phone, or the like.
[0063] FIG. 2 illustrates a block diagram of the deep learning model. The deep learning model (108) includes a labelling means (300), a training means (400), a testing means (500), and an inference means (600). The function and structure of the labelling means (300), the training means (400), the testing means (500), and the inference means (600) is explained in conjunction with FIGS. 3 through 6.
[0064] FIG. 3 is a block diagram depicting the labelling means in the FOS enabled intrusion detection system. The labelling means (300) comprises a distributed acoustic sensing (DAS) sensor (304), an image capturing unit (306), a server (308), a storage (310) and a labelling unit (312). The DAS sensor (304) senses on-field events (302). The on-field events may be, but not limited to, human walking, digging, or moving vehicle. The server (308) receives the sensed on-field events from the DAS sensor (304) and stores therein. Simultaneously, the image capturing unit (306) captures data such as images, videos, or any multimedia information of an area where it has been installed. The image capturing unit (306) may be a camera, a video recorder, or the like. The captured data from the image capturing unit (306) is forwarded to the server (308).
[0065] In other words, the server (308) receives raw data from the DAS sensor (304) and the image capturing unit (306). The raw data may be an image, video footage, etc., in a real-time or non-real-time manner. The server (308) stores the raw data in a suitable format, prepares the raw data for labelling and serves it to the storage (310). The storage (310) forwards the prepared raw data to the labelling unit (312).
[0066] The server (308) may be a computer or any computing device running a server program. The storage (310) may be a cloud server or a memory such as an on-premise storage. The storage (310) may include one or more computer-readable storage media. The storage (310) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard disc, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the storage (310) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the storage (310) is non-movable.
[0067] Alternatively, the server (308) and the storage (310) may act as a single entity that stores the raw data in a suitable format, prepares the raw data for labelling and serves it to the labelling unit (312).
[0068] The labelling unit (312) receives the raw data prepared for labelling. In an embodiment, the raw data is an image or image patterns. The labelling unit (312) extracts image pixels from the obtained image patterns and labels every image pixel. The image patterns are a patch of two-dimensional (2D) signals obtained from the DAS sensor (304). The 2D signals are spatio-temporal acoustic signals. The labelling unit (312) converts the captured pixel co-ordinates into sensor and time co-ordinates relevant to the DAS domain.
[0069] In an embodiment, if the 2D signals are represented as a 2D image, events are regions in this space. During labelling process, an event “region” is marked as polygons and rectangles. All points outside the region are considered as no event.
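In an example, the conversion of marked regions into a per-pixel ground truth may be sketched as follows. The Python fragment below is illustrative only; it assumes each labelled region arrives as a class identifier with polygon vertex lists in pixel co-ordinates, and it uses scikit-image's polygon rasterizer, an assumed library choice rather than part of the disclosure:

    import numpy as np
    from skimage.draw import polygon

    NO_EVENT = 0  # label for all points outside marked regions

    def regions_to_mask(shape, regions):
        """Rasterize labelled event regions into a per-pixel ground-truth mask.

        regions: iterable of (class_id, row_vertices, col_vertices) tuples.
        """
        mask = np.full(shape, NO_EVENT, dtype=np.int64)
        for class_id, rows, cols in regions:
            rr, cc = polygon(rows, cols, shape=shape)
            mask[rr, cc] = class_id
        return mask

    # A rectangular "HUMAN WALKING" region (class 1) on a 520x40 block.
    mask = regions_to_mask((520, 40), [(1, [100, 100, 400, 400], [5, 30, 30, 5])])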
[0070] The raw data, that is, the input data, is a continuous stream, i.e. a collection of images. For the purpose of training the semantic segmentation unit (224), class labels ("BIKE", "UNKNOWN", "HUMAN WALKING" or the like) are required for portions of each image in the training set of data. These class labels may be created by manual inspection by experts, correlating the DAS signal with CCTV footage taken from the test suite at which the training data are collected. The labelling unit (312) forwards the labelled regions back to the storage (310). These labels are referred to as "the ground truth".
[0071] In an embodiment, the labelling means (300) may use a convolutional neural network (CNN) based technique to extract the features corresponding to the image patterns and a multilayer perceptron technique to consolidate image features to classify events. The multilayer perceptron technique is trained and validated against the manually labelled events. Further, the labelling means (300) acts as a response center to classify the events by consolidating information collected from the DAS sensor (304) and the image capturing unit (306).
[0072] FIG. 4 is a block diagram depicting the training means in the FOS enabled intrusion detection system. The training means (400) includes a dataset unit (402), a sampler (404), a data loader unit (406), a modeler (408), a multilabel softmargin loss unit (410), an optimizer (412), a learning rate scheduler (414) and an evaluator (416). The training means (400) is a convolutional neural network (CNN) based system. Alternatively, the training means (400) may implement a feedforward neural network, radial basis function neural network, Kohonen self-organizing neural network, recurrent neural network, convolutional neural network, fully convolution neural network, modular neural network, etc.
[0073] The dataset unit (402) consists of FOS data, which is a training set (data) and relates to the on-field events, in the form of the image patterns for training the two-tier machine learning model. The image patterns are blocks of size 520x40. The image blocks may be of other suitable sizes. The dataset unit (402) maintains an index for each image block. In other words, the dataset unit (402) stores one or more sampled blocks received from the DAS sensor (304) and maintains an index for each of the one or more sampled blocks. The terms "image block", "block" and "one or more sampled blocks" may interchangeably be used. The sampler (404) selects one or more sampling blocks as per a pre-defined sampling criteria from the dataset unit (402). The sampling criteria is related to event pixels. In an example, the blocks having more than 50 percent event pixels are selected. In an example, event pixels are user-defined. The event pixels are the "region" marked as polygons and rectangles in the labelling procedure. The sampled blocks, i.e. the training set, are forwarded to the data loader unit (406).
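In an example, the sampling criterion may be sketched as a simple filter over the indexed blocks. The Python fragment below is illustrative; the 50 percent threshold follows the example above, and the identifiers are assumptions rather than the deployed implementation:

    import numpy as np

    NO_EVENT = 0
    MIN_EVENT_FRACTION = 0.5  # select blocks with more than 50 percent event pixels

    def sample_block_indices(masks):
        """Return indices of blocks whose ground-truth masks exceed the
        minimum fraction of event (non-NO_EVENT) pixels."""
        return [idx for idx, mask in enumerate(masks)
                if np.mean(mask != NO_EVENT) > MIN_EVENT_FRACTION]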
[0074] The data loader unit (406) implements the minibatch stochastic gradient descent training procedure on the one or more sampling blocks, i.e. the training set and validation set, received from the sampler (404). During this process, the training set is split into smaller sets. The smaller sets may be called batches, blocks, or minibatches. Afterwards, gradient descent is applied on each batch, block, or minibatch one after the other. In an embodiment, the minibatches may include image blocks.
[0075] The minibatches of image blocks are the sampled blocks and their segmentation maps from the set of all possible image blocks of size 520x40. The image blocks may be of other suitable sizes.
[0076] The modeler (408) and the multilabel softmargin loss unit (410) utilize the backpropagation mechanism on the data received from the data loader unit (406). The backpropagation mechanism trains the CNN using the chain rule, performing a backward pass while adjusting the modeler's parameters after each forward pass through the network. During the forward pass, a mapping function based on the current parameter values is determined. In the backward pass, gradients w.r.t. layer inputs and parameters are determined by the modeler (408). Further, the multilabel softmargin loss unit (410) determines a cross entropy loss from the modeler's output and target label values during the forward pass. In the backward pass, a gradient of the loss w.r.t. the modeler's output is determined.
[0077] The optimizer (412) updates the model parameters based on the output of the multilabel softmargin loss unit. The parameters are updated using a variant of minibatch stochastic gradient descent which computes an exponentially weighted moving average of the past gradients and squared gradients. As the number of possible image blocks is very large, around one million image blocks are considered as one "epoch" in the training cycle. In an example, as of release 0.4, the training set consists of 79 experiments and the test set consists of 39 experiments distinct from the training set. Each experiment is of 10 minutes duration.
[0078] The learning rate scheduler (414) lowers the learning rate at the end of an epoch if there is no improvement in the past epoch. In other words, at the end of an epoch, if there is no improvement in the validation loss, the learning rate is lowered by a factor of 10. Afterwards, the evaluator (416) determines a confusion matrix and learning curves for monitoring the training procedure. The confusion matrix indicates the performance of a classification model on the test data set for which the true values are known.
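In an example, the interplay of the data loader unit (406), modeler (408), multilabel softmargin loss unit (410), optimizer (412) and learning rate scheduler (414) may be sketched as the PyTorch-style training loop below. PyTorch, the batch size, and the Adam optimizer (one well-known variant that maintains exponentially weighted moving averages of past gradients and squared gradients) are assumptions for illustration, not requirements of the disclosure:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader

    def train(model: nn.Module, train_set, val_set, epochs: int = 10):
        # Optimizer (412): minibatch SGD variant with moving averages of
        # gradients and squared gradients.
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        # Multilabel softmargin loss unit (410).
        criterion = nn.MultiLabelSoftMarginLoss()
        # Learning rate scheduler (414): lower the rate by a factor of 10
        # when the validation loss stops improving.
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode="min", factor=0.1, patience=1)
        train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
        val_loader = DataLoader(val_set, batch_size=32)

        for epoch in range(epochs):
            model.train()
            for blocks, targets in train_loader:    # minibatches of image blocks
                optimizer.zero_grad()
                outputs = model(blocks)             # forward pass
                loss = criterion(outputs, targets)  # loss from output and targets
                loss.backward()                     # backward pass (chain rule)
                optimizer.step()                    # parameter update

            model.eval()
            with torch.no_grad():
                val_loss = sum(criterion(model(b), t).item()
                               for b, t in val_loader) / len(val_loader)
            scheduler.step(val_loss)                # end-of-epoch adjustment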
[0079] The training means (400) continues the training loop or iterates the above process to enhance the functionality and adaptability of the system.
[0080] FIG. 5 is a block diagram depicting the testing means of the FOS enabled intrusion detection system. The testing means (500) includes the distributed acoustic sensing (DAS) sensor (304), the image capturing unit (306), the server (308) and the storage (310). The testing means (500) further includes an event timestamp store (502), an evaluation unit (504), a data ingestion unit (506), a prediction unit (508) and a user interface (510).
[0081] The DAS sensor (304) senses on-field events (302). The on-field events may be, but not limited to, human walking, digging, or moving vehicle. The server (308) receives the sensed on-field events from the DAS sensor (304) and stores therein. Simultaneously, the image capturing unit (306) captures data such as images, videos, or any multimedia information of an area where it has been installed. The image capturing unit (306) may be a camera, a video recorder, or the like. The captured data from the image capturing unit (306) is forwarded to the server (308). On the other hand, the event timestamp store (502) stores the timestamps of the on-field events.
[0082] In other words, the server (308) receives raw data from the DAS sensor (304) and the image capturing unit (306). The raw data may be an image, video footage, etc., in a real-time or non-real-time manner. The server (308) stores the raw data in a suitable format and serves it to the storage (310). The storage (310) forwards the raw data to the data ingestion unit (506).
[0083] The server (308) may be a computer or any computing device running a server program. The storage (310) may be a cloud server or a memory such as an on-premise storage. The storage (310) may include one or more computer-readable storage media. The storage (310) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard disc, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the storage (310) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the storage (310) is non-movable.
[0084] Alternatively, the server (308) and the storage (310) may act as a single entity that stores the raw data in a suitable format and serves it to the data ingestion unit (506).
[0085] The output of the prediction unit (508), which contains the class labels, is forwarded to the user interface (510). The user interface (510) then forwards the predictions to the evaluation unit (504). The user interface (510) may be a smart phone, a computer, a laptop or other suitable computing device. The evaluation unit (504) further receives output from the event timestamp store (502) in the form of "ground truth". Finally, the evaluation unit (504) combines the "ground truth" and "predictions" and utilizes precision-recall metrics to identify the fraction of relevant instances among the retrieved instances (precision) and the fraction of the total amount of relevant instances that were actually retrieved (recall).
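In an example, the precision-recall computation of the evaluation unit (504) may be sketched as follows; scikit-learn is an assumed library choice for illustration:

    from sklearn.metrics import precision_score, recall_score

    def evaluate(ground_truth, predictions, event_label=1):
        """Precision: fraction of relevant instances among retrieved instances.
        Recall: fraction of the relevant instances that were actually retrieved."""
        precision = precision_score(ground_truth, predictions, pos_label=event_label)
        recall = recall_score(ground_truth, predictions, pos_label=event_label)
        return precision, recall

    # Example: five test blocks, label 1 = "intrusion", label 0 = "no event".
    p, r = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])  # -> (2/3, 2/3)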
[0086] Thus, the testing means (500) tests the intrusion detection system (100) against a holdout set of training data.
[0087] FIG. 6 is a block diagram depicting the inference means of the FOS enabled intrusion detection system to predict intrusion. The inference means (600) includes a data ingestion unit (602), an anomaly detection unit (604), a neural network unit (606), a conditional random field unit (608) and a user interface (610).
[0088] The inference means works in three stages. The first stage is related to anomaly detection. An input from the data ingestion unit (602) is fed to the anomaly detection unit (604). The input is raw data such as image patterns. The anomaly detection unit (604) is a coarse filter for intrusion events that is implemented with a one-class support vector machine (ν-SVM). The anomaly detection unit (604) filters and decides what blocks should be sent to the segmentation unit (224) (shown in FIG. 2). These blocks are 1.3 sec x 256 sensors. The forward pass through the two-tier model provides classification output, i.e. class labels, on the image blocks. The number of such blocks that are formed in the training data is in the multi-millions and requires optimization. The vast majority of the blocks do not contain any event, as the intrusion events to be detected are infrequent. In an example, the modeler is trained with blocks containing 50 percent or more of pixels in event regions that are marked with ground truth labels referring to a class other than "NO EVENT".
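In an example, such a coarse anomaly filter may be sketched with scikit-learn's one-class SVM as below. The feature extraction (simple per-block summary statistics) and the synthetic data are assumptions for illustration; the disclosure does not fix the feature set:

    import numpy as np
    from sklearn.svm import OneClassSVM

    def block_features(blocks):
        """Illustrative per-block features: mean, standard deviation, peak amplitude."""
        flat = blocks.reshape(len(blocks), -1)
        return np.stack([flat.mean(1), flat.std(1), np.abs(flat).max(1)], axis=1)

    rng = np.random.default_rng(0)
    background_blocks = rng.normal(size=(500, 520, 40))  # synthetic placeholders
    incoming_blocks = rng.normal(size=(50, 520, 40))

    # Fit on background-only blocks so that event blocks appear as outliers.
    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    detector.fit(block_features(background_blocks))

    # predict() returns +1 for inliers (background) and -1 for anomalies;
    # only anomalous blocks are forwarded to the semantic segmentation unit.
    flags = detector.predict(block_features(incoming_blocks))
    candidates = incoming_blocks[flags == -1]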
[0089] An important shortcoming that this optimization produces concerns events which span more than one block. For such events, the outlying edges (corners, beginnings and ends) of the events get truncated and may not be transmitted to the deep learning model. To mitigate this issue, overlapping blocks are considered. These blocks are overlapped in the time as well as the sensor dimensions. The image block creation process can now be considered a "rolling window" as opposed to discrete chunks of the 2D DAS signal. A new block at every 6th sensor location [configurable parameter] is passed to the segmentation unit. The combined output from the blocks is created by taking the middle portion of all output images.
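In an example, the rolling-window block creation may be sketched as below. The 6-sensor stride mirrors the configurable parameter above, while the time stride and identifiers are assumptions for illustration:

    import numpy as np

    def rolling_blocks(das_2d, height=520, width=40,
                       sensor_stride=6, time_stride=260):
        """Yield overlapping image blocks from a 2D DAS buffer (time x sensors).

        Overlap in both dimensions keeps event edges, which would be truncated
        by non-overlapping chunks, inside at least one block; downstream, only
        the middle portion of each block's output is kept when combining."""
        n_pulses, n_sensors = das_2d.shape
        for t0 in range(0, n_pulses - height + 1, time_stride):
            for s0 in range(0, n_sensors - width + 1, sensor_stride):
                yield (t0, s0), das_2d[t0:t0 + height, s0:s0 + width]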
[0090] Simultaneously, the input from the data ingestion unit (602) is fed to the neural network unit (606). The neural network unit may implement a feedforward neural network, radial basis function neural network, Kohonen self-organizing neural network, recurrent neural network, convolutional neural network, fully convolution neural network, modular neural network, etc.
[0091] In an embodiment, the neural network unit (606) is a fully convolutional neural network (FCNN)/deep learning neural network (DNN), which acts as the second stage. Apart from the input received from the data ingestion unit (602), the output of the anomaly detection unit (604) is also fed to the neural network unit (606). The neural network unit (606) generates an output as a unary potential that is further fed to the CRF/Dense CRF unit (608), which acts as the third stage.
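In an example, the third stage may be sketched with the publicly available pydensecrf package, an assumed implementation choice for the dense conditional random field (the disclosure does not mandate a particular library). The FCNN's per-class probabilities supply the unary potentials, and a Gaussian pairwise term encourages spatially coherent labels:

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def refine(probs):
        """Refine FCNN class probabilities of shape (n_labels, H, W) into a
        structured per-pixel prediction using a dense CRF."""
        n_labels, height, width = probs.shape
        crf = dcrf.DenseCRF2D(width, height, n_labels)
        crf.setUnaryEnergy(unary_from_softmax(probs))  # unary potentials from the FCNN
        crf.addPairwiseGaussian(sxy=3, compat=3)       # spatial smoothness prior
        q = crf.inference(5)                           # mean-field iterations
        return np.argmax(q, axis=0).reshape(height, width)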
[0092] The output of the CRF/Dense CRF (608) is displayed on the user interface (610). The user interface (610) may be a smart phone, a computer, a laptop, or other suitable computing device.
[0093] FIG. 7 illustrates a flow chart (700) for deploying the deep learning model in fiber optic sensing (FOS) enabled intrusion detection system. At step (702), the method includes labelling image pixels obtained from image patterns. The labelling means (300) labels image pixels obtained from image patterns, wherein the image patterns are a patch of a two-dimensional signal obtained from a distributed acoustic sensing (DAS) sensor (304).
[0094] At step (704), the method includes training the convolution neural network using the minibatch stochastic gradient descent training procedure. The training means (400) trains the convolution neural network using a minibatch stochastic gradient descent training procedure, wherein the training means (400) trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed.
[0095] At step (706), the method includes testing intrusion detection system after training. The testing means (500) performs testing for the intrusion detection system (100) against a holdout set of training data.
[0096] At step (708), the method includes drawing structured prediction from the labelled image pixels. The inference means (600) provides structured prediction of occurrence of an intrusion event from the image pixels.
[0097] The labelling process, training process, testing methodology, and prediction method are already explained in detail in conjunction with FIGS. 3 to 6.
[0098] The various actions, acts, blocks, steps, or the like in the flow chart (700) may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the invention.
[0099] FIG. 8 is an example setup or a test setup (800) of the intrusion detection system. The solution proposed in the present disclosure was tested on two buried test sites and various fence types. Test beds (802, 804) cover a variety of soil types, a cable inside a duct and a direct buried cable. The test beds also included fences of a variety of fence types. In the example setup, a length of a dummy fiber (806) could be manipulated to test the solution with different fiber lengths. A first test bed (which may be a source test bed) (802) has 110 meters of cable, out of which 60 meters is direct buried and the remainder is inside a duct. Cables are buried at depths of 25 cm and 60 cm. A second test bed (which may be a new or target test bed) (804) has 220 meters of direct buried cable at a depth of 40 cm. Further, about 500 meters of the cable is deployed on fences covering various fence types such as Anti-Climb fencing, Y-Beams with Concertina coils, and Wire Mesh fencing.
[00100] The training means (400) trains a convolution neural network using a minibatch stochastic gradient descent training procedure for a small amount of time with labelled data from the new test bed and labelled data from the source test bed. The test setup (800) ensures data collection, training and testing of the proposed solution on a variety of events. Types of events tested included:
Human Event class: Human Walking and Running on cable, parallel to cable, perpendicular to the cable and movement at an angle to the cable.
Digging Event Class: Digging with a Shovel/Pickaxe, Mechanized Digging at various distances away from the cable.
Vehicle Event Class: Vehicle movement parallel, perpendicular and at an angle to the cable. Vehicles covered included Motorcycle, LMV and HMV.
Fence Event class: Fence climbing and cutting/tampering.
[00101] The performance of the proposed solution is shown across three event types, namely, human movement (walking and running), vehicle movement and manual digging in the field, and performance metrics are obtained. Important performance metrics are given below:
1. Detection rate or probability of detection
a. Human walking and running - 98.4%
b. Light vehicle movement - 98.3%
c. Manual digging - 98.6%
2. Response time or detection time < 8 sec
[00102] The probability of detection of ~98% (percentage of intrusions detected out of total intrusions), as measured on the test beds and event sets listed, beats the detection rate of existing market-available solutions, which is at 95%.
[00103] The intrusion detection system of the present disclosure considers the unsupervised (or semi-supervised) domain adaptation problem of adapting a model trained on a "source" test bed to a "target" test bed. The simplistic approach of the present disclosure for dealing with the domain adaptation problem is to train a model for a small amount of user-defined time (e.g., 150 minutes) with labelled data from the new test bed in addition to labelled data from the source test bed.
[00104] The embodiments disclosed herein can be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
[00105] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
[00106] Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope of the invention. It is intended that the specification and examples be considered as exemplary, with the true scope of the invention being indicated by the claims.
[00107] The methods and processes described herein may have fewer or additional steps or states and the steps or states may be performed in a different order. Not all steps or states need to be reached. The methods and processes described herein may be embodied in, and fully or partially automated via, software code modules executed by one or more general purpose computers. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in whole or in part in specialized computer hardware.
[00108] The results of the disclosed methods may be stored in any type of computer data repository, such as relational databases and flat file systems that use volatile and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or solid state RAM).
[00109] The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
[00110] Moreover, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor device, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor device can be a microprocessor, but in the alternative, the processor device can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor device can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor device includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor device may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
[00111] The elements of a method, process, routine, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor device. The processor device and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor device and the storage medium can reside as discrete components in a user terminal.
[00112] Conditional language used herein, such as, among others, "can," "may," "might," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
[00113] While the detailed description has shown, described, and pointed out novel features as applied to various embodiments, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the scope of the disclosure. As can be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
CLAIMS
We claim:
1. A system for deploying a deep learning model (108) in a fiber optic sensing (FOS) enabled intrusion detection system (100), the system comprising:
a labelling means (300) configured to label image pixels obtained from image patterns, wherein the image patterns are patches of a two-dimensional signal obtained from a distributed acoustic sensing (DAS) sensor (304);
a training means (400) configured to train a convolution neural network using a minibatch stochastic gradient descent training procedure, wherein the training means (400) trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed;
a testing means (500) configured to test performance of the intrusion detection system (100) against a holdout set of training data; and
an inference means (600) configured to make a structured prediction of occurrence of an intrusion event from the image pixels.
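[Editor's illustrative note: before the dependent claims elaborate each element, it may help to see the four means of claim 1 as one pipeline. The sketch below is an editor's illustration only; the class names and the toy labelling rule are assumptions for readability, not the claimed implementation.]

```python
import numpy as np

class LabellingMeans:
    """(300) Labels pixels of a 2-D DAS patch (time x fibre position)."""
    def label(self, patch: np.ndarray) -> np.ndarray:
        # Toy rule: top-1% energy pixels marked as event (1), rest background (0).
        return (np.abs(patch) > np.percentile(np.abs(patch), 99)).astype(np.int64)

class TrainingMeans:
    """(400) Minibatch-SGD training of a CNN; sketched after claims 5-6."""
    def train(self, patches, masks): ...

class TestingMeans:
    """(500) Holdout evaluation via precision-recall; sketched after claim 10."""
    def test(self, model, holdout): ...

class InferenceMeans:
    """(600) Structured per-pixel prediction; sketched after claims 7-9."""
    def predict(self, patch): ...

patch = np.random.randn(128, 512).astype(np.float32)   # one DAS image patch
mask = LabellingMeans().label(patch)                    # pixel-wise event mask
```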
2. The system as claimed in claim 1, wherein the convolution neural network is a fully convolution neural network.
3. The system as claimed in claim 1, wherein the pre-defined time is 150 minutes.
4. The system as claimed in claim 1, wherein the labelling means (300) comprises:
the DAS sensor (304) and an image capturing unit (306) configured to sense and capture an on-field event (302) and a real-time feed, wherein the on-field event (302) and the real-time feed form an input image;
a storage (310) configured to receive and store the input image to prepare for labelling; and
a labelling unit (312) configured to receive the input image prepared for labelling and to output labelled regions that are further forwarded to the storage (310) and accessed by the intrusion detection system (100).
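[Editor's illustrative note: claim 4's labelling chain (sensor/capture unit, storage, labelling unit, and back to storage) can be pictured as a simple round trip. A minimal sketch, assuming a patch is a NumPy array of shape (time_samples, fibre_positions) and labels are an integer mask of the same shape; all names below are hypothetical stand-ins.]

```python
import numpy as np

def capture_patch(n_time: int = 256, n_pos: int = 512) -> np.ndarray:
    """Stand-in for the DAS sensor (304) + image capturing unit (306)."""
    return np.random.randn(n_time, n_pos).astype(np.float32)

storage: dict[str, np.ndarray] = {}          # stand-in for the storage (310)

patch = capture_patch()
storage["patch_0001"] = patch                # store the input image for labelling

def labelling_unit(img: np.ndarray) -> np.ndarray:
    """Stand-in for the labelling unit (312): pixel-wise event/background mask."""
    threshold = np.percentile(np.abs(img), 99.0)
    return (np.abs(img) > threshold).astype(np.int64)

# Labelled regions are forwarded back to storage for the detection system.
storage["patch_0001_labels"] = labelling_unit(storage["patch_0001"])
```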
5. The system as claimed in claim 1, wherein the training means (400) runs an iterative process for adaptable intrusion detection, the training means (400) comprising:
a dataset unit (402) configured to store one or more sampled blocks received from the DAS sensor (304) and maintain an index for each of the one or more sampled blocks, wherein the one or more sampled blocks are training data;
a sampler (404) configured to select one or more sampling blocks as per predefined sampling criteria;
a data loader unit (406) configured to implement the minibatch stochastic gradient descent by forming batches from the one or more sampling blocks received from the sampler (404);
a modeler (408) configured to perform an anomaly detection and a semantic segmentation;
an evaluator (416) configured to:
determine a confusion matrix showing accuracy of model predictions against a ground truth in the training data, and plot learning curves; and
pass on the confusion matrix and the learning curves to the dataset unit (402) for correction.
6. The system as claimed in claim 5, wherein the predefined sampling criteria require the one or more sampling blocks to have more than a minimum number of event pixels.
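[Editor's illustrative note: claims 5 and 6 recite an iterative loop in which a sampler keeps only blocks with more than a minimum count of event pixels, a data loader batches them for minibatch SGD, and an evaluator builds a confusion matrix against ground truth. A minimal PyTorch sketch under those assumptions follows; the tiny network and all hyperparameters are placeholders, not the patented model.]

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic "sampled blocks": N patches of shape (1, 64, 64) with pixel labels.
blocks = torch.randn(200, 1, 64, 64)
labels = (blocks.abs() > 2.0).long().squeeze(1)           # (N, 64, 64) masks

# Sampler (404): keep blocks exceeding a minimum count of event pixels.
MIN_EVENT_PIXELS = 10
keep = labels.sum(dim=(1, 2)) > MIN_EVENT_PIXELS
dataset = TensorDataset(blocks[keep], labels[keep])       # dataset unit (402)
loader = DataLoader(dataset, batch_size=16, shuffle=True) # data loader unit (406)

# Modeler (408): a toy fully convolutional segmenter (2 classes per pixel).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 1),
)
optim = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                                    # iterative process
    for xb, yb in loader:                                 # minibatch SGD step
        optim.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optim.step()

# Evaluator (416): 2x2 confusion matrix of predictions vs. ground truth.
with torch.no_grad():
    preds = model(blocks[keep]).argmax(dim=1)
confusion = torch.zeros(2, 2, dtype=torch.long)
for t in range(2):
    for p in range(2):
        confusion[t, p] = ((labels[keep] == t) & (preds == p)).sum()
print(confusion)
```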
7. The system as claimed in claim 1, wherein the inference means (600) comprises:
an anomaly detection unit (604) configured to identify one or more blocks to be sent to a semantic segmentation unit (224), wherein the one or more blocks include image patterns;
a neural network (606) configured to segment and label the image patterns;
a conditional random field unit (608) configured to output a structured prediction of the labelled image patterns; and
a user interface (610) configured to display the structured prediction of the labelled image patterns.
8. The system as claimed in claim 7, wherein the conditional random field unit (608) implements a dense conditional random field and the neural network (606) is the convolution neural network.
9. The system as claimed in claim 7, wherein the inference means (600) includes a data ingestion unit (602) that forwards the one or more blocks to the anomaly detection unit (604) and the neural network (606).
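[Editor's illustrative note: claims 7-9 chain a data ingestion step, an anomaly-detection gate, a segmenting neural network, and a dense conditional random field that structures the final prediction. The sketch below keeps that shape but substitutes deliberately simple stand-ins: an energy threshold in place of the anomaly detector, a sigmoid on signal magnitude in place of the trained network, and a neighbour-averaging pass in place of a full dense CRF.]

```python
import numpy as np

def ingest(stream: np.ndarray, block: int = 64):
    """Data ingestion unit (602): split the 2-D signal into square blocks."""
    for i in range(0, stream.shape[0] - block + 1, block):
        for j in range(0, stream.shape[1] - block + 1, block):
            yield stream[i:i + block, j:j + block]

def is_anomalous(patch: np.ndarray, z: float = 3.0) -> bool:
    """Anomaly detection unit (604): cheap energy gate before segmentation."""
    return bool(np.abs(patch).max() > z * patch.std() + 1e-8)

def segment(patch: np.ndarray) -> np.ndarray:
    """Stand-in for the neural network (606): per-pixel class probabilities."""
    p_event = 1.0 / (1.0 + np.exp(-(np.abs(patch) - 2.0)))  # sigmoid on |signal|
    return np.stack([1.0 - p_event, p_event])               # (2, H, W)

def crf_smooth(probs: np.ndarray, iters: int = 3) -> np.ndarray:
    """Very rough stand-in for the dense CRF unit (608): neighbour averaging."""
    q = probs.copy()
    for _ in range(iters):
        pad = np.pad(q, ((0, 0), (1, 1), (1, 1)), mode="edge")
        q = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1] +
             pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:] + q) / 5.0
    return q.argmax(axis=0)                                 # structured labels

stream = np.random.randn(256, 256).astype(np.float32)
for patch in ingest(stream):
    if is_anomalous(patch):
        print(crf_smooth(segment(patch)).sum(), "event pixels")
```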
10. The system as claimed in claim 1, wherein the testing means (500) comprises an evaluation unit (504) configured to receive a ground truth and predictions, and to utilize precision-recall metrics to identify the fraction of relevant instances among the retrieved instances (precision) and the fraction of the total relevant instances that are actually retrieved (recall).
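[Editor's illustrative note: claim 10's evaluation unit computes standard precision and recall. A minimal pixel-level sketch for binary event masks; the array names are illustrative only.]

```python
import numpy as np

def precision_recall(ground_truth: np.ndarray, predictions: np.ndarray):
    """Pixel-level precision and recall for binary event masks."""
    tp = np.logical_and(predictions == 1, ground_truth == 1).sum()
    fp = np.logical_and(predictions == 1, ground_truth == 0).sum()
    fn = np.logical_and(predictions == 0, ground_truth == 1).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0  # relevant among retrieved
    recall = tp / (tp + fn) if tp + fn else 0.0     # relevant actually retrieved
    return precision, recall

gt = np.random.randint(0, 2, size=(64, 64))
pred = np.random.randint(0, 2, size=(64, 64))
print(precision_recall(gt, pred))
```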
11. A method for deploying a deep learning model (108) in a fiber optic sensing (FOS) enabled intrusion detection system (100), comprising:
labelling, by a labelling means (300), image pixels obtained from image patterns, wherein the image patterns are patches of a two-dimensional signal obtained from a distributed acoustic sensing (DAS) sensor (304);
training, by a training means (400), a convolution neural network using a minibatch stochastic gradient descent training procedure, wherein the training means (400) trains the convolution neural network for a pre-defined time with labelled data from a new test bed and labelled data from a source test bed;
testing, by a testing means (500), performance of the intrusion detection system (100) against a holdout set of training data; and
predicting, by an inference means (600), occurrence of an intrusion event from the image pixels.
12. The method as claimed in claim 11, wherein the convolution neural network is a fully convolution neural network.
13. The method as claimed in claim 11, wherein the pre-defined time is 150 minutes.
14. The method as claimed in claim 11, wherein labelling by the labelling means (300) comprises:
sensing and capturing, by the DAS sensor (304) and an image capturing unit (306), an on-field event (302) and a real-time feed, wherein the on-field event (302) and the real-time feed form an input image;
receiving and storing, by a storage (310), the input image to prepare for labelling; and
receiving, by a labelling unit (312), the input image prepared for labelling, and outputting labelled regions that are further forwarded to the storage (310) and accessed by the intrusion detection system (100).
15. The method as claimed in claim 11, wherein the training means (400) runs an iterative process for adaptable intrusion detection, the process comprising:
storing, by a dataset unit (402), one or more sampled blocks received from the DAS sensor (304) and maintaining an index for each of the one or more sampled blocks, wherein the one or more sampled blocks are training data;
selecting, by a sampler (404), one or more sampling blocks as per predefined sampling criteria;
implementing, by a data loader unit (406), the minibatch stochastic gradient descent by forming batches from the one or more sampling blocks received from the sampler (404);
performing, by a modeler (408), an anomaly detection and a semantic segmentation;
determining, by an evaluator (416), a confusion matrix showing accuracy of model predictions against a ground truth in the training data, and plotting learning curves; and
passing on, by the evaluator (416), the confusion matrix and the learning curves to the dataset unit (402) for correction.
16. The method as claimed in claim 15, wherein the predefined sampling criteria require the one or more sampling blocks to have more than a minimum number of event pixels.
17. The method as claimed in claim 11, wherein predicting by the inference means (600) comprises:
identifying, by an anomaly detection unit (604), one or more blocks to be sent to a semantic segmentation unit (224), wherein the one or more blocks include image patterns;
segmenting and labelling, by a neural network (606), the image patterns; and
outputting, by a conditional random field unit (608), a structured prediction of the labelled image patterns; and displaying, by a user interface (610), the structured prediction of the labelled image patterns.
18. The method as claimed in claim 17, wherein the conditional random field unit (608) implements a dense conditional random field and the neural network (606) is the convolution neural network.
19. The method as claimed in claim 17, wherein the inference means (600) includes a data ingestion unit (602) that forwards the one or more blocks to the anomaly detection unit (604) and the neural network (606).
20. The method as claimed in claim 11, wherein the testing means (500) comprises an evaluation unit (504) configured to receive a ground truth and predictions, and to utilize precision-recall metrics to identify the fraction of relevant instances among the retrieved instances (precision) and the fraction of the total relevant instances that are actually retrieved (recall).