Specification
Claims:
1. A processor-implemented method (200) for time-series classification using a reservoir-based spiking neural network, the method comprising the steps of:
receiving, via one or more hardware processors, a plurality of training time-series data, wherein each training time-series data of the plurality of training time-series data comprises a plurality of training time-series data values in an ordered sequence (202); and
training, via the one or more hardware processors, the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data, to obtain a time-series classification model (204), wherein the training comprises:
passing each training time-series data, to a first spike encoder of the reservoir-based spiking neural network, to obtain encoded spike trains for each training time-series data (204a);
passing a time-shifted training time-series data associated with each training time-series data, to a second spike encoder of the reservoir-based spiking neural network, to obtain the encoded spike trains for the time-shifted training time-series data associated with each training time-series data (204b);
providing (i) the encoded spike trains for each training time-series data and (ii) the encoded spike trains for the time-shifted training time-series data associated with each training time-series data, to a spiking reservoir of the reservoir-based spiking neural network, to obtain neuronal trace values of a plurality of excitatory neurons for each training time-series data (204c);
extracting a plurality of spatio-temporal features for each training time-series data from the neuronal trace values of the plurality of excitatory neurons for each training time-series data (204d); and
passing the plurality of spatio-temporal features for each training time-series data, to train a classifier of the reservoir-based spiking neural network, with corresponding class labels (204e).
2. The method of claim 1, further comprising:
receiving, via the one or more hardware processors, a plurality of input time-series data, wherein each of the plurality of input time-series data comprises a plurality of input time-series data values in the ordered sequence (206); and
passing, via the one or more hardware processors, the plurality of input time-series data to the time-series classification model, to obtain a class label for each of the plurality of input time-series data (208).
3. The method of claim 1, wherein the plurality of training time-series data is received from an edge computing network having one or more edge devices.
4. The method of claim 1, wherein the time-shifted training time-series data associated with each training time-series data, is obtained by shifting the training time-series data with a predefined shifted value.
5. The method of claim 1, wherein the reservoir-based spiking neural network comprises a first spike encoder, a second spike encoder, a spiking reservoir, and a classifier.
6. The method of claim 5, wherein the spiking reservoir is a dual population spike-based reservoir architecture comprising a plurality of excitatory neurons, a plurality of inhibitory neurons, and a plurality of sparse, random, and recurrent connections connecting the plurality of excitatory neurons and the plurality of inhibitory neurons.
7. A system (100) for time-series classification using a reservoir-based spiking neural network, the system comprising:
a memory (102) storing instructions;
one or more input/output (I/O) interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive a plurality of training time-series data, wherein each training time-series data of the plurality of training time-series data comprises a plurality of training time-series data values in an ordered sequence; and
train the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data, to obtain a time-series classification model, wherein the training comprises:
passing each training time-series data, to a first spike encoder of the reservoir-based spiking neural network, to obtain encoded spike trains for each training time-series data;
passing a time-shifted training time-series data associated with each training time-series data, to a second spike encoder of the reservoir-based spiking neural network, to obtain the encoded spike trains for the time-shifted training time-series data associated with each training time-series data;
providing (i) the encoded spike trains for each training time-series data and (ii) the encoded spike trains for the time-shifted training time-series data associated with each training time-series data, to a spiking reservoir of the reservoir-based spiking neural network, to obtain neuronal trace values of a plurality of excitatory neurons for each training time-series data;
extracting a plurality of spatio-temporal features for each training time-series data from the neuronal trace values of the plurality of excitatory neurons for each training time-series data; and
passing the plurality of spatio-temporal features for each training time-series data, to train a classifier of the reservoir-based spiking neural network, with corresponding class labels.
8. The system of claim 7, wherein the one or more hardware processors (104) are further configured by the instructions to:
receive a plurality of input time-series data, wherein each of the plurality of input time-series data comprises a plurality of input time-series data values in the ordered sequence; and
pass the plurality of input time-series data to the time-series classification model, to obtain a class label for each of the plurality of input time-series data.
9. The system of claim 7, wherein the plurality of training time-series data is received from an edge computing network having one or more edge devices.
10. The system of claim 7, wherein the time-shifted training time-series data associated with each training time-series data, is obtained by shifting the training time-series data with a predefined shifted value.
11. The system of claim 7, wherein the reservoir-based spiking neural network comprises a first spike encoder, a second spike encoder, a spiking reservoir, and a classifier.
12. The system of claim 11, wherein the spiking reservoir is a dual population spike-based reservoir architecture comprising a plurality of excitatory neurons, a plurality of inhibitory neurons, and a plurality of sparse, random, and recurrent connections connecting the plurality of excitatory neurons and the plurality of inhibitory neurons.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHODS AND SYSTEMS FOR TIME-SERIES CLASSIFICATION USING RESERVOIR-BASED SPIKING NEURAL NETWORK
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to the field of time-series classification, and, more particularly, to methods and systems for time-series classification using a reservoir-based spiking neural network implemented at edge computing applications.
BACKGROUND
A time series is an ordered sequence of real values - either single numerical values or multidimensional vectors - rendering the series univariate or multivariate, respectively. Thus, time series classification (TSC) can also be treated as a sequence classification problem.
On the other hand, embedding intelligence in the edge computing network has become a critical requirement for many industry domains, especially disaster management, manufacturing, retail, surveillance, remote sensing, etc. Many Internet of Things (IoT) applications, such as predictive maintenance in the manufacturing industry, need efficient classification of time-series data from various sensors together with a low-latency real-time response, thus making efficient time series classification (TSC) a prime need. As network reliability is not guaranteed, and data transfer affects latency as well as power consumption, in-situ processing is an important requirement in the industry.
Many different techniques exist for TSC, of which distance measure and nearest neighbour (NN) based clustering techniques, such as Weighted Dynamic Time Warping (DTW), Derivative DTW, etc., are commonly used. Transforming the time series into a new feature space coupled with ensembles of classification techniques (e.g., support vector machine (SVM), k-nearest neighbour (k-NN)) is also used to improve accuracy. Simultaneously, Artificial Neural Network (ANN) based methods, such as a convolutional neural network (CNN), a multilayer perceptron (MLP), an autoencoder, a recurrent neural network (RNN), etc., for solving TSC problems have also evolved. However, most such conventional techniques for the TSC problem are computationally intensive, and hence, achieving low-latency real-time response via on-board processing on computationally constrained edge devices remains unrealized. One edge-compatible variant based on adaptive learning exists.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, there is provided a processor-implemented method for time-series classification using a reservoir-based spiking neural network, the method comprising the steps of: receiving a plurality of training time-series data, wherein each training time-series data of the plurality of training time-series data comprises a plurality of training time-series data values in an ordered sequence; training the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data, to obtain a time-series classification model, wherein the training comprises: passing each training time-series data, to a first spike encoder of the reservoir-based spiking neural network, to obtain encoded spike trains for each training time-series data; passing a time-shifted training time-series data associated with each training time-series data, to a second spike encoder of the reservoir-based spiking neural network, to obtain the encoded spike trains for the time-shifted training time-series data associated with each training time-series data; providing (i) the encoded spike trains for each training time-series data and (ii) the encoded spike trains for the time-shifted training time-series data associated with each training time-series data, to a spiking reservoir of the reservoir-based spiking neural network, to obtain neuronal trace values of a plurality of excitatory neurons for each training time-series data; extracting a plurality of spatio-temporal features for each training time-series data from the neuronal trace values of the plurality of excitatory neurons for each training time-series data; and passing the plurality of spatio-temporal features for each training time-series data, to train a classifier of the reservoir-based spiking neural network, with corresponding class labels; receiving a plurality of input time-series data, wherein each of the plurality of input time-series data comprises a plurality of input time-series data values in the ordered sequence; and passing the plurality of input time-series data to the time-series classification model, to obtain a class label for each of the plurality of input time-series data.
In another aspect, there is provided a system for time-series classification using a reservoir-based spiking neural network, the system comprising: a memory storing instructions; one or more input/output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a plurality of training time-series data, wherein each training time-series data of the plurality of training time-series data comprises a plurality of training time-series data values in an ordered sequence; train the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data, to obtain a time-series classification model, wherein the training comprises: passing each training time-series data, to a first spike encoder of the reservoir-based spiking neural network, to obtain encoded spike trains for each training time-series data; passing a time-shifted training time-series data associated with each training time-series data, to a second spike encoder of the reservoir-based spiking neural network, to obtain the encoded spike trains for the time-shifted training time-series data associated with each training time-series data; providing (i) the encoded spike trains for each training time-series data and (ii) the encoded spike trains for the time-shifted training time-series data associated with each training time-series data, to a spiking reservoir of the reservoir-based spiking neural network, to obtain neuronal trace values of a plurality of excitatory neurons for each training time-series data; extracting a plurality of spatio-temporal features for each training time-series data from the neuronal trace values of the plurality of excitatory neurons for each training time-series data; and passing the plurality of spatio-temporal features for each training time-series data, to train a classifier of the reservoir-based spiking neural network, with corresponding class labels; receive a plurality of input time-series data, wherein each of the plurality of input time-series data comprises a plurality of input time-series data values in the ordered sequence; and pass the plurality of input time-series data to the time-series classification model, to obtain a class label for each of the plurality of input time-series data.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a plurality of training time-series data, wherein each training time-series data of the plurality of training time-series data comprises a plurality of training time-series data values in an ordered sequence; train the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data, to obtain a time-series classification model, wherein the training comprises: passing each training time-series data, to a first spike encoder of the reservoir-based spiking neural network, to obtain encoded spike trains for each training time-series data; passing a time-shifted training time-series data associated with each training time-series data, to a second spike encoder of the reservoir-based spiking neural network, to obtain the encoded spike trains for the time-shifted training time-series data associated with each training time-series data; providing (i) the encoded spike trains for each training time-series data and (ii) the encoded spike trains for the time-shifted training time-series data associated with each training time-series data, to a spiking reservoir of the reservoir-based spiking neural network, to obtain neuronal trace values of a plurality of excitatory neurons for each training time-series data; extracting a plurality of spatio-temporal features for each training time-series data from the neuronal trace values of the plurality of excitatory neurons for each training time-series data; and passing the plurality of spatio-temporal features for each training time-series data, to train a classifier of the reservoir-based spiking neural network, with corresponding class labels; receive a plurality of input time-series data, wherein each of the plurality of input time-series data comprises a plurality of input time-series data values in the ordered sequence; and pass the plurality of input time-series data to the time-series classification model, to obtain a class label for each of the plurality of input time-series data.
In an embodiment, the plurality of training time-series data is received from an edge computing network having one or more edge devices.
In an embodiment, the time-shifted training time-series data associated with each training time-series data, is obtained by shifting the training time-series data with a predefined shifted value.
In an embodiment, the reservoir-based spiking neural network comprises a first spike encoder, a second spike encoder, a spiking reservoir, and a classifier.
In an embodiment, the spiking reservoir is a dual population spike-based reservoir architecture comprising a plurality of excitatory neurons, a plurality of inhibitory neurons, and a plurality of sparse, random, and recurrent connections connecting the plurality of excitatory neurons and the plurality of inhibitory neurons.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of the present disclosure, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 is an exemplary block diagram of a system for time-series classification using a reservoir-based spiking neural network, in accordance with some embodiments of the present disclosure.
FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a processor-implemented method for time-series classification using a reservoir-based spiking neural network, in accordance with some embodiments of the present disclosure.
FIG. 3 is an exemplary block diagram showing an architecture of a reservoir-based spiking neural network, in accordance with some embodiments of the present disclosure.
FIG. 4A through FIG. 4C illustrate exemplary graphical representations of an encoded spike train for an exemplary time-series data value, using a temporal Gaussian encoding technique, in accordance with some embodiments of the present disclosure.
FIG. 5A through FIG. 5D are graphs showing a performance comparison of a rate-based Poisson encoding and a temporal Gaussian encoding, on a sample time-series data, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
The recent evolution of non-von Neumann neuromorphic systems that collocate computation and data in a manner similar to mammalian brains, coupled with the paradigm of Spiking Neural Networks (SNNs), has shown promise as a candidate for providing effective solutions to the time-series classification (TSC) problem. SNNs, owing to their event-based asynchronous processing and sparse data handling, are less computationally intensive compared to other techniques, which makes them potential candidates for TSC problems at the edge. Among the different network architectures of SNNs, reservoirs - sets of randomly and recurrently connected excitatory and inhibitory neurons - are found to be the most suitable for temporal feature extraction.
However, conventional reservoir-based SNN techniques address the problem either by using non-bio-plausible backpropagation-based mechanisms, or by optimizing the network weight parameters. Further, the conventional reservoir-based SNN techniques are limited and not very accurate in solving TSC problems. Also, for SNNs to perform efficiently, the input data must be encoded into spike trains, which is not much discussed in the conventional techniques and remains an area of improvement for obtaining an efficient reservoir-based time-series classification model that solves TSC problems at the edge computing network.
The present disclosure herein provides methods and systems for time-series classification using a reservoir-based spiking neural network, to solve the technical problems of TSC at an edge computing network. The disclosed reservoir-based spiking neural network is capable of mimicking brain functionalities more closely and of learning the dynamics of the reservoir using a fixed set of weights, thus saving on weight learning. According to an embodiment of the present disclosure, the time-series data is first encoded using a spike encoder in order to retain the maximum possible information, which is of utmost importance. Then the spiking reservoir is used to extract the spatio-temporal features of the time-series data. Lastly, the extracted spatio-temporal features of the time-series data are used to train a classifier to obtain the time-series classification model, which is used to classify, in real time, the time-series data received from edge devices present in the edge computing network.
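For illustration only (not a definitive implementation of the disclosed embodiments), the following Python sketch shows how the last stage of such a pipeline might be realized: the neuronal trace values recorded from the excitatory neurons for each training time-series are reduced to a spatio-temporal feature vector and used, with the corresponding class labels, to train a classifier. The feature definition (final trace values concatenated with their temporal mean), the random placeholder traces, and the choice of logistic regression are all assumptions made for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(trace_values):
    # trace_values: array of shape (num_timesteps, num_excitatory_neurons)
    # holding neuronal trace values of the excitatory neurons for one
    # training time-series. The feature definition below (final trace values
    # concatenated with their temporal mean) is an assumption for the sketch.
    return np.concatenate([trace_values[-1], trace_values.mean(axis=0)])

# Hypothetical placeholder traces and class labels for 20 training series.
rng = np.random.default_rng(0)
num_series, num_steps, num_excitatory = 20, 100, 50
traces = [rng.random((num_steps, num_excitatory)) for _ in range(num_series)]
class_labels = rng.integers(0, 2, size=num_series)

features = np.stack([extract_features(t) for t in traces])

# Train the classifier on the spatio-temporal features with class labels.
classifier = LogisticRegression(max_iter=1000).fit(features, class_labels)

# At inference, an input time-series passes through the same encoders and
# reservoir; its feature vector is then classified to obtain a class label.
predicted_label = classifier.predict(features[:1])[0]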
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 5D, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary systems and/or methods.
FIG. 1 is an exemplary block diagram of a system 100 for time-series classification using a reservoir-based spiking neural network, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.
The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.
The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems with one another or to another server computer. Further, the I/O interface(s) 106 may include one or more ports for connecting a number of devices to one another or to another server.
The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be implemented by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG. 1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
The repository 102b may include a database or a data engine. Further, the repository 102b amongst other things, may serve as a database or includes a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such external database may be periodically updated. For example, new data may be added into the external database and/or existing data may be modified and/or non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.
Referring to FIG. 2A and FIG. 2B, components and functionalities of the system 100 are described in accordance with an example embodiment of the present disclosure. For example, FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a processor-implemented method 200 for time-series classification using a reservoir-based spiking neural network, in accordance with some embodiments of the present disclosure. Although steps of the method 200 including process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently.
At step 202 of the method 200, the one or more hardware processors 104 of the system 100 are configured to receive a plurality of training time-series data. Each training time-series data of the plurality of training time-series data includes a plurality of training time-series data values. The plurality of training time-series data values of each training time-series data may be present in an ordered sequence. The plurality of training time-series data may be of a fixed length or of varied lengths.
The plurality of training time-series data is associated with one or more edge devices that are present in an edge computing network. The one or more edge devices include different types of sensors, actuators, and so on. One training time-series data, or some of the plurality of training time-series data, may be received from each edge device. For example, temperature measurement values from a temperature sensor in a given time instance may form one training time-series data. Similarly, the temperature measurement values from the temperature sensor measured in multiple given time instances result in multiple training time-series data, and so on. Hence, the plurality of training time-series data values are real numbers in nature, as they are measurement values.
An exemplary training time-series data is: {2, 6, 34, 69, 78, 113, 283}. The length of the exemplary training time-series data is 7, and 2, 6, 34, … are the training time-series data values.
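As an illustration of how the time-shifted counterpart of a training time-series may be prepared for the second spike encoder, the following minimal Python sketch assumes that the predefined shift value delays the sequence by a fixed number of time steps and that the leading positions are padded with the first value; both the shift value and the padding scheme are assumptions, not prescriptions of the disclosure.

import numpy as np

def time_shift(series, shift_value):
    # Return a time-shifted copy of a training time-series.
    # Assumption for illustration: the series is delayed by `shift_value`
    # steps and the leading positions are padded with the first value.
    series = np.asarray(series, dtype=float)
    shifted = np.empty_like(series)
    shifted[:shift_value] = series[0]
    shifted[shift_value:] = series[:-shift_value]
    return shifted

# Exemplary training time-series data from the description.
training_series = [2, 6, 34, 69, 78, 113, 283]
predefined_shift = 2  # hypothetical predefined shift value

original = np.asarray(training_series, dtype=float)
shifted = time_shift(training_series, predefined_shift)
# `original` feeds the first spike encoder; `shifted` feeds the second one.
# original -> [2., 6., 34., 69., 78., 113., 283.]
# shifted  -> [2., 2., 2., 6., 34., 69., 78.]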
At step 204 of the method 200, the one or more hardware processors 104 of the system 100 are configured to train the reservoir-based spiking neural network with each training time-series data, at a time, of the plurality of training time-series data received at step 202 of the method 200, to obtain a time-series classification model. The time-series classification model obtained at this step is used for classifying time-series data as required.
FIG. 3 is an exemplary block diagram showing an architecture of a reservoir-based spiking neural network 300, in accordance with some embodiments of the present disclosure. As shown in FIG. 3, the reservoir-based spiking neural network 300 comprises a first spike encoder 302A, a second spike encoder 302B, a spiking reservoir 304, and a classifier 306. In an embodiment, the spiking reservoir 304 is a dual-population spike-based reservoir architecture comprising a plurality of excitatory neurons and a plurality of inhibitory neurons. A plurality of sparse, random, and recurrent connections connects the plurality of excitatory neurons and the plurality of inhibitory neurons. More specifically, the spiking reservoir 304 is a sparse and recurrently connected population of the plurality of excitatory neurons and the plurality of inhibitory neurons, where each neuron is connected to a set of other neurons in the same population in a probabilistic fashion such that the resulting dynamics of the network remain stable and do not go into a chaotic regime. The sparse and recurrently connected population is capable of extracting spatio-temporal features better than other spiking network architectures.
The reservoir paradigm has evolved into two different types based on the nature of the neurons. The first type is (i) Echo State Networks (ESN), where rate-based neurons and continuous activation functions are used. The second type is (ii) Liquid State Machines (LSM), where spiking neurons with an asynchronous threshold activation function are used. Liquid State Machines (LSM) are found to be efficient for tasks involving spatio-temporal feature extraction, such as gesture recognition, time-series prediction, etc., when used with proper spike encoding techniques.
In the context of the present disclosure, the spiking reservoir architecture 304 includes a number of excitatory neurons N_ex, a number of inhibitory neurons N_inhi, and a number of recurrent connections N_rec. The sparse random connections between the input features and the LSM are controlled by their out-degree parameter, denoted by Input_(out-degree). All of these are tunable parameters and can be adjusted to improve the dynamics of the spiking reservoir 304 to achieve better performance. Finally, a set of weight scalar values is tuned and fixed for the inter-population network connections (such as input-to-excitatory, excitatory-to-input, inhibitory-to-excitatory, inhibitory-to-inhibitory, and time-shifted input-to-excitatory) in order to bring in stability and better performance.
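As a hedged illustration of the tunable reservoir parameters described above, the following Python sketch constructs a sparse, random, recurrent dual-population connectivity with assumed values for N_ex, N_inhi, the input out-degree, and the fixed inter-population weight scalars; a connection probability is used in place of an explicit N_rec count, and neither the values nor the connection rules are the actual configuration of the disclosure.

import numpy as np

rng = np.random.default_rng(42)

# Tunable reservoir parameters (illustrative values, not the disclosure's).
N_ex, N_inhi = 100, 25          # excitatory / inhibitory population sizes
p_rec = 0.1                     # probability of a recurrent connection
input_out_degree = 8            # fan-out from each input feature
num_input_features = 2          # spike streams from the two encoders

# Fixed weight scalars for the inter-population connections (assumed values).
w_input_to_ex = 1.0
w_ex_to_inhi = 0.5
w_inhi_to_ex = -1.0
w_inhi_to_inhi = -0.2

# Sparse, random, recurrent connectivity among reservoir neurons.
ex_to_inhi = (rng.random((N_ex, N_inhi)) < p_rec) * w_ex_to_inhi
inhi_to_ex = (rng.random((N_inhi, N_ex)) < p_rec) * w_inhi_to_ex
inhi_to_inhi = (rng.random((N_inhi, N_inhi)) < p_rec) * w_inhi_to_inhi

# Sparse random input connections controlled by the out-degree parameter:
# each input feature projects to `input_out_degree` excitatory neurons.
input_to_ex = np.zeros((num_input_features, N_ex))
for i in range(num_input_features):
    targets = rng.choice(N_ex, size=input_out_degree, replace=False)
    input_to_ex[i, targets] = w_input_to_ex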
Also, in the context of the present disclosure, a Leaky Integrate-and-Fire (LIF) neuron model is used, as it is computationally easier to simulate and work with. The LIF neuron dynamics are described by equation (1):
τ_m dV/dt = (V_rest - V) + IR -------------------- (1)
where V is the membrane potential, V_rest is the resting potential, τ_m is the membrane time constant, I is the input current, and R is the membrane resistance. A neuron emits a spike s when its membrane potential reaches the threshold V_thresh, as given by equation (2):
s = 1, if V ≥ V_thresh; s = 0, otherwise -------------------- (2)
For the rate-based Poisson spike encoding, the probability of observing n spikes in an interval (t_1, t_2) is given by equation (3):
P(n spikes in (t_1, t_2)) = e^(-⟨n⟩) ⟨n⟩^n / n! -------------------- (3)
where the average spike count ⟨n⟩ is expressed as:
⟨n⟩ = ∫_(t_1)^(t_2) r(t) dt -------------------- (4)
r(t) being the instantaneous firing rate. For a slowly varying r(t) within a small time sub-interval dt, r(t) can be assumed equivalent to a discrete rate value r[i]. With the approximation of a Poisson process by infinitely many Bernoulli trials, and of each Bernoulli trial by a uniform draw x[i] at each time step i, a spike T[i] can be denoted as T[i] = 1 if x[i] < r[i]·dt, and T[i] = 0 otherwise.
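A minimal Python sketch of the rate-based Poisson spike encoding of equations (3) and (4), via the Bernoulli approximation described above, together with an Euler-integrated LIF neuron following equations (1) and (2), is given below; the time step, maximum firing rate, input gain, and reset-to-rest rule are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(7)
dt = 0.001  # simulation time step in seconds (assumed)

def poisson_encode(values, max_rate=100.0, steps_per_value=50):
    # Rate-based Poisson spike encoding via the Bernoulli approximation.
    # Each data value is mapped to an instantaneous firing rate r[i] and, at
    # every time step, a spike is emitted when a uniform draw x[i] < r[i]*dt.
    values = np.asarray(values, dtype=float)
    rates = max_rate * (values - values.min()) / (np.ptp(values) + 1e-12)
    rates = np.repeat(rates, steps_per_value)   # r[i] held for each time step
    return (rng.random(rates.shape) < rates * dt).astype(int)

def lif_spikes(input_current, tau_m=0.02, R=1.0, v_rest=0.0, v_thresh=1.0):
    # Euler integration of tau_m * dV/dt = (V_rest - V) + I*R with threshold
    # spiking and reset to V_rest (the reset rule is an assumption).
    v, spikes = v_rest, []
    for I in input_current:
        v += (dt / tau_m) * ((v_rest - v) + I * R)
        if v >= v_thresh:
            spikes.append(1)
            v = v_rest
        else:
            spikes.append(0)
    return np.array(spikes)

series = [2, 6, 34, 69, 78, 113, 283]        # exemplary time-series data
spike_train = poisson_encode(series)         # encoded spike train
output_spikes = lif_spikes(spike_train * 50) # drive one LIF neuron (assumed gain)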