
Method And System For Multi-Sensor Fusion Using Transform Learning

Abstract: This disclosure relates to multi-sensor fusion using Transform Learning (TL), which provides a more compact representation of data in many scenarios than Dictionary Learning (DL) and deep network models, which can be computationally intensive and complex. A two-stage approach for better modeling of sensor data is provided: in the first stage, representations of the individual sensor time series are learnt using dedicated transforms and their associated coefficients; in the second stage, all the representations are fused together using a fusing (common) transform and its associated coefficients to effectively capture the correlation between the different sensor representations for deriving an inference. The method and system of the present disclosure find application in areas employing multiple sensors that are mostly heterogeneous in nature.


Patent Information

Application #
202021036163
Filing Date
21 August 2020
Publication Number
08/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai - 400021, Maharashtra, India
Indian Institute of Technology Kharagpur
Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India

Inventors

1. KUMAR, Kriti
Tata Consultancy Services Limited, #152, Gopalan Global Axis H - Block, Opposite Satya Sai Hospital, ITPL Main road, EPIP Zone, Whitefield, Bangalore - 560066, Karnataka, India
2. CHANDRA, Mariswamy Girish
Tata Consultancy Services Limited, #152, Gopalan Global Axis H - Block, Opposite Satya Sai Hospital, ITPL Main road, EPIP Zone, Whitefield, Bangalore - 560066, Karnataka, India
3. KUMAR, Achanna Anil
Tata Consultancy Services Limited, #152, Gopalan Global Axis H - Block, Opposite Satya Sai Hospital, ITPL Main road, EPIP Zone, Whitefield, Bangalore - 560066, Karnataka, India
4. MAJUMDAR, Angshul
Department of Electronics and Communication Engineering, Indraprastha Institute of Information Technology, Delhi - 110020, India
5. MISHRA, Debasish
Advanced Technology Development Centre, Indian Institute of Technology, Kharagpur - 721302, West Bengal, India
6. PAL, Surjya Kanta
Department of Mechanical Engineering, Indian Institute of Technology, Kharagpur - 721302, West Bengal, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention: METHOD AND SYSTEM FOR MULTI-SENSOR FUSION USING TRANSFORM LEARNING
Applicants: Tata Consultancy Services Limited, a company incorporated in India under the Companies Act, 1956, having address: Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India & Indian Institute of Technology Kharagpur, an Indian education institute having address: Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional patent application no. 202021036163, filed on August 21, 2020. The entire contents of the aforementioned application are incorporated herein by reference.

TECHNICAL FIELD
The disclosure herein generally relates to multi-sensor fusion, and, more particularly, to multi-sensor fusion using transform learning.

BACKGROUND
Multi-sensor fusion is a technique that combines sensory data from disparate sensors to obtain more useful information than is possible with a single sensor. The information thus obtained may constitute a more accurate, complete or dependable 'view' of an entity or a system being sensed. The technique also enhances data availability and authenticity with relatively low complexity. Multi-sensor fusion techniques often encounter challenges related to data imperfection, diversity of sensing mechanisms and the nature of the application environment. Hence, depending on the nature of the problem and the information available from the sensors, various fusion architectures may be adopted. However, prevalent solutions for multi-sensor fusion are computationally complex and may not perform well in all scenarios.

SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, there is provided a processor implemented method comprising the steps of: receiving, via one or more hardware processors, a plurality of training data (X_1, X_2, ..., X_n) from a plurality of sensors connected to a monitored system with a training output (y); performing, via the one or more hardware processors, a joint optimization of a set of parameters including (i) sensor specific transforms (T_1, T_2, ..., T_n) and (ii) sensor specific coefficients (Z_1, Z_2, ..., Z_n), wherein each of the sensor specific transforms and the sensor specific coefficients correspond to a training data in the plurality of training data (X_1, X_2, ..., X_n), (iii) a fusing transform T^(f), (iv) a fusing coefficient Z^(f), and (v) a weight matrix (w) (304), and wherein the joint optimization comprises: initializing the sensor specific transforms (T_1, T_2, ..., T_n) and the fusing transform T^(f) with a random matrix comprising real numbers between 0 and 1; estimating the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the initialized sensor specific transforms (T_1, T_2, ..., T_n) and a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n); estimating the fusing coefficient Z^(f) based on the initialized fusing transform T^(f) and the estimated sensor specific coefficients (Z_1, Z_2, ..., Z_n); estimating the weight matrix (w) based on the training output (y) and the estimated fusing coefficient Z^(f); and iteratively performing joint learning using the initialized parameters and the estimated parameters in a first iteration and learnt parameters thereafter until a termination criterion is met, the joint learning comprising: learning each of the sensor specific transforms (T_1, T_2, ..., T_n) based on a corresponding sensor specific coefficient (Z_1, Z_2, ..., Z_n) and the plurality of training data (X_1, X_2, ..., X_n); learning each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the fusing transform T^(f), a corresponding sensor specific transform (T_1, T_2, ..., T_n), the fusing coefficient Z^(f), a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n), and the remaining sensor specific coefficients (Z_1, Z_2, ..., Z_n); learning the fusing transform T^(f) based on the sensor specific coefficients (Z_1, Z_2, ..., Z_n) and the fusing coefficient Z^(f); learning the fusing coefficient Z^(f) based on the fusing transform T^(f), the sensor specific coefficients (Z_1, Z_2, ..., Z_n), the weight matrix (w) and the training output (y); and learning the weight matrix (w) based on the fusing coefficient Z^(f) and the training output (y); wherein the termination criterion is one of (i) completion of a predefined number of iterations (Maxiter) and (ii) the difference between the fusing transform T^(f) of a current iteration and the fusing transform T^(f) of a previous iteration being less than an empirically determined threshold value (Tol); to obtain jointly (i) the learnt sensor specific transforms (T_1, T_2, ..., T_n), (ii) the learnt fusing transform T^(f) and (iii) the learnt weight matrix (w) for the monitored system being sensed by the plurality of sensors.
In another aspect, there is provided a system comprising: memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a plurality of training data (X_1, X_2, ..., X_n) from a plurality of sensors connected to a monitored system with a training output (y); perform a joint optimization of a set of parameters including (i) sensor specific transforms (T_1, T_2, ..., T_n) and (ii) sensor specific coefficients (Z_1, Z_2, ..., Z_n), wherein each of the sensor specific transforms and the sensor specific coefficients correspond to a training data in the plurality of training data (X_1, X_2, ..., X_n), (iii) a fusing transform T^(f), (iv) a fusing coefficient Z^(f), and (v) a weight matrix (w), and wherein the joint optimization comprises: initializing the sensor specific transforms (T_1, T_2, ..., T_n) and the fusing transform T^(f) with a random matrix comprising real numbers between 0 and 1; estimating the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the initialized sensor specific transforms (T_1, T_2, ..., T_n) and a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n); estimating the fusing coefficient Z^(f) based on the initialized fusing transform T^(f) and the estimated sensor specific coefficients (Z_1, Z_2, ..., Z_n); estimating the weight matrix (w) based on the training output (y) and the estimated fusing coefficient Z^(f); and iteratively performing joint learning using the initialized parameters and the estimated parameters from the set of parameters in a first iteration and learnt parameters thereafter until a termination criterion is met, the joint learning comprising: learning each of the sensor specific transforms (T_1, T_2, ..., T_n) based on a corresponding sensor specific coefficient (Z_1, Z_2, ..., Z_n) and the plurality of training data (X_1, X_2, ..., X_n); learning each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the fusing transform T^(f), a corresponding sensor specific transform (T_1, T_2, ..., T_n), the fusing coefficient Z^(f), a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n), and the remaining sensor specific coefficients (Z_1, Z_2, ..., Z_n); learning the fusing transform T^(f) based on the sensor specific coefficients (Z_1, Z_2, ..., Z_n) and the fusing coefficient Z^(f); learning the fusing coefficient Z^(f) based on the fusing transform T^(f), the sensor specific coefficients (Z_1, Z_2, ..., Z_n), the weight matrix (w) and the training output (y); and learning the weight matrix (w) based on the fusing coefficient Z^(f) and the training output (y); wherein the termination criterion is one of (i) completion of a predefined number of iterations (Maxiter) and (ii) the difference between the fusing transform T^(f) of a current iteration and the fusing transform T^(f) of a previous iteration being less than an empirically determined threshold value (Tol); to obtain jointly (i) the learnt sensor specific transforms (T_1, T_2, ..., T_n), (ii) the learnt fusing transform T^(f) and (iii) the learnt weight matrix (w) for the monitored system being sensed by the plurality of sensors.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a plurality of training data (X_1, X_2, ..., X_n) from a plurality of sensors connected to a monitored system with a training output (y); perform a joint optimization of a set of parameters including (i) sensor specific transforms (T_1, T_2, ..., T_n) and (ii) sensor specific coefficients (Z_1, Z_2, ..., Z_n), wherein each of the sensor specific transforms and the sensor specific coefficients correspond to a training data in the plurality of training data (X_1, X_2, ..., X_n), (iii) a fusing transform T^(f), (iv) a fusing coefficient Z^(f), and (v) a weight matrix (w), and wherein the joint optimization comprises: initializing the sensor specific transforms (T_1, T_2, ..., T_n) and the fusing transform T^(f) with a random matrix comprising real numbers between 0 and 1; estimating the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the initialized sensor specific transforms (T_1, T_2, ..., T_n) and a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n); estimating the fusing coefficient Z^(f) based on the initialized fusing transform T^(f) and the estimated sensor specific coefficients (Z_1, Z_2, ..., Z_n); estimating the weight matrix (w) based on the training output (y) and the estimated fusing coefficient Z^(f); and iteratively performing joint learning using the initialized parameters and the estimated parameters from the set of parameters in a first iteration and learnt parameters thereafter until a termination criterion is met, the joint learning comprising: learning each of the sensor specific transforms (T_1, T_2, ..., T_n) based on a corresponding sensor specific coefficient (Z_1, Z_2, ..., Z_n) and the plurality of training data (X_1, X_2, ..., X_n); learning each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) based on the fusing transform T^(f), a corresponding sensor specific transform (T_1, T_2, ..., T_n), the fusing coefficient Z^(f), a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n), and the remaining sensor specific coefficients (Z_1, Z_2, ..., Z_n); learning the fusing transform T^(f) based on the sensor specific coefficients (Z_1, Z_2, ..., Z_n) and the fusing coefficient Z^(f); learning the fusing coefficient Z^(f) based on the fusing transform T^(f), the sensor specific coefficients (Z_1, Z_2, ..., Z_n), the weight matrix (w) and the training output (y); and learning the weight matrix (w) based on the fusing coefficient Z^(f) and the training output (y); wherein the termination criterion is one of (i) completion of a predefined number of iterations (Maxiter) and (ii) the difference between the fusing transform T^(f) of a current iteration and the fusing transform T^(f) of a previous iteration being less than an empirically determined threshold value (Tol); to obtain jointly (i) the learnt sensor specific transforms (T_1, T_2, ..., T_n), (ii) the learnt fusing transform T^(f) and (iii) the learnt weight matrix (w) for the monitored system being sensed by the plurality of sensors.
In accordance with an embodiment of the present disclosure, the joint optimization is represented as

\min_{T_1,\dots,T_n,T^{(f)},Z_1,\dots,Z_n,Z^{(f)},w} \|T_1X_1 - Z_1\|_F^2 + \|T_2X_2 - Z_2\|_F^2 + \dots + \|T_nX_n - Z_n\|_F^2 + \lambda_1(\|T_1\|_F^2 - \log\det T_1) + \lambda_2(\|T_2\|_F^2 - \log\det T_2) + \dots + \lambda_n(\|T_n\|_F^2 - \log\det T_n) + \|T^{(f)}[Z_1; Z_2; \dots; Z_n] - Z^{(f)}\|_F^2 + \lambda(\|T^{(f)}\|_F^2 - \log\det T^{(f)}) + \alpha\|y - wZ^{(f)}\|_2^2

wherein T_1 \in \mathbb{R}^{K_1 \times m_1}, T_2 \in \mathbb{R}^{K_2 \times m_2}, \dots, T_n \in \mathbb{R}^{K_n \times m_n} are the sensor specific transforms, T^{(f)} \in \mathbb{R}^{K \times (K_1 + K_2 + \dots + K_n)} is the fusing transform, Z_1 \in \mathbb{R}^{K_1 \times N}, Z_2 \in \mathbb{R}^{K_2 \times N}, \dots, Z_n \in \mathbb{R}^{K_n \times N} are the sensor specific coefficients, Z^{(f)} \in \mathbb{R}^{K \times N} is the fusing coefficient and w \in \mathbb{R}^{1 \times K} is the weight matrix, provided the training output y \in \mathbb{R}^{1 \times N}; K_1, \dots, K_n and K are the sizes of the sensor specific transforms and the fusing transform respectively, and N is the number of measurements in the training data.

In accordance with an embodiment of the present disclosure, the learning of each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) is represented as

Z_j = (T_j^{(f)T} T_j^{(f)} + I)^{-1}\,(T_jX_j + T_j^{(f)T}(Z^{(f)} - \sum_{i=1, i \neq j}^{n} T_i^{(f)} Z_i)) for j = 1, ..., n,

wherein T_j^{(f)} denotes the block of columns of T^{(f)} that multiplies Z_j.

In accordance with an embodiment of the present disclosure, the learning of the fusing coefficient Z^(f) is represented as

Z^{(f)} = (I + \alpha w^T w)^{-1}\,(T^{(f)}Z' + \alpha w^T y), wherein Z' = [Z_1; Z_2; \dots; Z_n] denotes the vertical stacking of the sensor specific coefficients.

In accordance with an embodiment of the present disclosure, the learning of the weight matrix (w) is represented as

w \leftarrow \min_{w} \alpha\|y - wZ^{(f)}\|_2^2.

In accordance with an embodiment of the present disclosure, the one or more processors are further configured to estimate an output (y_new) of the monitored system for a plurality of new data (x_1, x_2, ..., x_n) by: receiving, via the one or more hardware processors, the plurality of new data (x_1, x_2, ..., x_n) from the plurality of sensors connected to the monitored system; estimating the sensor specific coefficients (z_1, z_2, ..., z_n) corresponding to the plurality of new data (x_1, x_2, ..., x_n) using the plurality of new data (x_1, x_2, ..., x_n) and the learnt sensor specific transforms (T_1, T_2, ..., T_n); estimating a new fusing coefficient z^(f) using the learnt fusing transform T^(f) and the estimated sensor specific coefficients (z_1, z_2, ..., z_n); and estimating the output (y_new) for the monitored system based on the learnt weight matrix (w) and the estimated new fusing coefficient z^(f).

In accordance with an embodiment of the present disclosure, the plurality of training data (X_1, X_2, ..., X_n) and the plurality of new data (x_1, x_2, ..., x_n) are kernelized using a kernel function.

In accordance with an embodiment of the present disclosure, the monitored system is a Friction Stir Welding (FSW) machine, the plurality of sensors include sensors configured to capture the new data pertaining to force, torque and power for a welding process implemented by the FSW machine, and the output (y) is a value representing Ultimate Tensile Strength (UTS) indicative of the quality of the weld performed by the FSW machine.
In accordance with an embodiment of the present disclosure, estimating the sensor specific coefficients (z_1, z_2, ..., z_n) corresponding to the plurality of new data (x_1, x_2, ..., x_n) is represented as z_j = T_j x_j for j = 1, ..., n; estimating the new fusing coefficient z^(f) is represented as z^{(f)} = T^{(f)}[z_1; z_2; \dots; z_n]; and estimating the output (y_new) for the monitored system is represented as y_new = wz^{(f)}.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG.1 illustrates an exemplary block diagram of a system for multi-sensor fusion using Transform Learning, in accordance with some embodiments of the present disclosure.
FIG.2 is a high-level flow diagram of a method for multi-sensor fusion using Transform Learning, according to some embodiments of the present disclosure.
FIG.3A through FIG.3D illustrate an exemplary flow diagram of a computer implemented method for multi-sensor fusion using Transform Learning, in accordance with some embodiments of the present disclosure.
FIG.4A illustrates Ultimate Tensile Strength (UTS) of welded joints estimated using kernel-based methods known in the art and the kernel-based method according to some embodiments of the present disclosure.
FIG.4B illustrates UTS of welded joints estimated using sensors individually versus all sensors fused by the kernel-based method according to some embodiments of the present disclosure.
FIG.5A illustrates half-day-ahead building power consumption forecasts obtained using kernel-based methods known in the art and the multi-sensor fusion methods according to some embodiments of the present disclosure.
FIG.5B illustrates half-day-ahead building power consumption forecasts obtained using sensors individually versus all sensors fused by the kernel-based method according to some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the embodiments described herein.

Solutions for multi-sensor fusion may be broadly classified into three levels: data level, feature level and decision level. As an alternative to the classical hand-crafted feature design-based approach, learning representations directly from data streams is gaining popularity. The data-driven representations may be learnt by employing representation learning techniques. There has been some work on multi-sensor fusion at the raw data level. Fusion at the feature level has also been utilized in the art, wherein outputs from different sensor signal pipelines, processed individually by deep learning, are fused by a fully connected layer for time series classification.
Deep network models are computationally intensive and complex. In contrast, Dictionary Learning (DL) and Transform Learning (TL) provide a compact representation of data in many scenarios and may perform well in different application domains. Several studies have shown that, between DL and TL techniques, the TL based approach performs better in many scenarios with relatively lower computational complexity. DL has been explored for multi-sensor fusion. In a previous patent application (Application No. 201921035106 filed on 30th August 2019), the Applicant has disclosed a basic TL for Regression (TLR) approach and a Kernel TL for Regression (KTLR) approach for learning non-linear relationships in received data. In the present disclosure, the Applicant leverages advantages of the TL approach for multi-sensor fusion, wherein the input data may be multi-modal since the sensors may be heterogeneous. The Applicant's Application No. 201921035106 provided a single stage approach, wherein a single representation in terms of a transform and its coefficient is learnt for the received data. In the present disclosure, a two-stage approach is employed for better modeling of the sensor data: in the first stage, representations of the individual sensor time series are learnt using dedicated transforms and their associated coefficients; in the second stage, all the representations are fused together using a fusing (common) transform and its associated coefficients to effectively capture correlation between the different sensor representations for deriving an inference. Formulations of both the non-kernel and kernelized versions, i.e., Transform Learning for Fusion (TLF) and Kernel Transform Learning for Fusion (KTLF), are addressed in the present disclosure. The Applicant has further evaluated the performance of the method and system of the present disclosure in comparison with standard DL techniques for regression (where a single transform or dictionary is learnt for the entire multi-variate data) and also with the TLR and KTLR techniques of the Applicant's Application No. 201921035106. A brief description of TL and its kernel variant KTL as known in the art is provided below.

Basic Transform Learning Framework (TL): The relationship between a data matrix X \in \mathbb{R}^{m \times N}, the transform T \in \mathbb{R}^{K \times m} and the corresponding sparse coefficients Z \in \mathbb{R}^{K \times N} may be expressed as

TX = Z (1)

wherein m is the number of features of length N of the raw input data matrix X and K is the number of atoms (size) of the transform T. Given X, the appropriate transform matrix T and sparse coefficients Z are learnt by solving the following optimization problem:

\min_{T,Z} \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T) + \mu\|Z\|_0 (2)

wherein the regularization term \lambda(\|T\|_F^2 - \log\det T) prevents trivial solutions by controlling the condition number of the transform matrix T, and \mu\|Z\|_0 enforces sparsity on the learnt coefficients Z. The minimization in equation (2) is solved for T and Z employing an alternating minimization framework. Z is updated using the following steps:

Z \leftarrow \min_{Z} \|TX - Z\|_F^2 + \mu\|Z\|_0 (3)

Z = (\mathrm{abs}(TX) \geq \mu) \odot TX (4)

wherein the bracketed term hard-thresholds TX against the threshold \mu and \odot denotes the element-wise product. On the other hand, T is updated as:

T \leftarrow \min_{T} \|TX - Z\|_F^2 + \lambda(\|T\|_F^2 - \log\det T) (5)

Cholesky decomposition is used for solving this. It is expressed as:

XX^T + \lambda I = LL^T (6)

wherein L is a lower triangular matrix and L^T denotes the conjugate transpose of L. Singular value decomposition is then applied, which results in:

L^{-1}XZ^T = USV^T (7)

wherein the diagonal entries of S are the singular values, and U and V are the left and right singular vectors of L^{-1}XZ^T respectively. Using the above, the final T update is given as:

T = 0.5\,V(S + (S^2 + 2\lambda I)^{1/2})U^T L^{-1} (8)

The transform thus learnt may be used to carry out classification or regression tasks depending on whether the associated output is discrete or continuous.

Kernel Transform Learning Framework (KTL): To capture the non-linearities in the data, KTL may be employed as:

BK(X,X) = Z (9)

wherein B is the transform and K(X,X) is the kernel matrix, which may be defined upfront unlike in dictionary-based methods, and is expressed as:

K(X,X) = \phi(X)^T \phi(X) (10)

The complete formulation of KTL, imposing sparsity on Z, may be expressed as:

\min_{B,Z} \|BK(X,X) - Z\|_F^2 + \lambda(\|B\|_F^2 - \log\det B) + \mu\|Z\|_0 (11)

The closed form solution updates for B and Z in KTL are identical to those in TL, with the only difference being that the kernelized version of the data K(X,X) is utilized instead of the raw input data X.
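To make the alternating updates of Equations (3)-(8) concrete, the following is a minimal NumPy sketch, not the Applicant's reference implementation; the helper names tl_update_Z and tl_update_T are hypothetical, and the toy example assumes a square transform (K = m). For KTL, the same routines apply with the kernel matrix K(X,X) in place of X.

```python
import numpy as np

def tl_update_Z(T, X, mu):
    # Sparse coefficient update, Eqs. (3)-(4): hard-threshold TX against mu.
    TX = T @ X
    return (np.abs(TX) >= mu) * TX

def tl_update_T(X, Z, lam):
    # Closed-form transform update, Eqs. (5)-(8), via Cholesky + SVD.
    m = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(m))               # Eq. (6)
    L_inv = np.linalg.inv(L)
    U, s, Vt = np.linalg.svd(L_inv @ X @ Z.T, full_matrices=False)  # Eq. (7)
    D = np.diag(s + np.sqrt(s ** 2 + 2.0 * lam))
    return 0.5 * Vt.T @ D @ U.T @ L_inv                             # Eq. (8)

# Alternating minimization of Eq. (2) on toy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200))   # m = 8 features, N = 200 samples
T = rng.random((8, 8))              # random init in [0, 1), as in the disclosure
for _ in range(50):
    Z = tl_update_Z(T, X, mu=0.1)
    T = tl_update_T(X, Z, lam=0.1)
```

Note that tl_update_T also handles non-square transforms (K not equal to m) because the economy-size SVD is used; the same routine is reused for the fusing transform later.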
Referring now to the drawings, and more particularly to FIG.1 through FIG.5B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary system and/or method.

FIG.1 illustrates an exemplary block diagram of a system 100 for multi-sensor fusion using Transform Learning (TL), in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 for multi-sensor fusion includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104, which are hardware processors, can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, graphics controllers, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory. In the context of the present disclosure, the expressions 'processors' and 'hardware processors' may be used interchangeably. In an embodiment, the system for multi-sensor fusion 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like. I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, and can facilitate multiple communications within a wide variety of network (N/W) and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system for multi-sensor fusion 100 can be stored in the memory 102.

FIG.2 is a high-level flow diagram 200 of a method for multi-sensor fusion using TL according to some embodiments of the present disclosure. In an exemplary monitored system such as a Friction Stir Welding (FSW) machine, multiple sensors may be connected for measuring various parameters like force, pressure, and the like. As shown in FIG.2, (X_1, X_2, ..., X_n) represent an input (the plurality of training data referred to later in the description) to the exemplary monitored system, wherein the input is received from the plurality of sensors. (T_1, T_2, ..., T_n) represent sensor specific transforms that are learnt to represent the corresponding sensor data. The sensor specific transforms are appropriately fused using a fusing transform T^(f), which is learnt by utilizing the knowledge of an output (y) or a final inference (the training output referred to later in the description). The flow diagram illustrated in FIG.2 represents a supervised learning framework, where the transforms and the weight matrix (for classification or regression) are learnt in a training phase and later utilized practically for carrying out multi-sensor fusion.

FIG.3A through FIG.3D illustrate an exemplary flow diagram of a computer implemented method 300 for multi-sensor fusion using Transform Learning, in accordance with some embodiments of the present disclosure. In an embodiment, the system for multi-sensor fusion 100 includes one or more data storage devices or memory 102 operatively coupled to the one or more processors 104 and is configured to store instructions configured for execution of steps of the method 300 by the one or more processors 104. The steps of the method 300 will now be explained in detail with reference to the components of the system for multi-sensor fusion 100 of FIG.1 and the flow diagram 200 of FIG.2. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.

Basic Transform Learning Framework for Fusion (TLF): Let the data from the multiple sensors (plurality of sensors) be of length N samples, given by X_1 \in \mathbb{R}^{m_1 \times N}, ..., X_n \in \mathbb{R}^{m_n \times N}, wherein m_1, ..., m_n are the feature lengths of the individual sensors. The output in terms of a regressor may be given as y \in \mathbb{R}^{1 \times N}. Multi-sensor fusion is carried out by learning the transforms and coefficients for each sensor, a fusing transform, its associated fusing coefficient and a weight matrix w together in a joint optimization framework.
Accordingly, in an embodiment of the present disclosure, the one or more processors 104 are configured to receive, at step 302, the plurality of training data (X_1, X_2, ..., X_n) from the plurality of sensors connected to a monitored system with a training output (y). The training data (X_1, X_2, ..., X_n) in the context of the present disclosure is time series data. In an embodiment, the one or more processors 104 are configured to perform, at step 304, a joint optimization of a set of parameters including (i) sensor specific transforms (T_1, T_2, ..., T_n) and (ii) sensor specific coefficients (Z_1, Z_2, ..., Z_n), wherein each of the sensor specific transforms and the sensor specific coefficients correspond to a training data in the plurality of training data (X_1, X_2, ..., X_n), (iii) a fusing transform T^(f), (iv) a fusing coefficient Z^(f), and (v) a weight matrix (w). Without loss of generality, for n sensors, the joint optimization may be expressed as:

\min_{T_1,\dots,T_n,T^{(f)},Z_1,\dots,Z_n,Z^{(f)},w} \|T_1X_1 - Z_1\|_F^2 + \|T_2X_2 - Z_2\|_F^2 + \dots + \|T_nX_n - Z_n\|_F^2 + \lambda_1(\|T_1\|_F^2 - \log\det T_1) + \lambda_2(\|T_2\|_F^2 - \log\det T_2) + \dots + \lambda_n(\|T_n\|_F^2 - \log\det T_n) + \|T^{(f)}[Z_1; Z_2; \dots; Z_n] - Z^{(f)}\|_F^2 + \lambda(\|T^{(f)}\|_F^2 - \log\det T^{(f)}) + \alpha\|y - wZ^{(f)}\|_2^2 (12)

wherein T_1 \in \mathbb{R}^{K_1 \times m_1}, T_2 \in \mathbb{R}^{K_2 \times m_2}, \dots, T_n \in \mathbb{R}^{K_n \times m_n} are the sensor specific transforms, T^{(f)} \in \mathbb{R}^{K \times (K_1 + K_2 + \dots + K_n)} is the fusing transform, Z_1 \in \mathbb{R}^{K_1 \times N}, Z_2 \in \mathbb{R}^{K_2 \times N}, \dots, Z_n \in \mathbb{R}^{K_n \times N} are the sensor specific coefficients, Z^{(f)} \in \mathbb{R}^{K \times N} is the fusing coefficient and w \in \mathbb{R}^{1 \times K} is the weight matrix (here a weight vector), provided the training output y \in \mathbb{R}^{1 \times N}; K_1, ..., K_n and K are the sizes of the sensor specific transforms and the fusing transform respectively, and N is the number of measurements in the training data.

In an embodiment of the present disclosure, the joint optimization comprises initializing and estimating some of the parameters from the set of parameters, followed by iteratively learning all of the parameters. In an embodiment, the sensor specific transforms (T_1, T_2, ..., T_n) and the fusing transform T^(f) are initialized, at step 304a, with a random matrix comprising real numbers between 0 and 1. In an embodiment, the real numbers may be chosen from a uniform distribution. The sensor specific coefficients (Z_1, Z_2, ..., Z_n) are estimated, at step 304b, based on the initialized sensor specific transforms (T_1, T_2, ..., T_n) and a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n), as represented by Equation (1). The fusing coefficient Z^(f) is estimated, at step 304c, based on the initialized fusing transform T^(f) and the estimated sensor specific coefficients (Z_1, Z_2, ..., Z_n). The weight matrix (w) is estimated, at step 304d, based on the training output (y) and the estimated fusing coefficient Z^(f). Further, joint learning is performed iteratively, at step 304e, using the initialized parameters from step 304a and the estimated parameters from steps 304b through 304d in a first iteration, and the learnt parameters thereafter, until a termination criterion is met, to obtain jointly (i) the learnt sensor specific transforms (T_1, T_2, ..., T_n), (ii) the learnt fusing transform T^(f) and (iii) the learnt weight matrix (w) for the monitored system being sensed by the plurality of sensors. A sketch of the initialization and estimation of steps 304a through 304d is given below.
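The following is a minimal NumPy sketch of steps 304a through 304d, under the closed-form choices used in the initialization of Algorithm 1 below (Z_j = T_j X_j, Z^(f) = T^(f) Z', and w = y Z^(f)† with † the pseudo-inverse); the function name tlf_init is hypothetical. Here Xs is the list [X_1, ..., X_n] with X_j of shape (m_j, N), and y has shape (1, N).

```python
import numpy as np

def tlf_init(Xs, y, Ks, K, seed=0):
    # Steps 304a-304d: random transforms in [0, 1), then closed-form estimates.
    rng = np.random.default_rng(seed)
    Ts = [rng.random((Kj, Xj.shape[0])) for Kj, Xj in zip(Ks, Xs)]  # T_j: K_j x m_j
    Tf = rng.random((K, sum(Ks)))                                   # T^(f): K x sum(K_j)
    Zs = [Tj @ Xj for Tj, Xj in zip(Ts, Xs)]                        # Z_j = T_j X_j, Eq. (1)
    Zf = Tf @ np.vstack(Zs)                                         # Z^(f) = T^(f) Z'
    w = y @ np.linalg.pinv(Zf)                                      # w = y Z^(f)+, least squares
    return Ts, Tf, Zs, Zf, w
```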
In an embodiment, the joint learning comprises learning each of the sensor specific transforms (T_1, T_2, ..., T_n), at step 304e-1, based on a corresponding sensor specific coefficient (Z_1, Z_2, ..., Z_n) and the plurality of training data (X_1, X_2, ..., X_n). The learning of each of the sensor specific transforms may be represented as:

T_j \leftarrow \min_{T_j} \|T_jX_j - Z_j\|_F^2 + \lambda_j(\|T_j\|_F^2 - \log\det T_j) for j = 1, ..., n (13)

At step 304e-2, as part of the joint learning, each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) is learnt based on the fusing transform T^(f), a corresponding sensor specific transform (T_1, T_2, ..., T_n), the fusing coefficient Z^(f), a corresponding training data from the plurality of training data (X_1, X_2, ..., X_n), and the remaining sensor specific coefficients (Z_1, Z_2, ..., Z_n). The learning of each of the sensor specific coefficients (Z_1, Z_2, ..., Z_n) may be represented as:

Z_j = (T_j^{(f)T} T_j^{(f)} + I)^{-1}\,(T_jX_j + T_j^{(f)T}(Z^{(f)} - \sum_{i=1, i \neq j}^{n} T_i^{(f)} Z_i)) for j = 1, ..., n (14)

wherein T_j^{(f)} denotes the block of columns of T^{(f)} that multiplies Z_j. At step 304e-3, as part of the joint learning, the fusing transform T^(f) is learnt based on the sensor specific coefficients (Z_1, Z_2, ..., Z_n) and the fusing coefficient Z^(f). At step 304e-4, as part of the joint learning, the fusing coefficient Z^(f) is learnt based on the fusing transform T^(f), the sensor specific coefficients (Z_1, Z_2, ..., Z_n), the weight matrix (w) and the training output (y). The learning of the fusing coefficient Z^(f) may be represented as:

Z^{(f)} = (I + \alpha w^T w)^{-1}\,(T^{(f)}Z' + \alpha w^T y), wherein Z' = [Z_1; Z_2; \dots; Z_n] (15)

At step 304e-5, as part of the joint learning, the weight matrix (w) is learnt based on the fusing coefficient Z^(f) and the training output (y). The learning of the weight matrix (w) may be represented as:

w \leftarrow \min_{w} \alpha\|y - wZ^{(f)}\|_2^2 (16)

In accordance with an embodiment of the present disclosure, the termination criterion is one of (i) completion of a predefined number of iterations (Maxiter) and (ii) the difference between the fusing transform T^(f) of a current iteration and the fusing transform T^(f) of a previous iteration being less than an empirically determined threshold value (Tol). Typically, the empirically determined threshold value is a very low value, e.g. 0.001. A sketch of this joint learning loop is given below.
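The following is a minimal NumPy sketch of one possible realisation of the joint learning of steps 304e-1 through 304e-5 (Equations (13)-(16), with the transform updates of Equations (6)-(8)); it is illustrative only, reuses the hypothetical helper tl_update_T sketched earlier, and assumes w of shape (1, K) and y of shape (1, N).

```python
import numpy as np

def tlf_train(Xs, y, Ts, Tf, Zs, Zf, w, lams, lam_f, alpha,
              max_iter=100, tol=1e-3):
    # Joint learning, steps 304e-1..304e-5; tl_update_T implements Eqs. (6)-(8).
    K = Tf.shape[0]
    offs = np.cumsum([0] + [Z.shape[0] for Z in Zs])  # column blocks of T^(f)
    for _ in range(max_iter):
        Tf_prev = Tf.copy()
        for j, (Xj, lam) in enumerate(zip(Xs, lams)):
            Ts[j] = tl_update_T(Xj, Zs[j], lam)                      # Eq. (13)
            Tfj = Tf[:, offs[j]:offs[j + 1]]                         # block T_j^(f)
            resid = Zf - sum(Tf[:, offs[i]:offs[i + 1]] @ Zs[i]
                             for i in range(len(Zs)) if i != j)
            Zs[j] = np.linalg.solve(Tfj.T @ Tfj + np.eye(Tfj.shape[1]),
                                    Ts[j] @ Xj + Tfj.T @ resid)      # Eq. (14)
        Zp = np.vstack(Zs)                                           # Z'
        Tf = tl_update_T(Zp, Zf, lam_f)                              # Eqs. (6)-(8), X = Z'
        Zf = np.linalg.solve(np.eye(K) + alpha * (w.T @ w),
                             Tf @ Zp + alpha * (w.T @ y))            # Eq. (15)
        w = y @ np.linalg.pinv(Zf)                                   # Eq. (16)
        if np.linalg.norm(Tf - Tf_prev, 'fro') < tol:                # Tol check
            break
    return Ts, Tf, Zs, Zf, w
```

The alternating order mirrors Algorithm 1 below: each sensor's transform and coefficient are refreshed in turn, then the fusing transform, fusing coefficient and weights are updated, and the loop exits on Maxiter or on the Frobenius-norm change of T^(f) falling below Tol.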
Having obtained jointly (i) the learnt sensor specific transforms (T_1, T_2, ..., T_n) in Equation (13), (ii) the learnt fusing transform T^(f) based on Equations (6)-(8) (refer step 10 in Algorithm 1 below, wherein X = Z') and (iii) the learnt weight matrix (w) in Equation (16) for the monitored system being sensed by the plurality of sensors, in an embodiment of the present disclosure, the one or more processors 104 are configured to estimate, at step 306, an output (y_new) of the monitored system for a plurality of new data (x_1, x_2, ..., x_n). In the context of the present disclosure, the new data is also time series data. In an embodiment, the step of estimating the output (y_new) of the monitored system represents application of the method 300 of the present disclosure to the monitored system using new data which is different from the training data (X_1, X_2, ..., X_n) that was used in the earlier steps. In an embodiment, at step 306a, the one or more hardware processors are configured to receive the plurality of new data (x_1, x_2, ..., x_n) from the plurality of sensors connected to the monitored system.

The sensor specific coefficients (z_1, z_2, ..., z_n) corresponding to the plurality of new data (x_1, x_2, ..., x_n) are estimated, at step 306b, using the plurality of new data (x_1, x_2, ..., x_n) and the learnt sensor specific transforms (T_1, T_2, ..., T_n). The estimation of the sensor specific coefficients (z_1, z_2, ..., z_n) corresponding to the plurality of new data (x_1, x_2, ..., x_n) may be represented as:

z_j = T_j x_j, for j = 1, ..., n (17)

A new fusing coefficient z^(f) is then estimated, at step 306c, using the learnt fusing transform T^(f) and the estimated sensor specific coefficients (z_1, z_2, ..., z_n). The estimation of the new fusing coefficient z^(f) may be represented as:

z^{(f)} = T^{(f)}[z_1; z_2; \dots; z_n] (18)

The output (y_new) is estimated, at step 306d, for the monitored system based on the learnt weight matrix (w) and the estimated new fusing coefficient z^(f). The estimation of the output (y_new) for the monitored system may be represented as:

y_new = wz^{(f)} (19)

A sketch of this estimation phase is provided after Algorithm 1 below.

Kernel Transform Learning Framework for Fusion (KTLF): To capture complex non-linear relationships in the data, in accordance with an embodiment of the present disclosure, the plurality of training data (X_1, X_2, ..., X_n) and the plurality of new data (x_1, x_2, ..., x_n) are kernelized using a kernel function. The kernel function may be a radial basis function, a polynomial kernel, and the like. In an embodiment, a kernel version of the joint optimization of Equation (12) may be represented as:

\min_{B_1,\dots,B_n,T^{(f)},Z_1,\dots,Z_n,Z^{(f)},w} \|B_1K(X_1,X_1) - Z_1\|_F^2 + \|B_2K(X_2,X_2) - Z_2\|_F^2 + \dots + \|B_nK(X_n,X_n) - Z_n\|_F^2 + \lambda_1(\|B_1\|_F^2 - \log\det B_1) + \lambda_2(\|B_2\|_F^2 - \log\det B_2) + \dots + \lambda_n(\|B_n\|_F^2 - \log\det B_n) + \|T^{(f)}[Z_1; Z_2; \dots; Z_n] - Z^{(f)}\|_F^2 + \lambda(\|T^{(f)}\|_F^2 - \log\det T^{(f)}) + \alpha\|y - wZ^{(f)}\|_2^2 (20)

The closed form solution updates of Equations (13) and (14) for B and Z in the kernelized version remain the same, with the only difference being that the kernelized version of the input data K(X_j, X_j) for j = 1, ..., n is utilized. It may be noted that the updates for the fusing transform T^(f) and the fusing coefficient Z^(f) also remain the same as in the TLF. The sensor specific coefficients (z_1, z_2, ..., z_n) corresponding to the plurality of new data (x_1, x_2, ..., x_n) in the kernelized version may be represented as:

z_j = B_j K(x_j, X_j), for j = 1, ..., n (21)

The new fusing coefficient z^(f) in the kernelized version may be represented as:

z^{(f)} = T^{(f)}[z_1; z_2; \dots; z_n] (22)

The output (y_new) for the monitored system in the kernelized version may be represented as:

y_new = wz^{(f)} (23)

The pseudocode for the TLF and KTLF algorithms (using n = 3) explained herein above is presented in Algorithm 1 below.

Algorithm 1: Transform and Kernel Transform Learning for Fusion (TLF or KTLF)
Input: Set of training data (X_1, X_2, X_3), training output (y), sizes of transforms (atoms) K_1, K_2, K_3, hyperparameters \lambda_1, \lambda_2, \lambda_3, \alpha, \lambda, predefined number of iterations (Maxiter), kernel function K and new data for which the output needs to be estimated (x_1, x_2, x_3).
Output: (i) Learnt sensor specific transforms (T_1, T_2, T_3) or (B_1, B_2, B_3), (ii) the learnt fusing transform T^(f), (iii) the learnt weight matrix (w) and the estimated output (y_new).
Initialization: Set the sensor specific transforms (T_1, T_2, T_3) and the fusing transform T^(f) to a random matrix comprising real numbers between 0 and 1 drawn from a uniform distribution.
Z_1 = T_1X_1 or B_1K(X_1,X_1), Z_2 = T_2X_2 or B_2K(X_2,X_2), Z_3 = T_3X_3 or B_3K(X_3,X_3), Z^(f) = T^(f)Z', w = yZ^(f)† († denotes the pseudo-inverse) and iteration i = 1
1: procedure
2: loop: repeat until convergence (or Maxiter)
3: Z_1i ← update using Equation (14) with T_1i (or B_1i)
4: T_1i (or B_1i) ← update using Equations (6)-(8) with X = X_1 (or K(X_1,X_1))
5: Z_2i ← update using Equation (14) with T_2i (or B_2i)
6: T_2i (or B_2i) ← update using Equations (6)-(8) with X = X_2 (or K(X_2,X_2))
7: Z_3i ← update using Equation (14) with T_3i (or B_3i)
8: T_3i (or B_3i) ← update using Equations (6)-(8) with X = X_3 (or K(X_3,X_3))
9: Z_i^(f) ← update using Equation (15) with T_i^(f)
10: T_i^(f) ← update using Equations (6)-(8) with X = Z'
11: w_i ← yZ_i^(f)†
12: i ← i + 1
13: if ‖T_i^(f) − T_(i-1)^(f)‖_F < Tol, exit loop
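To illustrate the estimation phase of Equations (17)-(19) and its kernelized counterpart of Equations (21)-(23), the following is a minimal NumPy sketch; the function names tlf_predict and rbf_kernel are hypothetical, and the RBF (radial basis function) kernel is just one of the kernel choices mentioned above.

```python
import numpy as np

def tlf_predict(xs, Ts, Tf, w):
    # Estimation phase, Eqs. (17)-(19); columns of each x_j are new samples.
    zs = [Tj @ xj for Tj, xj in zip(Ts, xs)]  # z_j = T_j x_j,          Eq. (17)
    zf = Tf @ np.vstack(zs)                   # z^(f) = T^(f)[z_1;...], Eq. (18)
    return w @ zf                             # y_new = w z^(f),        Eq. (19)

def rbf_kernel(X1, X2, gamma=1.0):
    # RBF kernel matrix with samples as columns. For KTLF, training uses
    # K(X_j, X_j) in place of X_j, and prediction replaces Eq. (17) with
    # z_j = B_j @ rbf_kernel(X_j, x_j), per Eq. (21).
    sq = (np.sum(X1 ** 2, axis=0)[:, None] + np.sum(X2 ** 2, axis=0)[None, :]
          - 2.0 * X1.T @ X2)
    return np.exp(-gamma * sq)
```

Because the kernel matrix is defined upfront (Eq. (10)), the kernelized prediction requires keeping the training data X_j alongside the learnt B_j, whereas the non-kernel TLF needs only the learnt transforms and weights.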

Documents

Application Documents

# Name Date
1 202021036163-STATEMENT OF UNDERTAKING (FORM 3) [21-08-2020(online)].pdf 2020-08-21
2 202021036163-PROVISIONAL SPECIFICATION [21-08-2020(online)].pdf 2020-08-21
3 202021036163-FORM 1 [21-08-2020(online)].pdf 2020-08-21
4 202021036163-DRAWINGS [21-08-2020(online)].pdf 2020-08-21
5 202021036163-DECLARATION OF INVENTORSHIP (FORM 5) [21-08-2020(online)].pdf 2020-08-21
6 202021036163-FORM-26 [12-11-2020(online)].pdf 2020-11-12
7 202021036163-Proof of Right [19-02-2021(online)].pdf 2021-02-19
8 202021036163-FORM 3 [20-08-2021(online)].pdf 2021-08-20
9 202021036163-FORM 18 [20-08-2021(online)].pdf 2021-08-20
10 202021036163-ENDORSEMENT BY INVENTORS [20-08-2021(online)].pdf 2021-08-20
11 202021036163-DRAWING [20-08-2021(online)].pdf 2021-08-20
12 202021036163-COMPLETE SPECIFICATION [20-08-2021(online)].pdf 2021-08-20
13 202021036163-ASSIGNMENT DOCUMENTS [20-08-2021(online)].pdf 2021-08-20
14 202021036163-8(i)-Substitution-Change Of Applicant - Form 6 [20-08-2021(online)].pdf 2021-08-20
15 202021036163-Request Letter-Correspondence [08-11-2021(online)].pdf 2021-11-08
16 202021036163-Power of Attorney [08-11-2021(online)].pdf 2021-11-08
17 202021036163-Form 1 (Submitted on date of filing) [08-11-2021(online)].pdf 2021-11-08
18 202021036163-Covering Letter [08-11-2021(online)].pdf 2021-11-08
19 202021036163-CERTIFIED COPIES TRANSMISSION TO IB [08-11-2021(online)].pdf 2021-11-08
20 202021036163-CORRESPONDENCE(IPO)-(CERTIFIED COPY OF WIPO DAS)-(9-12-2021).pdf 2021-12-21
21 Abstract1.jpg 2022-06-01
22 202021036163-FORM 3 [07-07-2022(online)].pdf 2022-07-07
23 202021036163-FORM-26 [08-11-2022(online)].pdf 2022-11-08
24 202021036163-FER.pdf 2024-01-17
25 202021036163-FORM 3 [04-04-2024(online)].pdf 2024-04-04
26 202021036163-OTHERS [17-06-2024(online)].pdf 2024-06-17
27 202021036163-FER_SER_REPLY [17-06-2024(online)].pdf 2024-06-17
28 202021036163-COMPLETE SPECIFICATION [17-06-2024(online)].pdf 2024-06-17
29 202021036163-CLAIMS [17-06-2024(online)].pdf 2024-06-17

Search Strategy

1 SearchStrategy202021036163E_15-01-2024.pdf