
Method and System for Forecasting Deformation Maps from SAR Images Using Multi-Scale Attention Guided RNN

Abstract: This disclosure relates generally to a method and system for forecasting deformation maps from SAR images using a multi-scale attention guided RNN. Deformation monitoring and prediction frameworks are critical for creating early warning systems for abnormal events. The proposed method preprocesses a plurality of acquired SAR images to generate a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps. Further, the deformation forecasting network is trained with the DInSAR time series training data to forecast future deformation maps for the SAR test data using a multi-scale feature sampler. The trained network performs optimization on the RNN with an activation function and a network loss function. Further, the present disclosure achieves minimal prediction error, with high reliability, when compared to the observed deformation maps.


Patent Information

Application #
Filing Date
21 September 2021
Publication Number
12/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
kcopatents@khaitanco.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-05-16
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai Maharashtra India 400021

Inventors

1. GUBBI LAKSHMINARASIMHA, Jayavardhana Rama
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore Karnataka India 560066
2. KATHIRVEL, Ram Prabhakar
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore Karnataka India 560066
3. NUKALA, Veera Harikrishna
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore Karnataka India 560066
4. NAYAK, Madhumita
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore Karnataka India 560066
5. PURUSHOTHAMAN, Balamuralidhar
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village, Bangalore Karnataka India 560066

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
TITLE
METHOD AND SYSTEM FOR FORECASTING DEFORMATION MAPS FROM SAR IMAGES USING MULTI-SCALE ATTENTION GUIDED RNN
Applicant
Tata Consultancy Services Limited, a company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD [001] The disclosure herein generally relates to forecasting of synthetic aperture radar (SAR) images, and, more particularly, to a method and system for forecasting deformation maps from SAR images using a multi-scale attention guided recurrent neural network (RNN).
BACKGROUND
[002] Remote sensing offers a vast ability to observe the earth for natural and other human activities. The data collected over different time duration enables a plethora of temporal analysis-based applications like land deformation change detection. Among several techniques available for land deformation analysis, Interferometric Synthetic Aperture Radar (InSAR) is an effective and robust method. Synthetic Aperture Radar (SAR) is widely used in military, disaster monitoring and other fields due to its advantages of all weather and long-distance detection, multiple angles and multiple resolutions, for detecting and positioning different targets. Many advantages offered by SAR, such as visibility through day, night and different weather conditions, combined with increased temporal frequency, make it an ideal candidate for land deformation analysis. Deformation monitoring and prediction frameworks are critical for creating early warning systems for abnormal events. Such forecasting facilitates quick countermeasure to avoid undesirable conditions.
[003] Conventional methods such as a hyperbolic model and a Markov model-based approach predict ground subsidence using InSAR. Dependency on prior knowledge and non-applicability to different temporal patterns are some of the limitations of such methods. Further, deep learning solves complex problems by learning a hierarchically structured model. Applications of deep learning techniques across various tasks such as object detection, classification, and pattern recognition have achieved significant results. Convolutional neural network-based methods have been used to monitor changes in deformations and have shown better modelling capacity than rule-based prediction systems. In another method, deformations have been forecasted using a standard time series forecasting tool and

compared against simple extrapolation methods for Sentinel-1 InSAR data. The existing data-driven models have been shown to perform well for seasonal signals, but the forecast quality decreases for less seasonal signals. Also, their prediction quality improves with increasing amounts of training data, but obtaining large training data is a laborious task.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method and system for forecasting deformation maps from SAR images using multi-scale attention guided RNN is provided. The system includes preprocessing a plurality of SAR images acquired by means of a SAR sensor. The plurality of preprocessed SAR images is utilized to generate a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period. The DInSAR time series training data is utilized to train a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function. Furthermore, a SAR test data is inputted to the trained deformation forecasting network to process and forecast future deformation maps.
[005] In one embodiment, training the deformation forecasting network performs the steps of splitting the plurality of SAR images for grouping into at least one of a multi-scale feature sampler and feeding each group independently to a feature encoder corresponding to the multi-scale feature sampler. Further, a plurality of feature sets is extracted from each feature encoder of the multi-scale feature sampler. Then, the plurality of feature sets is concatenated into a single feature vector, and the single feature vector is fed to each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set. Then, each attention weight vector is multiplied with the corresponding feature set to obtain a plurality of attention multiplied signals. Further, the concatenation of the plurality of attention multiplied signals is fed into a long short term memory (LSTM) recurrent cell to generate the DInSAR time series deformation maps using the decoder by combining the LSTM recurrent cell output with the plurality of feature sets.
[006] In another aspect, a method for forecasting deformation maps from SAR images using multi-scale attention guided RNN is provided. The method includes preprocessing a plurality of SAR images acquired by means of a SAR sensor. The plurality of preprocessed SAR images is utilized to generate a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period. The DInSAR time series training data is utilized to train a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function. Furthermore, a SAR test data is inputted to the trained deformation forecasting network to process and forecast future deformation maps.
[007] In one embodiment, training the deformation forecasting network performs the steps of splitting the plurality of SAR images for grouping into at least one of a multi-scale feature sampler and feeding each group independently to a feature encoder corresponding to the multi-scale feature sampler. Further, a plurality of feature sets is extracted from each feature encoder of the multi-scale feature sampler. Then, the plurality of feature sets is concatenated into a single feature vector, and the single feature vector is fed to each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set. Then, each attention weight vector is multiplied with the corresponding feature set to obtain a plurality of attention multiplied signals. Further, the concatenation of the plurality of attention multiplied signals is fed into a long short term memory (LSTM) recurrent cell to generate the DInSAR time series deformation maps using the decoder by combining the LSTM recurrent cell output with the plurality of feature sets.
[008] In yet another aspect, a non-transitory computer readable medium is provided for preprocessing a plurality of SAR images acquired by means of a SAR sensor. The plurality of preprocessed SAR images is utilized to generate a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period. The DInSAR time series training data is utilized to train a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function. Furthermore, a SAR test data is inputted to the trained deformation forecasting network to process and forecast future deformation maps.
[009] In one embodiment, training the deformation forecasting network performs the steps of splitting the plurality of SAR images for grouping into at least one of a multi-scale feature sampler and feeding each group independently to a feature encoder corresponding to the multi-scale feature sampler. Further, a plurality of feature sets is extracted from each feature encoder of the multi-scale feature sampler. Then, the plurality of feature sets is concatenated into a single feature vector, and the single feature vector is fed to each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set. Then, each attention weight vector is multiplied with the corresponding feature set to obtain a plurality of attention multiplied signals. Further, the concatenation of the plurality of attention multiplied signals is fed into a long short term memory (LSTM) recurrent cell to
[010] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[012] FIG.1 illustrates a deformation forecasting system to forecast deformation maps from synthetic aperture radar (SAR) images, according to some embodiments of the present disclosure.
[013] FIG. 2 illustrates a functional block diagram to forecast deformation maps by inputting a differential interferometric SAR (DInSAR) time series data using the system of FIG.1, according to some embodiments of the present disclosure.
[014] FIG. 3 illustrates a flow diagram to forecast deformation maps for the differential interferometric SAR (DInSAR) time series data using a trained deformation forecasting network using the system of FIG.1, according to some embodiments of the present disclosure.
[015] FIG.4A and FIG.4B illustrate an experimental graph representing a qualitative comparison of deformation maps having aperiodic signals at randomly chosen locations using the system of FIG.1, according to some embodiments of the present disclosure.
[016] FIG.4C and FIG.4D illustrate an experimental graph representing a qualitative comparison of deformation maps having periodic signals at randomly chosen locations using the system of FIG.1, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [017] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are

possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[018] Embodiments herein provide a method and system for forecasting deformation maps from SAR images using a multi-scale attention guided RNN. The disclosed method enables forecasting time series deformation maps from synthetic aperture radar (SAR) images using a trained deformation forecasting network. The present disclosure provides an efficient attention-guided recurrent neural network which forecasts deformation maps in advance with the trained network based on univariate and multivariate time series signals. The deep neural network learns to extract effective representations adaptively with the trained attention-guided multi-scale RNN to predict deformation maps from each SAR image. This approach achieves minimal prediction error, with high reliability, when compared to the observed deformation maps. Experimental results indicate superiority in forecasting deformation maps with high accuracy compared to existing state-of-the-art approaches.
[019] Referring now to the drawings, and more particularly to FIG. 1 through FIG.4D, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[020] FIG. 1 illustrates a deformation forecasting system to forecast deformation maps from synthetic aperture radar (SAR) images, according to some embodiments of the present disclosure. In an embodiment, the system 100 includes processor(s) 104, communication interface(s), alternatively referred to as input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the processor(s) 104. The system 100, with the processor(s), is configured to execute functions of one or more functional blocks of the system 100. Referring to the components of the system 100, in an embodiment, the processor(s) 104 can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one

or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 104 is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud, and the like.
[021] The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface (s) 106 can include one or more ports for connecting a number of devices (nodes) of the system 100 to one another or to another server.
[022] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The modules 108 can be an Integrated Circuit (IC) (not shown), external to the memory 102, implemented using a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The names (or expressions or terms) of the modules of the functional block within the modules 108 referred to herein are used for explanation and are not to be construed as limitation(s). The modules 108 include the time series predictor module 110 for processing of a plurality of inputs received from one or more external sources. Further, the memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
[023] FIG. 2 illustrates a functional block diagram to forecast deformation maps by inputting a differential interferometric SAR (DInSAR) time series data

using the system of FIG.1, according to some embodiments of the present disclosure. FIG.2 depicts the deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence. The RNN employs a multi-scale attention mechanism to identify vital temporal features that influence subsequent time series deformation maps. The multi-scale feature extraction component learns the varying patterns of the multi-scale feature samplers, comprising (i) a long-term signal data, (ii) a mid-range signal data, and (iii) a short-term signal data. Each multi-scale feature sampler considers SAR image data for a predefined interval of the time series. The attention guiding component predicts attention maps for at least one of the multi-scale feature samplers and multiplies the attention maps with the corresponding feature maps. The attention-multiplied encoder features are concatenated along the feature dimension and passed to the recurrent neural network (RNN). The RNN then learns to model the long- and short-term deformation patterns.
[024] FIG. 3 illustrates a flow diagram to forecast deformation maps for the differential interferometric SAR (DInSAR) time series data using a trained deformation forecasting network using the system of FIG.1, according to some embodiments of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 300 by the processor(s) or one or more hardware processors 104. The steps of the method 300 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG.1 and FIG.2 and the steps of flow diagram as depicted in FIG.3. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps to be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.

[025] Referring now to the steps of the method 300, at step 302, the one or more hardware processors 104 preprocess a plurality of SAR images acquired by means of a SAR sensor. As an example, Sentinel-1 imagery captures the plurality of SAR images of each location for a defined time period. The plurality of SAR images is raw data obtained from the open-source LiCSAR Sentinel-1 imagery tool. Here, each pair of SAR images is co-registered to generate unwrapped interferograms with coherence masking. The generated coherence masked map contains flat-earth and topographic phase components, which are removed.
[026] Referring now to the steps of the method 300, at step 304, the one or more hardware processors 104 generate, from the plurality of SAR images, a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period. Continuing from the above step (FIG.2), the resultant signal of the generated coherence map still has its phase wrapped, which is unwrapped after the differential interferometric (DInSAR) phase calculation. Finally, the phase-unwrapped signal is converted into a displacement signal to generate the DInSAR time series training data, which comprises time series deformation maps of size 1393×1198×92, where the last dimension denotes the observed time duration.
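As a rough sketch of the final conversion step, the unwrapped phase can be mapped to line-of-sight displacement. The disclosure does not give the formula; the standard DInSAR relation d = -λφ/(4π) is assumed here, along with an approximate Sentinel-1 C-band wavelength, and the function name is illustrative.

```python
import numpy as np

# Approximate Sentinel-1 C-band radar wavelength in metres (~5.55 cm); an assumption.
WAVELENGTH = 0.0555

def phase_to_displacement(unwrapped_phase, wavelength=WAVELENGTH):
    """Convert an unwrapped DInSAR phase map (radians) to line-of-sight
    displacement (metres) using the standard relation d = -lambda * phi / (4*pi)."""
    return -wavelength * np.asarray(unwrapped_phase) / (4.0 * np.pi)

# A small synthetic stack shaped like the training data described above:
# rows x cols x time steps (a tiny stand-in for 1393 x 1198 x 92).
phase_stack = np.zeros((4, 4, 3))
phase_stack[..., 1] = 4.0 * np.pi   # one full phase cycle at time step 1
disp = phase_to_displacement(phase_stack)
```

With one full 4π phase cycle, the displacement at that time step equals exactly one radar wavelength (moving away from the sensor).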
[027] Referring now to the steps of the method 300, at step 306, the one or more hardware processors 104 train, using the DInSAR time series training data, a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function. The deformation forecasting network is trained with the SAR training data to forecast time series deformation signals ahead of time. The training steps are as follows:
Step 1 - The SAR training data is split and grouped into at least one multi-scale feature sampler, and each group is then independently input to a feature encoder corresponding to the multi-scale feature sampler. For example, the SAR training data has 62-dimensional data represented as (X : [t0, t1, ..., t61]), and the multi-scale feature samplers are categorized into three types: (i) a long-term signal data, (ii) a mid-range signal data, and (iii) a short-term signal data. The 62-dimensional data is grouped such that the long-term signal data covers the complete SAR training data from t0, ..., t61. The mid-range signal data covers only the last thirty samples from t31, ..., t61, and the short-term signal data covers only the last ten data points from t52, ..., t61.
Each scale of signal (Xi) is fed to a separate feature encoder (Ne) to extract three different levels of feature sets, as represented in equation 1:
fi = Ne(Xi), ∀ i = 1, 2, 3          ... equation (1)
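Step 1's multi-scale sampling can be sketched as below. The slice boundaries are illustrative: a strict last-thirty slice of a 62-point series starts at t32, while the text's example lists t31, ..., t61.

```python
import numpy as np

def multi_scale_sample(x, mid=30, short=10):
    """Split a deformation time series into the three scales of Step 1:
    long-term (the full series), mid-range (last `mid` samples) and
    short-term (last `short` samples)."""
    x = np.asarray(x)
    x1 = x              # long-term: the complete series t0 .. t61
    x2 = x[-mid:]       # mid-range: last thirty samples
    x3 = x[-short:]     # short-term: last ten samples
    return x1, x2, x3

series = np.arange(62)  # stand-in for X : [t0, t1, ..., t61]
x1, x2, x3 = multi_scale_sample(series)
```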
Step 2 - Further, a plurality of feature sets is extracted from each feature encoder of the multi-scale feature sampler, wherein the plurality of feature sets comprises a first feature set, a second feature set, and a third feature set. The first feature set (f1) is extracted from the long-term signal data of the multi-scale feature sampler. The second feature set (f2) is extracted from the mid-range signal data of the multi-scale feature sampler. The third feature set (f3) is extracted from the short-term signal data of the multi-scale feature sampler.
Step 3 - The plurality of feature sets is concatenated into a single feature vector and fed into each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set. While (f1) captures the long-term signal data having regular and predictable changes occurring every predefined time period observed globally, (f2) captures the mid-range signal features and (f3) captures the short-term signal data with recent trends. The (Ne) network consists of five dense layers with [64, 128, 256, 128, 64] filters and the Leaky Rectified Linear Unit (LeakyReLU) activation function. The extracted features are concatenated and passed into three different attention modules (Na) to predict attention weight vectors (ai) for each of the three feature sets (fi), as denoted below in equation 2:
ai = Na(CONCAT(f1, f2, f3)), ∀ i = 1, 2, 3          ... equation (2)
Step 4 - Each attention weight vector is multiplied with the corresponding feature set to obtain a plurality of attention multiplied signals. The attention mechanism identifies interweaving patterns critical for the prediction of future deformation maps. The predicted attention maps are multiplied with the encoder features (fi) to boost the desirable signals and suppress the redundant ones, as denoted below in equation 3:
f'i = ai · fi, ∀ i = 1, 2, 3          ... equation (3)
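Steps 3 and 4 (equations 2 and 3) can be sketched with toy, randomly initialised stand-ins for the encoder Ne and attention modules Na. The softmax normalisation, weight scales, and per-layer shapes are assumptions; the disclosure specifies only the dense-layer widths and LeakyReLU activation.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def encoder(x, widths=(64, 128, 256, 128, 64)):
    """Toy stand-in for Ne: five dense layers with LeakyReLU activations.
    Weights are random here; in the disclosure they are learned."""
    h = np.atleast_1d(x).astype(float)
    for w in widths:
        W = rng.standard_normal((w, h.shape[0])) * 0.05
        h = leaky_relu(W @ h)
    return h

def attention_weights(concat, out_dim):
    """Toy stand-in for one attention module Na: a dense layer followed by
    a softmax so each weight vector sums to one (the exact normalisation
    used in the disclosure is not specified)."""
    W = rng.standard_normal((out_dim, concat.shape[0])) * 0.05
    z = W @ concat
    e = np.exp(z - z.max())
    return e / e.sum()

# Three input scales (see Step 1), encoded into three feature sets.
x1, x2, x3 = np.arange(62.0), np.arange(32.0, 62.0), np.arange(52.0, 62.0)
f = [encoder(x) for x in (x1, x2, x3)]
concat = np.concatenate(f)                                  # single feature vector
a = [attention_weights(concat, fi.shape[0]) for fi in f]    # equation (2)
f_att = [ai * fi for ai, fi in zip(a, f)]                   # equation (3)
```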
Step 5 - The plurality of attention multiplied signals is concatenated, and the concatenated attention multiplied signals are fed into a long short-term memory (LSTM) recurrent cell. The attention multiplied signals (f'i) are concatenated in the feature dimension and fed to an LSTM recurrent cell having 64 units with the Leaky Rectified Linear Unit (LeakyReLU) activation function.
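One step of the recurrent cell can be sketched as below, assuming the classic sigmoid/tanh gated LSTM; the disclosure's 64-unit cell with LeakyReLU activation is not fully specified, so this standard formulation is a stand-in.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One step of a standard LSTM recurrent cell: input, forget and output
    gates plus a candidate cell state."""
    Wi, Wf, Wo, Wg = params
    xh = np.concatenate([x, h_prev])
    i = sigmoid(Wi @ xh)    # input gate
    f = sigmoid(Wf @ xh)    # forget gate
    o = sigmoid(Wo @ xh)    # output gate
    g = np.tanh(Wg @ xh)    # candidate cell state
    c = f * c_prev + i * g  # new cell state
    h = o * np.tanh(c)      # new hidden state
    return h, c

rng = np.random.default_rng(1)
units, in_dim = 64, 192           # 192 = three concatenated 64-dim feature sets
params = [rng.standard_normal((units, in_dim + units)) * 0.05 for _ in range(4)]
x = rng.standard_normal(in_dim)   # concatenated attention-multiplied features
h, c = lstm_step(x, np.zeros(units), np.zeros(units), params)
```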
Step 6 - Generating, by a decoder, the time series deformation maps by combining the LSTM recurrent cell output with the plurality of feature sets. To increase the forecasting accuracy, the LSTM output features are combined with the input encoder features to generate future deformation maps (Y : t62 to t91) using the decoder (Nd), as described below in equation 4:
Y = Nd(CONCAT(f1, f2, f3, LSTM(f'1, f'2, f'3)))          ... equation (4)
The final dense layer in the decoder has 30 units with a linear activation function.
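The decoder step of equation 4 can be sketched as below with randomly initialised illustrative weights; only the final 30-unit linear layer is taken from the text, and the hidden width is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def decoder(f1, f2, f3, lstm_out, horizon=30):
    """Toy stand-in for the decoder Nd: the encoder feature sets are
    concatenated with the LSTM output (a skip connection), and a final
    dense layer with 30 units and linear activation emits the forecast
    Y : t62 .. t91."""
    h = np.concatenate([f1, f2, f3, lstm_out])
    W1 = rng.standard_normal((64, h.shape[0])) * 0.05   # illustrative hidden layer
    h = leaky_relu(W1 @ h)
    W2 = rng.standard_normal((horizon, 64)) * 0.05
    return W2 @ h                                       # linear final activation

f1 = f2 = f3 = np.ones(64)
lstm_out = np.ones(64)
y = decoder(f1, f2, f3, lstm_out)
```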
[028] Further, the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function. The network loss function is computed using the mean square loss (l2) between the predicted (Y) and the ground truth (Y') deformation time series data to train the network, as described below in equation 5:
Loss = l2(Y, Y')          ... equation (5)
The loss function is used to train the deformation forecasting network in an end-to-end fashion. The network is trained with the Adam optimizer for 1000 epochs with a learning rate of 10^-5.
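The loss and optimizer of this paragraph can be sketched end-to-end on a toy one-parameter model. The Adam update follows the standard rule; the learning rate is raised from the disclosure's 10^-5 so that this toy example converges within 1000 steps.

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean square (l2) loss of equation (5)."""
    return np.mean((y_pred - y_true) ** 2)

def adam_update(w, grad, state, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step (the disclosure trains with Adam; defaults assumed)."""
    state['t'] += 1
    state['m'] = b1 * state['m'] + (1 - b1) * grad
    state['v'] = b2 * state['v'] + (1 - b2) * grad ** 2
    m_hat = state['m'] / (1 - b1 ** state['t'])
    v_hat = state['v'] / (1 - b2 ** state['t'])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Tiny end-to-end illustration: fit y = w * x toward the true w = 2.
rng = np.random.default_rng(3)
x = rng.standard_normal(100)
y_true = 2.0 * x
w = 0.0
state = {'t': 0, 'm': 0.0, 'v': 0.0}
losses = []
for _ in range(1000):                           # disclosure trains for 1000 epochs
    grad = np.mean(2 * (w * x - y_true) * x)    # d(MSE)/dw
    w = adam_update(w, grad, state, lr=1e-2)    # larger lr for this toy problem
    losses.append(mse_loss(w * x, y_true))
```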
[029] Referring now to the steps of the method 300, at step 308, the one or more hardware processors 104 input a SAR test data to the trained deformation forecasting network to process and forecast future deformation maps.

[030] FIG.4A and FIG.4B illustrate an experimental graph representing a qualitative comparison of deformation maps having aperiodic signals at randomly chosen locations using the system of FIG.1, according to some embodiments of the present disclosure. During testing, the performance of the trained network is evaluated on 1000 randomly chosen locations that were not part of the training. The qualitative results for four randomly chosen points from the 1000 data points, marked in FIG.4A and FIG.4B, correspond to irregular signals among the chosen points. The experimental graphs show the input deformation signal captured from May 2015 to April 2018, with the approach of the present disclosure compared against Hill et al. over May 2018 to April 2019. As observed from the results, the method of the present disclosure is robust and models both periodic and aperiodic signals well with negligible variance. Referring now to FIG.4C, the Hill et al. approach has failed to capture the seasonal trend; instead, it has predicted a constant signal with perturbations. The ground truth deformation trend is used to examine the prediction quality for the regular, predictable signals, and it is observed that the proposed approach exhibits superior modeling capacity compared to the existing Hill et al. approach, which introduces an abrupt discontinuity between the monitored and forecasted deformation signals. In contrast, with the help of the X3 signal, the method of the present disclosure accurately follows the immediate trend and generates a smooth transition between monitored and predicted deformation signals.
[031] FIG.4C and FIG.4D illustrate an experimental graph representing a qualitative comparison of deformation maps having periodic signals at randomly chosen locations using the system of FIG.1, according to some embodiments of the present disclosure. In one embodiment, the quantitative comparison between the method of the present disclosure and the existing state-of-the-art method of Hill et al., together with several baseline ablation models, on 1000 randomly selected locations disjoint from the DInSAR time series training data is described in Table 1.

Table 1 : Quantitative results on ablation analysis

Models                             PSNR (dB) ↑    Error Variance (mm) ↓
Hill et al.                        66.9300        4.2
MLP                                76.0875        3.8
Without LSTM cell                  76.4209        3.1
Without LSTM cell and attention    77.6803        2.9
f1 + LSTM cell                     73.7096        3.2
f2 + LSTM cell                     75.2721        3.2
f3 + LSTM cell                     75.3684        4.1
Deformation forecasting network    79.4987        2.8
For comparison, the peak signal-to-noise ratio (PSNR) between the forecasted and ground truth deformation maps is computed. The error variance in millimetres indicates the error fluctuation in the predicted scores. Hill et al.'s method achieves not only a low PSNR score but also the highest error variance, while the multi-layer perceptron network achieves a higher PSNR. Similarly, the network without the LSTM module is highly influenced by the short-term signals and fails to model the long-term seasonal changes. As expected, the ablation without the attention module and LSTM achieves roughly 2 dB lower PSNR than the full network. An experiment without the X3 signal achieves 1.3 dB lower PSNR than the full model, indicating the importance of dividing the signals into multiple scales. Further, each of the three skip connections is removed in turn for experimental analysis. All three experiments show a drop in PSNR, demonstrating the necessity of the skip connections.
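The PSNR metric reported in Table 1 can be computed as below; the peak-value convention is an assumption, since the disclosure does not state which dynamic range it uses.

```python
import numpy as np

def psnr(y_pred, y_true, peak=None):
    """Peak signal-to-noise ratio in dB between forecasted and ground-truth
    deformation maps: 10 * log10(peak^2 / MSE). `peak` defaults to the
    ground-truth dynamic range (an assumed convention)."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    mse = np.mean((y_pred - y_true) ** 2)
    if peak is None:
        peak = y_true.max() - y_true.min()
    return 10.0 * np.log10(peak ** 2 / mse)

truth = np.array([0.0, 1.0, 2.0, 3.0])   # toy deformation values (mm)
noisy = truth + 0.1                      # forecast with uniform 0.1 mm error
```

For this toy case the peak is 3.0 and the MSE is 0.01, giving 10·log10(900) ≈ 29.54 dB.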
[032] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.

[033] The embodiments of the present disclosure herein address the unresolved problem of forecasting time series deformation maps. The embodiments thus provide an efficient method and system for forecasting deformation maps from SAR images using a multi-scale attention guided RNN. Moreover, the embodiments herein further provide a novel attention-guided recurrent neural network performing functions such as: 1) a multi-scale signal sampling strategy to learn the long-term, mid-range, and short-term deformation patterns, 2) an attention module that focuses on pivotal features and discards others, and 3) finally, an LSTM recurrent cell to learn long-term signal dependencies, which enables the model to forecast deformation signals in advance. The method of the present disclosure models both periodic and aperiodic deformation patterns and forecasts deformation signals earlier when compared to the existing approaches. Furthermore, the low mean internal error indicates that the method of the present disclosure is robust to various practical challenges and can forecast displacement signals precisely, and thus has the potential to be deployed as an early warning system.
[034] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[035] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[036] The illustrated steps are set out to explain the exemplary
embodiments shown, and it should be anticipated that ongoing technological
development will change the manner in which particular functions are performed.
These examples are presented herein for purposes of illustration, and not limitation.
Further, the boundaries of the functional building blocks have been arbitrarily
defined herein for the convenience of the description. Alternative boundaries can
be defined so long as the specified functions and relationships thereof are
appropriately performed. Alternatives (including equivalents, extensions,
variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[037] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[038] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor implemented method (300) to forecast deformation maps from
synthetic aperture radar (SAR) images, the method comprising:
preprocessing (302), via one or more hardware processors, a plurality of SAR images acquired by means of a SAR sensor;
generating (304) from the plurality of SAR images, via the one or more hardware processors, a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period;
training (306) using the DInSAR time series training data, via the one or more hardware processors, a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function; and
inputting (308), via the one or more hardware processors, a SAR test data to the trained deformation forecasting network to process and forecast future deformation maps.
2. The processor implemented method of claim 1, wherein training the
deformation forecasting network comprises:
splitting, the plurality of DInSAR time series training data into groups of a multi-scale feature sampler and feeding each group independently to a feature encoder corresponding to the multi-scale feature sampler;
extracting, a plurality of feature set from each feature encoder of the multi-scale feature sampler;
concatenating, the plurality of feature set into a single feature vector and feeding the single feature vector to each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set;
multiplying, each attention weight vector with a corresponding feature set to obtain a plurality of attention multiplied signal;
concatenating, the plurality of attention multiplied signal and feeding the concatenated attention multiplied signal into a long short-term memory (LSTM) recurrent cell; and
generating by a decoder, the DInSAR time series deformation maps by combining LSTM recurrent cell output with the plurality of feature set.
3. The processor implemented method of claim 2, wherein the multi-scale feature sampler includes (i) a long-term signal data, (ii) a mid-range signal data, and (iii) a short-term signal data.
4. The processor implemented method of claim 2, wherein the plurality of feature set comprises a first feature set, a second feature set, and a third feature set.
5. The processor implemented method of claim 2, wherein the first feature set is extracted from the long-term signal data of the multi-scale feature sampler.
6. The processor implemented method of claim 2, wherein the second feature set is extracted from the mid-range signal data of the multi-scale feature sampler.
7. The processor implemented method of claim 2, wherein the third feature set is extracted from the short-term signal data of the multi-scale feature sampler.

8. A system (100), for forecasting deformation maps from synthetic aperture
radar (SAR) images comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory
(102) via the one or more communication interfaces (106), wherein the
one or more hardware processors (104) are configured by the instructions
to:
preprocess (302) a plurality of SAR images acquired by means of a SAR sensor;
generate (304), from the plurality of SAR images, a differential interferometric SAR (DInSAR) time series training data for each location by fetching multiple deformation maps over a time period;
train (306), using the DInSAR time series training data, a deformation forecasting network comprising a multi-scale feature extraction component, an attention guiding component, and a recurrent neural network (RNN) connected in sequence, wherein the deformation forecasting network performs optimization on the RNN with an activation function and a network loss function; and
input (308) a SAR test data to the trained deformation forecasting network to process and forecast future deformation maps.
9. The system of claim 8, wherein training the deformation forecasting
network comprises:
splitting, the plurality of DInSAR time series training data into groups of a multi-scale feature sampler and feeding each group independently to a feature encoder corresponding to the multi-scale feature sampler;
extracting, a plurality of feature set from each feature encoder of the multi-scale feature sampler;
concatenating, the plurality of feature set into a single feature vector and feeding the single feature vector to each attention module of the attention guiding component corresponding to the multi-scale feature sampler to predict attention weight vectors for each feature set;
multiplying, each attention weight vector with a corresponding feature set to obtain a plurality of attention multiplied signal;
concatenating, the plurality of attention multiplied signal and feeding the concatenated attention multiplied signal into a long short-term memory (LSTM) recurrent cell; and
generating by a decoder, the DInSAR time series deformation maps by combining LSTM recurrent cell output with the plurality of feature set.
10. The system of claim 9, wherein the multi-scale feature sampler includes (i) a long-term signal data, (ii) a mid-range signal data, and (iii) a short-term signal data.
11. The system of claim 9, wherein the plurality of feature set comprises a first feature set, a second feature set, and a third feature set.
12. The system of claim 9, wherein the first feature set is extracted from the long-term signal data of the multi-scale feature sampler.
13. The system of claim 9, wherein the second feature set is extracted from the mid-range signal data of the multi-scale feature sampler.

14. The system of claim 9, wherein the third feature set is extracted from the short-term signal data of the multi-scale feature sampler.
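The training sequence recited in claims 2 and 9 can be sketched end-to-end as follows; every weight matrix, dimension, and the sigmoid-based attention scoring here are illustrative assumptions standing in for the learned components, not the disclosed implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_cell(x, h, c, W):
    """One LSTM step; W maps [x; h] to the stacked input/forget/cell/output gates."""
    z = W @ np.concatenate([x, h])
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

def forward_sketch(groups, enc_ws, att_ws, lstm_w, dec_w):
    """One forward pass over the claimed sequence; all weights are random
    stand-ins for the trained parameters."""
    # extract a feature set from each scale's feature encoder
    feats = [np.tanh(w @ g) for w, g in zip(enc_ws, groups)]
    # concatenate into a single feature vector shared by every attention module
    joint = np.concatenate(feats)
    # predict an attention weight vector per feature set and multiply it in
    weighted = [sigmoid(w @ joint) * f for w, f in zip(att_ws, feats)]
    # concatenate the attention-multiplied signals and feed the LSTM cell
    fused = np.concatenate(weighted)
    h = c = np.zeros(lstm_w.shape[0] // 4)
    h, c = lstm_cell(fused, h, c, lstm_w)
    # decoder combines the LSTM output with the (skip-connected) feature sets
    return dec_w @ np.concatenate([h] + feats)
```

With three groups of lengths 8, 16, and 64, per-scale feature dimension 4, and hidden size 6, the decoder output gives the forecast deformation values for the next time steps.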

Documents

Application Documents

# Name Date
1 202121042759-STATEMENT OF UNDERTAKING (FORM 3) [21-09-2021(online)].pdf 2021-09-21
2 202121042759-REQUEST FOR EXAMINATION (FORM-18) [21-09-2021(online)].pdf 2021-09-21
3 202121042759-PROOF OF RIGHT [21-09-2021(online)].pdf 2021-09-21
4 202121042759-FORM 18 [21-09-2021(online)].pdf 2021-09-21
5 202121042759-FORM 1 [21-09-2021(online)].pdf 2021-09-21
6 202121042759-FIGURE OF ABSTRACT [21-09-2021(online)].jpg 2021-09-21
7 202121042759-DRAWINGS [21-09-2021(online)].pdf 2021-09-21
8 202121042759-DECLARATION OF INVENTORSHIP (FORM 5) [21-09-2021(online)].pdf 2021-09-21
9 202121042759-COMPLETE SPECIFICATION [21-09-2021(online)].pdf 2021-09-21
10 202121042759-FORM-26 [21-10-2021(online)].pdf 2021-10-21
11 Abstract1.jpg 2021-12-02
12 202121042759-FER.pdf 2023-09-21
13 202121042759-OTHERS [12-02-2024(online)].pdf 2024-02-12
14 202121042759-FER_SER_REPLY [12-02-2024(online)].pdf 2024-02-12
15 202121042759-DRAWING [12-02-2024(online)].pdf 2024-02-12
16 202121042759-COMPLETE SPECIFICATION [12-02-2024(online)].pdf 2024-02-12
17 202121042759-CLAIMS [12-02-2024(online)].pdf 2024-02-12
18 202121042759-PatentCertificate16-05-2024.pdf 2024-05-16
19 202121042759-IntimationOfGrant16-05-2024.pdf 2024-05-16

Search Strategy

1 SearchHistoryE_20-09-2023.pdf

ERegister / Renewals

3rd: 31 May 2024

From 21/09/2023 - To 21/09/2024

4th: 21 Aug 2024

From 21/09/2024 - To 21/09/2025

5th: 13 Aug 2025

From 21/09/2025 - To 21/09/2026