
Method And System For Composition Based Material Property Prediction Using Multi-Task Learning

Abstract: Material property prediction via data-based models has been explored intensively by the materials science community. Initial research around property prediction focused on training traditional machine learning algorithms. These machine learning algorithms require extensive hand-engineered features, thus restricting their applicability. Recently, deep learning models have also been introduced for predicting a single property associated with inorganic compounds. However, predicting the stability of a material is still not achieved using deep learning models. The present application provides a method and a system for composition based material property prediction using multi-task learning. The system receives an input representation of chemical compounds in the form of a multi-dimensional vector representation. The input representation is then utilized by the system to create an individual single-task model for each chemical property that needs to be predicted. Thereafter, the system uses the created individual single-task models as starting points to train a corresponding multi-task model with different tasks in a sequential manner for predicting one or more chemical properties at once. [To be published with FIG. 4]


Patent Information

Application #
Filing Date
02 March 2022
Publication Number
36/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th floor, Nariman point, Mumbai 400021, Maharashtra, India

Inventors

1. Digvijay Gulabrao, Yadav
Tata Consultancy Services Limited, Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013, Maharashtra, India
2. JAIN, Deepak
Tata Consultancy Services Limited, Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013, Maharashtra, India
3. KARANDE, Shirish Subhash
Tata Consultancy Services Limited, Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013, Maharashtra, India
4. RAI, Beena
Tata Consultancy Services Limited, Tata Research Development & Design Centre, 54-B, Hadapsar Industrial Estate, Hadapsar, Pune 411013, Maharashtra, India

Specification

Claims:

We Claim:
1. A processor implemented method, comprising:
receiving, by a material property prediction system (MPPS) via one or more hardware processors, an input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds, wherein the input data comprises a multi-dimensional vector representation of each chemical compound of the one or more chemical compounds (302);
creating, by the MPPS via the one or more hardware processors, one or more single-task models for the input data, wherein each single-task model of the one or more single-task models is created for predicting a chemical property among the one or more chemical properties (304);
for each single-task model of the one or more single-task models, performing:
training, by the MPPS via one or more hardware processors, each single-task model to perform prediction of a first chemical property among the one or more chemical properties, wherein the first chemical property is randomly selected among the one or more chemical properties for each single-task model (306);
randomly selecting, by the MPPS via one or more hardware processors, a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned, wherein the second chemical property is different from the first chemical property for which each single-task model is trained (308);
fine-tuning, by the MPPS via one or more hardware processors, each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property (310);
identifying, by the MPPS via one or more hardware processors, each single-task model fine-tuned on the first chemical property and the second chemical property as a multi-task model (312);
determining, by the MPPS via one or more hardware processors, whether all the chemical properties in the one or more chemical properties are selected (314);
upon determining that all the chemical properties in the one or more chemical properties are not selected, iteratively performing (316):
randomly selecting, by the MPPS via one or more hardware processors, a next chemical property among the one or more chemical properties for which each multi-task model is to be fine-tuned, wherein the next chemical property is different from the first chemical property and the second chemical property for which each multi-task model is trained (316a);
fine-tuning, by the MPPS via one or more hardware processors, each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property (316b); and
identifying, by the MPPS via one or more hardware processors, the combination of the first chemical property, the second chemical property and the next chemical property as the first chemical property and the second chemical property (316c),
until all the chemical properties in the one or more chemical properties are selected;
calculating, by the MPPS via the one or more hardware processors, a mean absolute error for each multi-task model of one or more multi-task models using a pre-defined formula (318); and
selecting, by the MPPS via the one or more hardware processors, a multi-task model among the one or more multi-task models based on the mean absolute error (320).

2. The processor implemented method of claim 1, further comprising:
using, by the MPPS via the one or more hardware processors, the selected multi-task model for predicting the one or more chemical properties associated with a chemical compound.

3. The processor implemented method of claim 2, wherein the selected multi-task model is a multi-layered deep neural network (DNN) architecture.

4. The processor implemented method of claim 3, wherein the step of fine-tuning, by the MPPS via one or more hardware processors, each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property comprises:
removing, by the MPPS via one or more hardware processors, an output node associated with the first chemical property from each single-task model;
adding, by the MPPS via one or more hardware processors, a new output node corresponding to each of the first chemical property and the second chemical property to obtain an updated single-task model corresponding to each single-task model; and
fine-tuning, by the MPPS via one or more hardware processors, the updated single-task model obtained corresponding to each single-task model to perform identification of the first chemical property and the second chemical property.

5. The processor implemented method of claim 4, wherein the one or more chemical properties comprises one or more of:
formation energy;
bandgap;
energy per atom;
volume per atom; and
magnetic moment.

6. A material property prediction system, comprising:
a memory (202) storing instructions;
one or more communication interfaces (206); and
one or more hardware processors (204) coupled to the memory (202) via the one or more communication interfaces (206), wherein the one or more hardware processors (204) are configured by the instructions to:
receive an input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds, the input data comprising a multi-dimensional vector representation of each chemical compound of the one or more chemical compounds;
create one or more single-task models for the input data, wherein each single-task model of the one or more single-task models is created for predicting a chemical property among the one or more chemical properties;
for each single-task model of the one or more single-task models, perform:
train each single-task model to perform prediction of a first chemical property among the one or more chemical properties, wherein the first chemical property is randomly selected among the one or more chemical properties for each single-task model;
randomly select a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned, wherein the second chemical property is different from the first chemical property for which each single-task model is trained;
fine-tune each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property;
identify each single-task model fine-tuned on the first chemical property and the second chemical property as a multi-task model;
determine whether all the chemical properties in the one or more chemical properties are selected;
upon determining that all the chemical properties in the one or more chemical properties are not selected, iteratively perform:
randomly select a next chemical property among the one or more chemical properties for which each multi-task model is to be fine-tuned, wherein the next chemical property is different from the first chemical property and the second chemical property for which each multi-task model is trained;
fine-tune each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property; and
identify the combination of the first chemical property, the second chemical property and the next chemical property as the first chemical property and the second chemical property,
until all the chemical properties in the one or more chemical properties are selected;
calculate a mean absolute error for each multi-task model of one or more multi-task models using a pre-defined formula; and
select a multi-task model among the one or more multi-task models based on the mean absolute error.

7. The system as claimed in claim 6, wherein the system is further caused to:
use the selected multi-task model for predicting the one or more chemical properties associated with a chemical compound.

8. The system as claimed in claim 7, wherein the multi-task model is a multi-layered deep neural network (DNN) architecture.

9. The system as claimed in claim 8, wherein to fine-tune each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property, the system is further caused to:
remove an output node associated with the first chemical property from each single-task model;
add a new output node corresponding to each of the first chemical property and the second chemical property to obtain an updated single-task model corresponding to each single-task model; and
fine-tune the updated single-task model obtained corresponding to each single-task model to perform identification of the first chemical property and the second chemical property.

10. The system as claimed in claim 9, wherein the one or more chemical properties comprises one or more of:
formation energy;
bandgap;
energy per atom;
volume per atom; and
magnetic moment.

Dated this 2nd day of March 2022


Tata Consultancy Services Limited
By their Agent & Attorney

(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:

METHOD AND SYSTEM FOR COMPOSITION BASED MATERIAL PROPERTY PREDICTION USING MULTI-TASK LEARNING

Applicant
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to material property prediction, and, more particularly, to a method and a system for composition based material property prediction using multi-task learning.

BACKGROUND
Material property prediction using data-based models has long been a foundation of materials research. Such models have mainly focused on searching for accurate representations for describing materials and on training traditional machine learning algorithms, which were either specific to a class of materials or to a property, but not both. Further, these models require a variety of hand-engineered features, thereby limiting their applicability.
Currently, multiple computational techniques, such as density functional theory, molecular dynamics, etc., are available and used by researchers for material property prediction. However, these calculations are computationally heavy and often a system has to be simulated multiple times in order to predict the desired properties.
Recently, deep learning models have been used for material property prediction in inorganic compounds, as they require only material composition information for predicting a chemical property, such as formation energy. However, predicting the stability of a material is still not achieved using these models, as stability prediction requires structural information of the material while these models work only on material composition information.

SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for composition based material property prediction using multi-task learning. The method includes receiving, by a material property prediction system (MPPS) via one or more hardware processors, an input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds, the input data comprising an eighty-six dimensional vector representation of each chemical compound of the one or more chemical compounds; creating, by the MPPS via the one or more hardware processors, one or more single-task models for the input data, wherein each single-task model of the one or more single-task models is created for predicting a chemical property among the one or more chemical properties; for each single-task model of the one or more single-task models, performing: training, by the MPPS via one or more hardware processors, each single-task model to perform prediction of a first chemical property among the one or more chemical properties, wherein the first chemical property is randomly selected among the one or more chemical properties for each single-task model; randomly selecting, by the MPPS via one or more hardware processors, a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned, wherein the second chemical property is different from the first chemical property for which each single-task model is trained; fine-tuning, by the MPPS via one or more hardware processors, each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property; identifying, by the MPPS via one 
or more hardware processors, each single-task model fine-tuned on the first chemical property and the second chemical property as a multi-task model; determining, by the MPPS via one or more hardware processors, whether all the chemical properties in the one or more chemical properties are selected; upon determining that all the chemical properties in the one or more chemical properties are not selected, iteratively performing: randomly selecting, by the MPPS via one or more hardware processors, a next chemical property among the one or more chemical properties for which each multi-task model is to be fine-tuned, wherein the next chemical property is different from the first chemical property and the second chemical property for which each multi-task model is trained; fine-tuning, by the MPPS via one or more hardware processors, each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property; and identifying, by the MPPS via one or more hardware processors, the combination of the first chemical property, the second chemical property and the next chemical property as the first chemical property and the second chemical property until all the chemical properties in the one or more chemical properties are selected; calculating, by the MPPS via the one or more hardware processors, a mean absolute error for each multi-task model of one or more multi-task models using a pre-defined formula; and selecting, by the MPPS via the one or more hardware processors, a multi-task model among the one or more multi-task models based on the mean absolute error.
In another aspect, there is provided a material property prediction system (MPPS) for composition based material property prediction using multi-task learning. The system includes a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive an input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds, the input data comprising an eighty-six dimensional vector representation of each chemical compound of the one or more chemical compounds; create one or more single-task models for the input data, wherein each single-task model of the one or more single-task models is created for predicting a chemical property among the one or more chemical properties; for each single-task model of the one or more single-task models, perform: train each single-task model to perform prediction of a first chemical property among the one or more chemical properties, wherein the first chemical property is randomly selected among the one or more chemical properties for each single-task model; randomly select a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned, wherein the second chemical property is different from the first chemical property for which each single-task model is trained; fine-tune each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property; identify each single-task model fine-tuned on the first chemical property and the second chemical property as a multi-task model; determine whether all the chemical properties in the one or more chemical properties are selected; upon determining that all the chemical 
properties in the one or more chemical properties are not selected, iteratively perform: randomly select a next chemical property among the one or more chemical properties for which each multi-task model is to be fine-tuned, wherein the next chemical property is different from the first chemical property and the second chemical property for which each multi-task model is trained; fine-tune each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property; and identify the combination of the first chemical property, the second chemical property and the next chemical property as the first chemical property and the second chemical property until all the chemical properties in the one or more chemical properties are selected; calculate a mean absolute error for each multi-task model of one or more multi-task models using a pre-defined formula; and select a multi-task model among the one or more multi-task models based on the mean absolute error.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause a method for composition based material property prediction using multi-task learning. The method includes receiving, by a material property prediction system (MPPS) via one or more hardware processors, an input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds, the input data comprising an eighty-six dimensional vector representation of each chemical compound of the one or more chemical compounds; creating, by the MPPS via the one or more hardware processors, one or more single-task models for the input data, wherein each single-task model of the one or more single-task models is created for predicting a chemical property among the one or more chemical properties; for each single-task model of the one or more single-task models, performing: training, by the MPPS via one or more hardware processors, each single-task model to perform prediction of a first chemical property among the one or more chemical properties, wherein the first chemical property is randomly selected among the one or more chemical properties for each single-task model; randomly selecting, by the MPPS via one or more hardware processors, a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned, wherein the second chemical property is different from the first chemical property for which each single-task model is trained; fine-tuning, by the MPPS via one or more hardware processors, each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property; identifying, by the MPPS via one or more hardware processors, each single-task model 
fine-tuned on the first chemical property and the second chemical property as a multi-task model; determining, by the MPPS via one or more hardware processors, whether all the chemical properties in the one or more chemical properties are selected; upon determining that all the chemical properties in the one or more chemical properties are not selected, iteratively performing: randomly selecting, by the MPPS via one or more hardware processors, a next chemical property among the one or more chemical properties for which each multi-task model is to be fine-tuned, wherein the next chemical property is different from the first chemical property and the second chemical property for which each multi-task model is trained; fine-tuning, by the MPPS via one or more hardware processors, each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property; and identifying, by the MPPS via one or more hardware processors, the combination of the first chemical property, the second chemical property and the next chemical property as the first chemical property and the second chemical property until all the chemical properties in the one or more chemical properties are selected; calculating, by the MPPS via the one or more hardware processors, a mean absolute error for each multi-task model of one or more multi-task models using a pre-defined formula; and selecting, by the MPPS via the one or more hardware processors, a multi-task model among the one or more multi-task models based on the mean absolute error.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an example representation of an environment, related to at least some example embodiments of the present disclosure.
FIG. 2 illustrates an exemplary block diagram of a system for composition based material property prediction using multi-task learning, in accordance with an embodiment of the present disclosure.
FIGS. 3A, 3B and 3C, collectively, represent an exemplary flow diagram of a method for composition based material property prediction using multi-task learning, in accordance with an embodiment of the present disclosure.
FIG. 4 illustrates a schematic representation of a training process associated with the system of FIG. 2 or the MPPS 106 of FIG. 1 for training a multi-task model to predict one or more chemical properties associated with a chemical compound at once, in accordance with an embodiment of the present disclosure.
FIG. 5 illustrates an example representation of a table representing different combinations that are possible for three chemical properties, in accordance with an embodiment of the present disclosure.
FIG. 6 is a tabular representation illustrating a comparison of mean absolute errors of the multi-task model and single-task models on the open quantum materials database (OQMD) dataset, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
The discovery of new materials, or the discovery of innovative uses of existing materials, is essential for the development of any country, as materials are used in almost every segment of human life. The discovery of materials depends on the search for an appropriate candidate with the desired chemical properties. Most of the important materials currently being used in technologies like solar cells, lithium-ion batteries, light emitting diodes (LEDs), etc., have been discovered rather serendipitously. However, the focus has now shifted to developing a more informed approach towards materials discovery and design, thus reducing trial and error.
Researchers working in the field of materials discovery have heavily adopted computational techniques like density functional theory, molecular dynamics, etc., for performing first-principles simulations of systems of interest. However, these calculations are computationally heavy and often a system has to be simulated multiple times in order to calculate the desired chemical properties. Further, a lot of time and energy is spent in performing first-principles simulations, which is again considered a bottleneck by researchers.
Recently, due to the combinatorial nature of the chemical space and the time and energy involved in first-principles simulations, researchers have started evaluating the efficacy of deep learning techniques for approximating physics simulations. In one deep learning technique, the authors proposed a 7-layer multi-layer perceptron (MLP) network to predict the stability of a compound based on just the material composition. The MLP network was able to achieve a mean absolute error of 0.072 eV/atom, which is well within the density functional theory (DFT) error bound of 0.1 eV/atom. Thereafter, some authors utilized a graph convolutional neural network (GCNN) for developing crystal graph convolutional neural network (CGCNN) models in which crystal structures of inorganic compounds are mapped to an undirected multigraph. The CGCNN used 8 separate models, each predicting a specific materials property. In another work, the authors proposed a 17-layer deep neural network model (referred to as ElemNet) that takes an 86-dimensional sparse vector representation consisting of normalized elemental compositions as input to predict the formation energy of inorganic compounds. ElemNet achieved a 30 percent improvement in prediction accuracy as compared to its predecessors. The authors also demonstrated that the initial layers of ElemNet were able to capture some periodic table trends that were not explicitly provided to the model. In another prediction technique, the authors demonstrated that when a message passing neural network (MPNN) is trained by representing a material’s composition as dense weighted graphs, the MPNN learned more relevant representations of the elements. The authors showed that the MPNN can predict formation energies with 46% lower errors as compared to ElemNet, while using significantly less training data.
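By way of illustration, an ElemNet-style sparse composition vector can be sketched as below. This is a hedged reconstruction, not the exact featurization of the cited work: the element ordering (atomic numbers 1 through 86, H through Rn) and the simple formula parser are assumptions made for this sketch only.

```python
import re

# Elements with atomic numbers 1..86 (H..Rn); assumed ordering for an
# 86-dimensional sparse composition vector (ElemNet-style input).
ELEMENTS = [
    "H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
    "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K", "Ca",
    "Sc", "Ti", "V", "Cr", "Mn", "Fe", "Co", "Ni", "Cu", "Zn",
    "Ga", "Ge", "As", "Se", "Br", "Kr", "Rb", "Sr", "Y", "Zr",
    "Nb", "Mo", "Tc", "Ru", "Rh", "Pd", "Ag", "Cd", "In", "Sn",
    "Sb", "Te", "I", "Xe", "Cs", "Ba", "La", "Ce", "Pr", "Nd",
    "Pm", "Sm", "Eu", "Gd", "Tb", "Dy", "Ho", "Er", "Tm", "Yb",
    "Lu", "Hf", "Ta", "W", "Re", "Os", "Ir", "Pt", "Au", "Hg",
    "Tl", "Pb", "Bi", "Po", "At", "Rn",
]
INDEX = {el: i for i, el in enumerate(ELEMENTS)}

def composition_vector(formula):
    """Map a formula such as 'Fe2O3' to an 86-dimensional vector of
    normalized elemental fractions (zeros for absent elements)."""
    counts = {}
    # Parse element symbols and their (possibly fractional) counts.
    for el, num in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        counts[el] = counts.get(el, 0.0) + (float(num) if num else 1.0)
    total = sum(counts.values())
    vec = [0.0] * len(ELEMENTS)
    for el, n in counts.items():
        vec[INDEX[el]] = n / total  # normalized elemental composition
    return vec
```

For example, `composition_vector("Fe2O3")` yields a mostly-zero vector with 0.4 at the Fe index and 0.6 at the O index; nested parentheses or charge notation would need a fuller parser.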
Further, in a newer prediction technique, the authors proposed an attention-based neural network that is trained using a very small number of parameters (nearly 60,000) on data obtained from the automatic flow of materials (AFLOW) repository. The authors claimed that their self-attention based CrabNet architecture extracted more context-aware information from the materials composition, which can be effectively utilized to perform downstream tasks like materials property prediction.
The above-mentioned techniques have explored input representations (based either on materials composition or on crystal structure) and various deep learning architectures ranging from DNNs and CNNs to GCNNs and MPNNs. The problem with these techniques is that it is not easy to determine which combination (representation + architecture) among the plurality of possible combinations is best suited for accelerating materials discovery, and which combination will capture enough materials science knowledge to make practically useful predictions.
Further, to overcome the above-mentioned problems, some authors presented a detailed ablation of various featurization methods and recommended that domain-free approaches perform reasonably well with large datasets. In another report, authors tested 7 machine learning models on the stability prediction task using data from the Materials Project database. The authors concluded that models with mere materials composition as input can learn a practically useful mapping to formation energy. However, predicting the stability of a compound may still not be possible, as it requires structural information. The authors also mentioned that composition-based formation energy models can be further improved by subtle changes in model architecture rather than by searching for an optimal set of elemental features (atomic radii, atomic electronegativities, elemental group, etc.).
Based on the learnings from the above-mentioned techniques, some researchers adopted a prediction technique named multi-task learning (MTL) for simultaneous prediction of materials properties. Multi-task learning, as the name suggests, deals with scenarios where multiple loss functions are optimized with the goal of improving generalization by training related tasks together. MTL is based on the principle that learning multiple tasks simultaneously, as opposed to learning them individually, can lead to better performance by benefitting from the information shared among the different tasks. MTL has been successfully applied to problems in domains like natural language processing, pharma, speech recognition, and so on. However, the use of MTL in the materials science domain is still at an exploration stage. Some of the MTL techniques that are currently available in the materials science domain are discussed below.
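The MTL principle of optimizing multiple loss functions at once can be illustrated with a minimal sketch. The `joint_mse_loss` helper below is hypothetical and shows only the simplest hard-parameter-sharing setup, in which the per-task losses of a shared model are summed; practical implementations typically weight the tasks.

```python
def joint_mse_loss(predictions, targets):
    """Sum of per-task mean-squared errors over a shared model's
    outputs; minimizing this single scalar trains all tasks jointly.

    `predictions` and `targets` are dicts keyed by property name
    (e.g. 'formation_energy', 'bandgap'), each mapping to a list of
    values for the same batch of compounds."""
    total = 0.0
    for task, y_true in targets.items():
        y_pred = predictions[task]
        # per-task MSE, then accumulated into the joint objective
        total += sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)
    return total
```

Gradients of this joint objective flow through the shared layers from every task, which is the source of the cross-task information sharing described above.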
In one MTL technique, the authors extended their compressed sensing-based methodology (SISSO) to a multi-task setting and demonstrated its efficacy for predicting relative stability among different crystal structures of binary octet compounds. In another MTL technique, the authors proposed a multi-task crystal graph convolutional neural network (MT-CGCNN) for simultaneous prediction of formation energy, bandgap and Fermi energy. The MT-CGCNN added task-specific layers at the output of CGCNN to modify the original architecture into a multi-task one. In yet another MTL technique, authors demonstrated the advantage of multi-task learning on a dataset of 36 polymer properties spread across various categories like thermal, electronic, optical, mechanical etc., trained over a dataset of approximately 13000 polymers. In that technique, the authors experimented with two neural network architectures to enable multi-task learning: one neural network was trained using a joint loss function for all 36 properties at a time, while the other was a concatenation-based conditioned multi-task neural network that receives, along with the polymer input representation, a selector vector corresponding to the property for which prediction is desired. The authors also developed baseline models in the single-task setting for comparison and observed that the multi-task model outperformed the single-task models in cases where correlations between properties within a category are high. So, it appeared that MTL can reduce the labor of training and tuning individual models while improving model performance. The problem that remains with MTL is that the authors are unable to determine which tasks should and should not be learned together in one network.
Embodiments of the present disclosure overcome the disadvantages of the various prediction techniques present in the art by using a multi-task deep neural network that predicts one or more chemical properties associated with a material/chemical compound at once. A system and method of the present disclosure receive an input representation of the chemical compound in the form of an 86-dimensional vector representation. The input representation is then utilized by the system and the method to create an individual DNN model for each chemical property that needs to be predicted. Thereafter, the system and the method use the created individual DNN models as starting points to train the multi-task deep neural network/model with different tasks in a sequential manner in a transfer learning setting. Further, once the multi-task deep neural network/model is trained, the system and the method use the trained multi-task deep neural network/model for predicting the one or more chemical properties.
In the present disclosure, the system and the method use mere material composition as input to predict one or more chemical properties at once. Further, the trained multi-task deep neural network/model explains the chemical properties better than any model that is trained on a single chemical property, thereby ensuring faster materials discovery.
Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary representation of an environment 100 related to at least some example embodiments of the present disclosure. Although the environment 100 is presented in one arrangement, other embodiments may include the parts of the environment 100 (or other parts) arranged otherwise depending on, for example, creating one or more single-task models, identifying each single-task model trained on two or more properties as a multi-task model, fine-tuning each multi-task model, etc. The environment 100 generally includes a user device, such as a user device 102, and a material property prediction system (hereinafter referred as ‘MPPS’) 106, each coupled to, and in communication with (and/or with access to) a network 104. It should be noted that one user device is shown for the sake of explanation; there can be any number of user devices.
The network 104 may include, without limitation, a light fidelity (Li-Fi) network, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, a virtual network, and/or another suitable public and/or private network capable of supporting communication among two or more of the parts or users illustrated in FIG. 1, or any combination thereof.
Various entities in the environment 100 may connect to the network 104 in accordance with various wired and wireless communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), 2nd Generation (2G), 3rd Generation (3G), 4th Generation (4G), 5th Generation (5G) communication protocols, Long Term Evolution (LTE) communication protocols, or any combination thereof.
The user device 102 is associated with a user (e.g., a user or an entity such as an organization) who wants to predict one or more chemical properties associated with a chemical compound(s) using the MPPS 106. Examples of the user device 102 include, but are not limited to, a personal computer (PC), a mobile phone, a tablet device, a Personal Digital Assistant (PDA), a voice activated assistant, a smartphone and a laptop.
The material property prediction system (MPPS) 106 includes one or more hardware processors and a memory. The MPPS 106 is configured to perform one or more of the operations described herein. The MPPS 106 is configured to receive, via the network 104 from the user device 102, input data associated with one or more chemical compounds whose chemical properties need to be predicted and one or more chemical properties associated with each chemical compound of the one or more chemical compounds. In general, for training a multi-task model to predict chemical properties associated with the chemical compounds, the MPPS requires a composition-based representation of the chemical compounds and information about the one or more chemical properties that need to be predicted for each chemical compound. So, input data that includes an eighty-six dimensional vector representation of each chemical compound of the one or more chemical compounds serves as the composition-based representation of the chemical compounds.
Thereafter, the MPPS 106 is configured to create one or more single-task models for the input data. Each single-task model of the one or more single-task models is created by the MPPS 106 to predict a chemical property among the one or more chemical properties. Once the one or more single-task models are created, the MPPS 106 is configured to randomly select a first chemical property among the one or more chemical properties for each single-task model of the one or more single-task models. Then, the MPPS 106 is configured to train each single-task model of the one or more single-task models to perform prediction of the selected first chemical property among the one or more chemical properties.
Further, once each single-task model is trained to perform prediction of the first chemical property, the MPPS 106 is configured to randomly select a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned. The second chemical property that is selected is different from the first chemical property for which each single-task model is already trained. Then, the MPPS 106 is configured to fine-tune each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property. Thereafter, the MPPS 106 is configured to identify each single-task model trained on the first chemical property and the second chemical property as a multi-task model. In particular, the created single-task models that are trained on at least two properties are now considered as multi-task models by the MPPS as they were used as a starting point to obtain multi-task models.
Thereafter, the MPPS 106 is configured to determine whether all the chemical properties in the one or more chemical properties are selected, i.e., whether the multi-task model is fine-tuned for all chemical properties. If not, the MPPS 106 is configured to randomly select a next chemical property among the one or more chemical properties for which each multi-task model is to be trained. It should be noted that the next chemical property that is selected for each multi-task model is different from the first chemical property and the second chemical property for which the respective multi-task model is already trained. Then, the MPPS 106 is configured to perform fine-tuning of each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the selected next chemical property. The process of performing fine-tuning is explained in detail with reference to FIG. 3.
Once each multi-task model is trained to perform prediction of the first chemical property, the second chemical property and the next chemical property, the MPPS 106 is configured to identify another chemical property among the one or more properties for which each multi-task model is to be trained. The MPPS 106 is then configured to perform fine-tuning of each model trained on the first chemical property, the second chemical property and the next chemical property to perform prediction of the first chemical property, the second chemical property, the next chemical property, and another chemical property. The same process is repeated again and again by the MPPS 106 until all the chemical properties in the one or more chemical properties are selected.
In an embodiment, once each multi-task model of the one or more multi-task models is trained to perform prediction of the one or more chemical properties, the MPPS 106 is configured to calculate a mean absolute error (MAE) for each multi-task model of the one or more multi-task models using a pre-defined formula i.e.,
MAE = (1/n) * Σ_{i=1}^{n} |y − y_i|,

where y represents the actual value,
y_i represents the predicted value, and
n represents the number of data points.
Thereafter, the MPPS 106 is configured to determine a multi-task model among the one or more multi-task models that has the lowest MAE. The multi-task model with the lowest MAE is then selected and used by the MPPS 106 for performing prediction of the one or more chemical properties associated with any chemical compound.
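The MAE calculation and lowest-MAE model selection described above can be sketched as follows. This is an illustrative sketch only; the function names and the nested-list layout of per-property values are assumptions for illustration, not taken from the specification.

```python
def mean_absolute_error(actual, predicted):
    """MAE = (1/n) * sum over i of |y - y_i|, for n data points."""
    n = len(actual)
    return sum(abs(y - y_i) for y, y_i in zip(actual, predicted)) / n

def model_mae(per_property_actuals, per_property_predictions):
    """Average the MAEs calculated for each chemical property of one model."""
    maes = [mean_absolute_error(a, p)
            for a, p in zip(per_property_actuals, per_property_predictions)]
    return sum(maes) / len(maes)

def select_best_model(model_maes):
    """Return the index of the multi-task model with the lowest MAE."""
    return min(range(len(model_maes)), key=lambda i: model_maes[i])
```

For instance, given MAE scores for three candidate multi-task models, `select_best_model` picks the one with the lowest average error across properties.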
The user associated with the user device 102 can now see one or more chemical properties predicted by the MPPS 106 for the chemical compound on the user device 102. The predicted chemical properties may further help the user in taking decision whether the chemical compound is usable or not for a particular purpose.
The number and arrangement of systems, devices, and/or networks shown in FIG. 1 are provided as an example. There may be additional systems, devices, and/or networks; fewer systems, devices, and/or networks; different systems, devices, and/or networks; and/or differently arranged systems, devices, and/or networks than those shown in FIG. 1. Furthermore, two or more systems or devices shown in FIG. 1 may be implemented within a single system or device, or a single system or device shown in FIG. 1 may be implemented as multiple, distributed systems or devices. Additionally, or alternatively, a set of systems (e.g., one or more systems) or a set of devices (e.g., one or more devices) of the environment 100 may perform one or more functions described as being performed by another set of systems or another set of devices of the environment 100 (e.g., refer scenarios described above).
FIG. 2 illustrates an exemplary block diagram of a material property prediction system (MPPS) for composition based material property prediction using multi-task learning, in accordance with an embodiment of the present disclosure. In an embodiment, the material property prediction system (MPPS) may also be referred to as the system, and the terms may be used interchangeably herein. The system 200 is similar to the MPPS 106 explained with reference to FIG. 1. In some embodiments, the system 200 is embodied as a cloud-based and/or SaaS-based (software as a service) architecture. In some embodiments, the system 200 may be implemented in a server system. In some embodiments, the system 200 may be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, and the like.
In an embodiment, the system 200 includes one or more processors 204, communication interface device(s) or input/output (I/O) interface(s) 206, and one or more data storage devices or memory 202 operatively coupled to the one or more processors 204. The one or more processors 204 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory.
The I/O interface device(s) 206 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 202 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 208 can be stored in the memory 202, wherein the database 208 may comprise, but is not limited to, inputs received from one or more client devices (e.g., client nodes, target nodes, computing devices, user devices and the like), such as one or more chemical properties and the composition-based representation of one or more chemical compounds. In an embodiment, the memory 202 may store information pertaining to each chemical compound of the one or more chemical compounds, multi-task model selection criteria, the training algorithm, and the like. The memory 202 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 202 and can be utilized in further processing and analysis.
FIGS. 3A, 3B and 3C, with reference to FIGS. 1-2, collectively represent an exemplary flow diagram of a method 300 for composition based material property prediction using multi-task learning using the MPPS 106 of FIG. 1 and the system 200 of FIG. 2, in accordance with an embodiment of the present disclosure. In an embodiment, the system 200 comprises one or more data storage devices or the memory 202 operatively coupled to the one or more hardware processors 204 and is configured to store instructions for execution of steps of the method 300 by the one or more hardware processors 204. The sequence of steps of the flow diagram may not necessarily be executed in the same order as they are presented. Further, one or more steps may be grouped together and performed in the form of a single step, or one step may have several sub-steps that may be performed in a parallel or in a sequential manner. The steps of the method 300 of the present disclosure will now be explained with reference to the components of the MPPS 106 as depicted in FIG. 1, and the system 200 of FIG. 2.
In an embodiment of the present disclosure, at step 302, the one or more hardware processors 204 of the system 200 receive input data associated with one or more chemical compounds and one or more chemical properties associated with each chemical compound of the one or more chemical compounds from a user device (e.g., the user device 102). The input data includes a multi-dimensional vector representation of each chemical compound of the one or more chemical compounds. In an embodiment, the multi-dimensional vector representation is an eighty-six dimensional vector representation that is created based on the fractions of the elements present in the respective chemical compound. Basically, the eighty-six dimensional vector representation covers all the elements present in an Open Quantum Materials Database (OQMD) dataset. An example representation of the elements present in the OQMD dataset is shown below:
['Cs', 'Ho', 'S', 'Si', 'Se', 'Yb', 'P', 'Sm', 'Zn', 'C', 'Ni', 'Ag', 'Ti', 'Te', 'Tm', 'U', 'Y', 'Bi', 'Li', 'B', 'Tc', 'Tl', 'Pr', 'Lu', 'Mg', 'Au', 'Mn', 'N', 'Ta', 'Er', 'Ir', 'Ca', 'Pb', 'H', 'Pd', 'Rh', 'V', 'Cu', 'Na', 'Fe', 'Ru', 'As', 'Sn', 'I', 'Sr', 'Nd', 'Sc', 'Nb', 'Os', 'Cd', 'Al', 'Hf', 'Tb', 'Hg', 'Rb', 'Dy', 'Sb', 'Ce', 'Pm', 'Eu', 'Gd', 'Zr','Ge', 'O', 'La', 'Cl', 'Pt', 'Cr', 'In', 'Be', 'Ga', 'Br', 'W', 'Ba', 'Mo', 'Re', 'K', 'F', 'Pa', 'Np', 'Th', 'Pu', 'Ac', 'Kr', 'Xe', 'Co']
The eighty-six dimensional vector representation that is provided as input is created based on the OQMD dataset. For example, let us consider that a vector representation for a chemical compound named AuTiF3 is to be created. For creating the representation, first the fraction of each element present in the chemical compound is calculated. In the case of AuTiF3, the fractions of Au and Ti present in the chemical compound are each 0.2, the fraction of F present in the chemical compound is 0.6, and the fraction of every other element present in the compound is 0. Thereafter, the fractional values of all the elements that are present in the OQMD dataset are inserted based on the calculated fractions of elements. Hence, the eighty-six dimensional vector representation created for AuTiF3 based on the OQMD dataset appears as:
[0,0,0,0,0,0,0,0,0,0,0,0,0.2,0,0,0,0,0,0,0,0,0,0,0,0,0.2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.6,0,0,0,0,0,0,0,0]
The non-zero values present in the vector representation indicate the fraction of the particular element present in the chemical compound, and a zero value indicates the absence of that particular element from the chemical compound.
So, the eighty-six dimensional vector representation is created for each chemical compound of the one or more chemical compounds and is provided as an input, in the form of the input data, to the hardware processors 204.
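The construction of the eighty-six dimensional composition vector described above can be sketched as follows. This is an illustrative sketch: the element ordering is the OQMD-derived list reproduced above, while the parsing helper (which handles simple formulas like AuTiF3 but not nested groups or parentheses) is an assumption for illustration.

```python
import re

# Element ordering taken from the OQMD-derived list reproduced above.
ELEMENTS = ['Cs', 'Ho', 'S', 'Si', 'Se', 'Yb', 'P', 'Sm', 'Zn', 'C',
            'Ni', 'Ag', 'Ti', 'Te', 'Tm', 'U', 'Y', 'Bi', 'Li', 'B',
            'Tc', 'Tl', 'Pr', 'Lu', 'Mg', 'Au', 'Mn', 'N', 'Ta', 'Er',
            'Ir', 'Ca', 'Pb', 'H', 'Pd', 'Rh', 'V', 'Cu', 'Na', 'Fe',
            'Ru', 'As', 'Sn', 'I', 'Sr', 'Nd', 'Sc', 'Nb', 'Os', 'Cd',
            'Al', 'Hf', 'Tb', 'Hg', 'Rb', 'Dy', 'Sb', 'Ce', 'Pm', 'Eu',
            'Gd', 'Zr', 'Ge', 'O', 'La', 'Cl', 'Pt', 'Cr', 'In', 'Be',
            'Ga', 'Br', 'W', 'Ba', 'Mo', 'Re', 'K', 'F', 'Pa', 'Np',
            'Th', 'Pu', 'Ac', 'Kr', 'Xe', 'Co']

def composition_vector(formula):
    """Convert a simple formula such as 'AuTiF3' into an 86-dimensional
    vector of elemental fractions (zero for absent elements)."""
    counts = {}
    for symbol, qty in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[symbol] = counts.get(symbol, 0) + (int(qty) if qty else 1)
    total = sum(counts.values())
    return [counts.get(el, 0) / total for el in ELEMENTS]

vec = composition_vector('AuTiF3')
# Au and Ti each contribute 1/5 = 0.2 of the atoms; F contributes 3/5 = 0.6
```

Each compound in the input data is converted in this way before being fed to the models.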
In an embodiment, the one or more chemical properties that are received from the user device 102 for each chemical compound are density-functional theory (DFT) computed properties present in the OQMD database, like formation energy, bandgap, energy per atom, volume per atom, magnetic moment etc.
At step 304 of the present disclosure, the one or more hardware processors 204 of the system 200 create one or more single-task models for the input data. In at least one example embodiment, for creating the one or more single-task models, the hardware processors 204 first determine the number of single-task models to be created based on the one or more chemical properties using a permutation formula, i.e., N!, where N represents the number of chemical properties. Basically, the number of single-task models depends on the different combinations that are possible for the one or more chemical properties. For example, if the system receives three properties as input, then six combinations are possible for the three properties. An example table representing the different combinations that are possible for three chemical properties is shown with reference to FIG. 5.
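The N! count of step 304 can be verified with a short sketch. The three property names below are hypothetical placeholders for illustration; the permutations correspond to the distinct orders in which the properties can be introduced during sequential training.

```python
import math
from itertools import permutations

# Hypothetical example properties (placeholders for illustration).
properties = ['formation_energy', 'bandgap', 'energy_per_atom']

# Number of single-task models to create for N properties is N!.
num_models = math.factorial(len(properties))   # 3! = 6

# The six distinct training orders for three properties.
orders = list(permutations(properties))
```

Each of the six orderings seeds one single-task model that is subsequently fine-tuned property by property.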
Once the number of single-task models that are to be created is determined, the hardware processors 204 create the one or more single-task models based on the determination.
Each single-task model of the one or more single-task models is created by the hardware processors 204 to predict a chemical property among the one or more chemical properties.
At step 306 of the present disclosure, the one or more hardware processors 204 of the system 200 train each single-task model of the one or more single-task models to perform prediction of a first chemical property among the one or more chemical properties. For each single-task model, the first chemical property is randomly selected from among the one or more chemical properties. For example, let us assume that each single-task model is to be trained for three properties, i.e., formation energy, bandgap, and energy per atom; then at this step each single-task model will be trained for predicting a single property that is randomly selected from among the formation energy, the bandgap, and the energy per atom. The step 306 ensures that each single-task model is first trained for predicting a single property.
At step 308 of the present disclosure, the one or more hardware processors 204 of the system 200 randomly select a second chemical property among the one or more chemical properties for which each single-task model is to be fine-tuned. The second chemical property is different from the first chemical property for which each single-task model is already trained. For example, with reference to the previous example, let us assume that a single-task model is already trained on formation energy; then the second chemical property that will be selected for the model will be one of the remaining two, i.e., the bandgap and the energy per atom.
At step 310 of the present disclosure, the one or more hardware processors 204 of the system 200 fine-tune each single-task model trained on the first chemical property to perform prediction of the first chemical property and the second chemical property. The hardware processors 204 now fine-tune the single-task model so that the single-task model is able to perform prediction of both the first chemical property and the second chemical property.
At step 312 of the present disclosure, the one or more hardware processors 204 of the system 200 identify each single-task model fine-tuned on the first chemical property and the second chemical property as a multi-task model. In particular, each single-task model of the one or more single-task models is now identified as a multi-task model as it can now predict more than one chemical property. So, there will be one or more multi-task models identified corresponding to the one or more single-task models.
In an embodiment, each multi-task model is a multi-layered deep neural network (DNN) architecture. In one embodiment, the multi-layered deep neural network is a 7-layered DNN architecture in which a first layer and a second layer includes 1024 nodes, a third layer and a fourth layer includes 512 nodes, a fifth layer and a sixth layer includes 128 nodes, and a seventh layer includes a single node.
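A minimal numpy sketch of the 7-layered DNN described above (1024, 1024, 512, 512, 128 and 128 hidden nodes followed by a single output node) is given below. The random weight initialization, the ReLU activation and the absence of a deep learning framework are simplifying assumptions for illustration, not details taken from the disclosure.

```python
import numpy as np

# Layer widths from the 7-layered architecture described above,
# preceded by the 86-dimensional composition-vector input.
LAYER_SIZES = [86, 1024, 1024, 512, 512, 128, 128, 1]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """Run an 86-dimensional composition vector through the network."""
    h = np.asarray(x, dtype=float)
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w + b, 0.0)   # ReLU on the six hidden layers
    return h @ weights[-1] + biases[-1]  # linear single-node output

y = forward(np.zeros(86))
```

The single output node corresponds to the one chemical property a single-task model predicts; the multi-task variant replaces it with one node per property.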
As already discussed, each multi-task model consists of seven layers, in which the seventh layer is an output layer. So, to perform fine-tuning of each single-task model trained on the first chemical property, the hardware processors 204 of the system 200 first remove the output node associated with the first chemical property from each single-task model. Thereafter, the hardware processors 204 of the system 200 add a new output node corresponding to each of the first chemical property and the second chemical property to obtain an updated single-task model corresponding to each single-task model. Further, the hardware processors 204 of the system 200 fine-tune the updated single-task model obtained corresponding to each single-task model to perform prediction of the first chemical property and the second chemical property. More specifically, parameters of each single-task model trained on the first chemical property are fine-tuned to predict the first chemical property and the second chemical property by employing a transfer learning setting.
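The output-head swap described above can be sketched as follows. The abbreviated layer list and helper name are assumptions for illustration; the key idea, which the disclosure does state, is that the shared layers are kept and only the output layer is replaced with one node per property before fine-tuning.

```python
import numpy as np

def swap_output_head(weights, n_outputs, rng):
    """Remove the existing output layer and attach a fresh head with one
    node per property; all shared hidden layers are kept for fine-tuning."""
    in_dim = weights[-1].shape[0]          # width of the last hidden layer
    new_head = rng.standard_normal((in_dim, n_outputs)) * 0.01
    return weights[:-1] + [new_head]

rng = np.random.default_rng(0)
# Hypothetical abbreviated single-task weight list ending in a 128 -> 1 head.
weights = [rng.standard_normal(s) * 0.01
           for s in [(86, 1024), (1024, 512), (512, 128), (128, 1)]]

# After the swap, the network emits two predictions: one per property.
weights = swap_output_head(weights, n_outputs=2, rng=rng)
```

The same swap is repeated at each iteration of step 316, growing the head by one node per newly selected property.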
At step 314 of the present disclosure, the one or more hardware processors 204 of the system 200 determine whether all the chemical properties in the one or more chemical properties are selected. In other words, this step determines whether each multi-task model has been trained for all the chemical properties.
At step 316 of the present disclosure, upon determining that all the chemical properties in the one or more chemical properties are not selected, the one or more hardware processors 204 of the system 200 fine-tune each multi-task model of the one or more multi-task models to perform prediction of the one or more chemical properties at once by iteratively performing a plurality of steps 316a through 316c until all the chemical properties in the one or more chemical properties are selected.
More specifically, at step 316a of the present disclosure, the one or more hardware processors 204 of the system 200 randomly select a next chemical property among the one or more chemical properties for which each multi-task model is to be trained. The next chemical property selected by the hardware processors 204 is different from the first chemical property and the second chemical property for which each multi-task model is already trained.
At step 316b of the present disclosure, the one or more hardware processors 204 of the system 200 fine-tune each multi-task model trained on the first chemical property and the second chemical property to perform prediction of the first chemical property, the second chemical property and the next chemical property.
At step 316c of the present disclosure, the one or more hardware processors 204 of the system 200 identify the combination of the first chemical property, the second chemical property and the next chemical property as the new set of already-trained properties, and the steps are repeated until each multi-task model is trained to predict the one or more chemical properties at the same time. For example, if the system 200 has received ‘N’ properties, then each multi-task model is trained to predict all N properties of the compound.
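The iterative selection loop of steps 316a through 316c can be sketched as a runnable skeleton. The `fine_tune` stub, the property names and the seeded random order are assumptions for illustration standing in for actual model training.

```python
import random

def fine_tune(model, trained_props, next_prop):
    """Stub standing in for actual fine-tuning: extend `model` (already
    trained on `trained_props`) to also predict `next_prop`."""
    return model + [next_prop]

# Hypothetical example properties (placeholders for illustration).
properties = ['formation_energy', 'bandgap', 'energy_per_atom',
              'volume_per_atom', 'magnetic_moment']

rng = random.Random(0)
remaining = properties[:]
rng.shuffle(remaining)                 # random selection order (steps 306-316a)

model, trained = [], []
while remaining:                       # iterate until all properties selected
    next_prop = remaining.pop()        # step 316a: pick an untrained property
    model = fine_tune(model, trained, next_prop)   # step 316b
    trained.append(next_prop)          # step 316c: combination becomes the base
```

One such loop runs for each of the N! property orderings, producing the pool of candidate multi-task models compared in step 318.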
At step 318 of the present disclosure, the one or more hardware processors 204 of the system 200 calculate a mean absolute error (MAE) for each multi-task model of the one or more multi-task models using the pre-defined formula. For calculating the MAE of each multi-task model, first the MAE of each property of the one or more properties is calculated. Thereafter, the MAE for each multi-task model is calculated by taking the average of the MAEs calculated for each property of the one or more properties. Basically, the MAE represents a loss function for a multi-task model. Once the MAE is calculated for each multi-task model, the hardware processors 204 determine a multi-task model among the one or more multi-task models that has the optimum/lowest MAE.
At step 320 of the present disclosure, the one or more hardware processors 204 of the system 200 select a multi-task model among the one or more multi-task models based on the mean absolute error. In particular, the multi-task model with the optimum MAE is selected by the hardware processors 204 of the system 200. The selected multi-task model is then used by the hardware processors 204 of the system 200 for predicting the one or more chemical properties associated with any chemical compound.
FIG. 4, with reference to FIGS. 1 to 3A-3C, illustrates a schematic representation 400 of a training process associated with the system 200 of FIG. 2 or the MPPS 106 of FIG. 1 for training a multi-task model to predict one or more chemical properties associated with a chemical compound at once, in accordance with an embodiment of the present disclosure.
FIG. 6 is a tabular representation illustrating a comparison of the MAEs of the multi-task model and single-task models on the OQMD dataset, in accordance with an embodiment of the present disclosure.
As seen in FIG. 6, the MAE obtained for all three properties, i.e., formation energy, bandgap and energy per atom, is lowest for the multi-task model (iElemNet) in comparison with the single-task models that are available in the art.
In an embodiment, when the multi-task model is tested on a specific set of chemical compounds belonging to three different categories based on their chemistries, it is observed that the multi-task models can distinguish between alkali metal halides (Group 1), alkaline earth metal chalcogenides (Group 2) and group III-V compounds (Group 3) consistently across all layers as well.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
As known, materials have played a phenomenal role in the development of society, right from the early times of the stone age to the upcoming era of quantum computers. With the current impetus on renewable energy, clean water, sustainable healthcare and eventually a circular economy, society is in dire need of newer, better, cheaper materials across various sectors. And this is only possible when techniques are available that help predict material properties. Multiple techniques, such as machine learning algorithms, deep learning models (as discussed earlier), and the like, are available in the art for performing material property prediction. However, the available techniques either require structural information of the compound as input for making a prediction, or require more computational power as the prediction calculations are heavy, or are unable to predict the stability of the chemical compound. To overcome these disadvantages, embodiments of the present disclosure provide a method and system for composition based material property prediction using multi-task learning. The system and the method use mere material composition as input for predicting multiple material properties at once. The method of the present disclosure explains material properties significantly better than any model trained on a single property. The method also ensures increased accuracy, improved performance, and reduced memory requirement in comparison with the methods available in the art.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

Documents

Application Documents

# Name Date
1 202221011287-STATEMENT OF UNDERTAKING (FORM 3) [02-03-2022(online)].pdf 2022-03-02
2 202221011287-REQUEST FOR EXAMINATION (FORM-18) [02-03-2022(online)].pdf 2022-03-02
3 202221011287-FORM 18 [02-03-2022(online)].pdf 2022-03-02
4 202221011287-FORM 1 [02-03-2022(online)].pdf 2022-03-02
5 202221011287-FIGURE OF ABSTRACT [02-03-2022(online)].jpg 2022-03-02
6 202221011287-DRAWINGS [02-03-2022(online)].pdf 2022-03-02
7 202221011287-DECLARATION OF INVENTORSHIP (FORM 5) [02-03-2022(online)].pdf 2022-03-02
8 202221011287-COMPLETE SPECIFICATION [02-03-2022(online)].pdf 2022-03-02
9 202221011287-FORM-26 [22-06-2022(online)].pdf 2022-06-22
10 Abstract1.jpg 2022-07-04
11 202221011287-Proof of Right [19-07-2022(online)].pdf 2022-07-19
12 202221011287-FER.pdf 2025-03-13
13 202221011287-FER_SER_REPLY [22-08-2025(online)].pdf 2025-08-22
14 202221011287-CLAIMS [22-08-2025(online)].pdf 2025-08-22

Search Strategy

1 searchE_18-06-2024.pdf