Abstract: The embodiments herein provide a method and system for obtaining mathematical expressions for a system of PDEs. Herein, a trained PINN is used to generate a dataset upon which Symbolic Regression (SR) is performed. The SR is the task of generating a mathematical expression that best fits a given dataset. The SR helps to understand underlying relationships and patterns in data, with applications in scientific discovery, engineering design, and financial forecasting. The SR reduces the complexity of models and provides interpretable solutions, thereby improving the transparency and accountability of artificial intelligence (AI) systems. Further, a differentiable program architecture (DPA) is used for performing SR over the data points generated by the PINNs. It is to be noted that the transparency of the symbolic expressions is improved by pruning the DPA in a depth-first manner, using the magnitude of the weights as the heuristic. The pruning allows obtaining sparser representations for PDEs that are easily interpretable.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYMBOLIC REGRESSION FOR PARTIAL DIFFERENTIAL EQUATIONS USING A PRUNED DIFFERENTIAL PROGRAM
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional patent application number 202321014236, filed on March 02, 2023. The entire content of the abovementioned application is incorporated herein by reference.
TECHNICAL FIELD
The disclosure herein generally relates to a field of computer systems, more particularly, to techniques for modeling, simulation, and problem solving using a computer system.
BACKGROUND
A computer system is used for performing a variety of tasks. The computer system may execute machine instructions, as may be generated, to perform modeling, simulation, and problem-solving tasks. One technique that can be used in connection with modeling a particular system is to represent various physical aspects of the system in terms of equations or other types of quantifications. In turn, these equations may be solved using the computer system for one or more variables.
Further, an automatic technique can be used for combining one or more systems such that the combination of the systems together can be modeled and accordingly represented in terms of combined physical quantities and equations. Furthermore, different representations of the equations that model the physical quantities of a particular system can allow different techniques to be utilized in connection with solving for the system of equations in a singular or combined system.
It may be advantageous and desirable to work with systems of partial differential equations (PDEs) having multiple geometries and also to provide an efficient and flexible arrangement for defining various couplings between the partial differential equations within a single geometry as well as between different geometries. Physics-Informed Neural Networks (PINNs) have been widely used to obtain accurate neural surrogates for systems of PDEs. One of the major limitations of PINNs is that the neural solutions are challenging to interpret and are often treated as black-box solvers.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor-implemented method for modeling, simulation, and problem solving using a computer system is provided. The processor-implemented method comprises receiving, via an Input/Output (I/O) interface, one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs) and pre-processing the received one or more user-defined input-output variables into a predefined homogenous standard format. Further, pointwise data map statistics are generated based on the predefined homogenous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver. Furthermore, the processor-implemented method comprises training one or more weights of a Differentiable Program Architecture (DPA) to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression. Further, the processor-implemented method includes removing one or more insignificant weights of the trained DPA using a depth-first search with the magnitude of the absolute value of one or more weights as a heuristic, and re-training the one or more weights of the DPA after removal of the one or more insignificant weights to obtain a sparser representation of the relationship between the pointwise data map statistics and the one or more user-defined output variables. Finally, the obtained sparser representation is evaluated to validate the relationship between the user-defined input-output variables.
In another aspect, a system for modeling, simulation, and problem solving using a computer system is provided. The system comprises a memory storing a plurality of instructions, one or more Input/Output (I/O) interfaces, and one or more hardware processors, wherein the one or more hardware processors are configured to receive one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs) and to pre-process the received one or more user-defined input-output variables into a predefined homogenous standard format. Further, the one or more hardware processors are configured to generate pointwise data map statistics based on the predefined homogenous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver. A Differentiable Program Architecture (DPA) is trained to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression. Further, the one or more hardware processors are configured to remove one or more insignificant weights of the trained DPA using a depth-first search with the magnitude of the absolute value of one or more weights as a heuristic, and to re-train the one or more weights of the DPA after removal of the one or more insignificant weights to obtain a sparser representation of the relationship between the pointwise data map statistics and the one or more user-defined output variables. Finally, the obtained sparser representation is evaluated to validate the relationship between the user-defined input-output variables.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors cause a method for modeling, simulation, and problem solving using a computer system to be performed. The method comprises receiving, via an Input/Output (I/O) interface, one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs) and pre-processing the received one or more user-defined input-output variables into a predefined homogenous standard format. Further, pointwise data map statistics are generated based on the predefined homogenous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver. Furthermore, the method comprises training one or more weights of a Differentiable Program Architecture (DPA) to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression. Further, the method includes removing one or more insignificant weights of the trained DPA using a depth-first search with the magnitude of the absolute value of one or more weights as a heuristic, and re-training the one or more weights of the DPA after removal of the one or more insignificant weights to obtain a sparser representation of the relationship between the pointwise data map statistics and the one or more user-defined output variables. Finally, the obtained sparser representation is evaluated to validate the relationship between the user-defined input-output variables.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
FIG. 1 illustrates a system to execute one or more instructions of a method for obtaining mathematical expressions for a system of partial differential equations (PDEs), in accordance with some embodiments of the present disclosure.
FIG. 2 is a flow diagram to illustrate a processor-implemented method for obtaining mathematical expressions for a system of PDEs, in accordance with some embodiments of the present disclosure.
FIG. 3 is a schematic diagram to represent a top-view of an Air-Preheater (APH), in accordance with some embodiments of the present disclosure.
FIG. 4(a), 4(b), 4(c), 4(d), and 4(e), collectively read as FIG. 4, are program derivation graphs of a DPA of depth 2 with sin, exp, log as operators, and x, y, c as leaf nodes, in accordance with some embodiments of the present disclosure.
FIG. 5 is a block diagram to illustrate a relationship extraction unit, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments.
Historically, Symbolic Regression (SR) has been attempted using genetic programming methods, purely deep-learning methods such as sequence generation and tree search, and combinations of both deep-learning and genetic programming methods. While SR has been applied for PDE equation discovery using genetic programming, fast function extraction, replacing activation functions of neural networks (NNs) with primitive functions, and sequence-to-sequence equation generation using transformers, very few works attempt to directly model the final analytical solution of the governing PDE.
The embodiments herein provide a method and system for obtaining mathematical expressions for a system of PDEs. Herein, a trained PINN is used to generate a dataset upon which SR is performed. The SR is the task of generating a mathematical expression that best fits a given dataset. The SR helps to understand underlying relationships and patterns in data, with applications in scientific discovery, engineering design, and financial forecasting. The SR reduces the complexity of models and provides interpretable solutions, thereby improving the transparency and accountability of artificial intelligence (AI) systems.
Further, a differentiable program architecture (DPA) is used for performing the SR over the data points generated by the PINNs. It is to be noted that the transparency of the symbolic expressions is improved by pruning the DPA in a depth-first manner, using the magnitude of the weights as the heuristic. The pruning allows obtaining sparser representations for the PDEs that are easily interpretable.
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 5, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates a network diagram of a system 100 for executing one or more instructions of a method for obtaining mathematical expressions for systems of PDEs. Although the present disclosure is explained considering that the system 100 is implemented on a server, it may also be present elsewhere, such as on a local machine. It may be understood that the system 100 comprises one or more computing devices 102, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment, and the like. It will be understood that the system 100 may be accessed through one or more input/output interfaces 104-1, 104-2... 104-N, collectively referred to as I/O interface 104. Examples of the I/O interface 104 may include, but are not limited to, a user interface, a portable computer, a personal digital assistant, a handheld device, a smartphone, a tablet computer, a workstation, and the like. The I/O interface 104 is communicatively coupled to the system 100 through a network 106.
In an embodiment, the network 106 may be a wireless or a wired network, or a combination thereof. In an example, the network 106 can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 106 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, and storage devices. The network devices within the network 106 may interact with the system 100 through communication links.
The system 100 may be implemented in a workstation, a mainframe computer, a server, and a network server. In an embodiment, the computing device 102 further comprises one or more hardware processors 108, one or more memory 110, hereinafter referred to as a memory 110, and a data repository 112, for example, a repository 112. The data repository 112 may also be referred to as a dynamic knowledge base 112 or a knowledge base 112. The data repository 112 may include a plurality of abstracted pieces of code for refinement and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s). Although the data repository 112 is shown external to the system 100, it will be noted that, in alternate embodiments, the data repository 112 can also be implemented internal to the system 100 and communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the database and/or existing data may be modified and/or non-useful data may be deleted from the database.
The memory 110 is in communication with the one or more hardware processors 108, wherein the one or more hardware processors 108 are configured to execute programmed instructions stored in the memory 110, to perform various functions as explained in the later part of the disclosure. The repository 112 may store data processed, received, and generated by the system 100. The memory 110 further comprises a plurality of modules (not shown). Further, the plurality of modules can be implemented by hardware, by computer-readable instructions executed by the one or more hardware processors 108, or by a combination thereof.
The system 100 supports various connectivity options such as BLUETOOTH®, Universal Serial Bus (USB), ZigBee, and other cellular services. The network environment enables connection of various components of the system 100 using any communication link, including the Internet, WAN, LAN, and so on. In an exemplary embodiment, the system 100 is implemented to operate as a stand-alone device. In another embodiment, the system 100 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 100 are described further in detail. Functions of the components of the system 100 are now explained with reference to FIG. 3 through the steps of the flow diagram in FIG. 2.
FIG. 2 is an exemplary flow diagram illustrating a processor-implemented method 200 for obtaining mathematical expressions for systems of PDEs implemented by the system 100 of FIG. 1, according to some embodiments of the present disclosure. Initially, at step 202 of the processor-implemented method 200, the one or more hardware processors 108 are configured by the programmed instructions to receive, via an input/output interface, one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs).
As an example, in an air-preheater (APH), temperature information from different locations (φ, z) shown in the computational domain is obtained from different sensors. All the sensors have temperature information unique to a specified location, while a data generation unit requires all the temperature information in one single file, along with the sensor locations. This information needs to be integrated together before being passed to the data generation unit.
It is to be noted that the APH is a component of an industrial unit which acts like a heat exchanger and is governed by chemical reactions. Components are damaged due to excess heat exposure; thus, timely maintenance is necessary. The maintenance is conducted using temperature distributions in the interior of the equipment. Since sensors are expensive and not resistant to excessive heat damage, the sensors cannot be placed in the interior of the APH. Thus, a mathematical expression of the temperature distribution helps an operator understand the underlying dynamics of the APH, allowing proactive measures to be taken. This helps in cost control and timely maintenance of the equipment.
FIG. 3, a schematic diagram 300, is a top view of the APH, in accordance with some embodiments of the present disclosure. For the APH, the inputs are the governing equations, initial temperature conditions (T, in1), (T, in2), etc., the device specifications (beta1, beta2, beta3, height (H), Pe, Re), and the chemical compounds involved in the reaction. Herein, gas flows from the top to the bottom of the APH, while air (primary air and secondary air) flows from the bottom to the top. The cylinder keeps rotating, which defines the degree portion of gas (beta1), primary air (beta2), and secondary air (beta3). Additional specifications include the height (H) of the APH and the rotation speed of the APH (omega). The governing equations of the APH are –
∂T_{m_j}/∂φ = NTU_{m_j}(T_j − T_{m_j}) + (1/Pe_{m_j}) ∂²T_{m_j}/∂z²    (1)
∂T_j/∂z = NTU_{m_j}(T_{m_j} − T_j),  j = 1, 2, 3    (2)
T_j(φ, z=0) = T_{in,j},  j = 1, 2, 3    (3)
T_{m_1}(φ=0, z) = T_{m_3}(φ=1, 1−z)    (4)
T_{m_1}(φ=1, z) = T_{m_2}(φ=0, 1−z)    (5)
T_{m_2}(φ=1, z) = T_{m_3}(φ=0, z)    (6)
∂T_{m_j}/∂z |_{z=0,1} = 0,  j = 1, 2, 3    (7)
wherein equation (1) represents conduction while equation (2) represents convection heat transfer within the APH. There are six outputs of the APH system of PDEs: three fluid temperatures (T) and three metal temperatures (Tm) for given coordinates (φ, z). NTU and Pe stand for the number of transfer units and the Peclet number, respectively. The boundary conditions are imposed by the gas inlet temperature (Tin,1), primary air inlet temperature (Tin,2), and secondary air inlet temperature (Tin,3) in equation (3). Equations (4)-(7) impose continuity constraints on the metal temperature.
At the next step 204 of the processor-implemented method 200, the one or more hardware processors 108 are configured by the programmed instructions to pre-process the received one or more user-defined input-output variables into a predefined homogenous standard format.
In one embodiment, a partial differential equation (PDE) solver is used to solve a given PDE setup. The PDE solver provides a solution for the differential equation specified by a user. All the user-defined data information is preprocessed into a predefined homogenous standard format (e.g., sensor data from different locations (φ, z)) for smooth integration into a data generation unit. The PDE solver accepts information in a predefined format.
At the next step 206 of the processor-implemented method 200, the one or more hardware processors 108 are configured by the programmed instructions to generate pointwise data map statistics based on the predefined homogenous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver. Further, a dataset is prepared by generating input-output data points using a finite difference method or a pre-trained Physics-Informed Neural Network (PINN). The PINN solves the input governing equations and provides pointwise data map statistics ((φ, z) → (T, Tm)) on the user-defined input-output variables.
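As an illustrative sketch of such a data generation step, an explicit finite-difference solver can produce the pointwise input-output map. The example below uses a simple one-dimensional diffusion equation rather than the APH system, and the grid sizes and initial condition are assumptions chosen purely for illustration:

```python
import math

# Illustrative sketch (not the APH system): generate pointwise data
# (x, t) -> u for the 1-D diffusion equation u_t = u_xx on [0, 1] with
# u = 0 at both boundaries, using an explicit finite-difference scheme.
def generate_diffusion_dataset(nx=21, nt=200, dt=5e-4):
    dx = 1.0 / (nx - 1)
    u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial condition
    data = []
    for n in range(nt):
        new_u = list(u)
        for i in range(1, nx - 1):  # update interior points only
            new_u[i] = u[i] + dt * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
        u = new_u
        for i in range(nx):
            data.append(((i * dx, (n + 1) * dt), u[i]))  # pointwise map
    return data

data = generate_diffusion_dataset()
```

The chosen step sizes satisfy the stability condition dt/dx² ≤ 1/2 of the explicit scheme; the resulting pairs play the role of the pointwise data map statistics on which the DPA is subsequently trained.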
At the next step 208 of the processor-implemented method 200, the one or more hardware processors 108 are configured by the programmed instructions to train one or more weights of a Differentiable Program Architecture (DPA) to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression. The DPA is trained based on the choice of optimizer, the number of iterations, the learning rate of the optimizer, and the weights of the edges of the tree. Further, a Symbolic Regression (SR) is performed using the DPA.
For a given problem, during the training stage, an expression generation unit defines the complexity of the expression (the depth of the DPA) based on features (operators such as sin, exp, add, etc.) predefined by the user. It would be appreciated that the training procedure for the DPA is similar to that of a machine learning model. The parameters (the values of the edges of the tree below) are randomly initialized, and a popular optimization algorithm (e.g., Adam) is used to iteratively improve the weights such that the final program architecture accurately fits the data obtained from the data generation unit.
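The training procedure above can be sketched as follows. This is a minimal, hedged illustration: the depth-1 program template w0·sin(w1·x), the target data, and plain gradient descent with central-difference gradients are assumptions standing in for the full DPA and the Adam optimizer:

```python
import math

# Minimal sketch of DPA weight training: a depth-1 program template
# w0*sin(w1*x) whose two edge weights are fitted by gradient descent.
def program(w, x):
    return w[0] * math.sin(w[1] * x)

def mse(w, data):
    return sum((program(w, x) - y) ** 2 for x, y in data) / len(data)

def train(data, steps=2000, lr=0.05, h=1e-6):
    w = [0.5, 0.8]  # initial edge weights
    for _ in range(steps):
        grad = []
        for i in range(len(w)):  # central-difference gradient estimate
            wp, wm = list(w), list(w)
            wp[i] += h
            wm[i] -= h
            grad.append((mse(wp, data) - mse(wm, data)) / (2 * h))
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Data sampled from the target expression y = 2*sin(x); training should
# drive the edge weights toward w0 = 2 and w1 = 1.
data = [(i / 10, 2 * math.sin(i / 10)) for i in range(-20, 21)]
w = train(data)
```

In practice, the DPA has many more edges and an automatic-differentiation framework supplies exact gradients; the sketch only shows how edge weights of a program tree can be fitted to the generated data.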
FIG. 4(a), 4(b), 4(c), 4(d) and 4(e) are program derivation graphs of a DPA of depth 2 with sin, exp, log as operators, and x, y, c as leaf nodes, in accordance with some embodiments of the present disclosure. Herein, the weights of the DPA implicitly represent a symbolic expression. In FIG. 4(a), starting from the root, the system 100 recursively selects the node with the minimum magnitude until it encounters a leaf. The path followed is root→exp→log→1. Further, the system 100 prunes the leaf and, assuming that after fine-tuning the loss of the newer architecture is at par with the original, the system 100 accepts the prune.
Further, the system 100 follows the path root→exp→log, selects node x, and tests it for pruning. Assume the pruning is accepted; the resulting tree is shown in FIG. 4(b). Furthermore, the system 100 follows root→exp→log→y and tests node y. Assume that the fine-tuned weight performance is worse; hence root→exp→log→y is not prunable, as shown in FIG. 4(c), and the system 100 recurses back and continues recursively in a depth-first manner as shown in FIG. 4(d). Finally, assume that after all the nodes are visited, the DPA obtained is as shown in FIG. 4(e). The translated symbolic expression for the DPA is 0.16exp(-1.37sin(0.39x)-0.05log(2.14y)).
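The translated expression can be evaluated directly as a closed-form function; the numeric weights below are exactly those quoted in the description:

```python
import math

# Closed-form expression read off from the pruned DPA of FIG. 4(e);
# the numeric weights are those quoted in the description above.
def pruned_expression(x, y):
    return 0.16 * math.exp(-1.37 * math.sin(0.39 * x)
                           - 0.05 * math.log(2.14 * y))

value = pruned_expression(1.0, 1.0)
```

Because log(2.14·y) appears, the expression is defined only for y > 0; such domain constraints carry over directly from the operators retained in the pruned DPA.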
At the next step 212, of the processor-implemented method 200, the one or more hardware processors 108 are configured by the programmed instructions to select one or more insignificant weights of the trained expression and remove them using a depth-first search with the magnitude of the absolute value of the weights as the heuristic. After removal of the insignificant weights, the system 100 re-trains the weights of the DPA using the training specifications mentioned in an expression generation unit to obtain a sparser representation of the relationship.
FIG. 5 is a block diagram to illustrate a relationship extraction unit, in accordance with some embodiments of the present disclosure. In the relationship extraction unit, after obtaining the trained expression from the DPA, the system 100 selects and removes the most insignificant weights of the trained expression using a depth-first search, with the magnitude of the absolute value of the weights as the criterion. The selection of a weight stops if it is a leaf of the DPA.
Finally, at the last step 214 of the processor-implemented method 200, after re-training the one or more weights of the DPA, the one or more hardware processors 108 are configured by the programmed instructions to evaluate the obtained sparser representation to validate the relationship between the user-defined input-output variables. If the symbolic expression is more accurate than, or as accurate as, the symbolic expression before pruning, the new symbolic expression and the updated parameters of the DPA are accepted; otherwise, they are rejected. This process repeats until no further components can be removed. The final expression obtained provides an interpretable and easy-to-understand relationship between the user-defined inputs and outputs.
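The evaluate, accept-or-reject cycle can be sketched as a simplified greedy loop. This is a hedged illustration: the disclosure prunes depth-first through the DPA tree, whereas the sketch below works on a flat vector of edge weights, with `loss` and `retrain` as hypothetical stand-ins for the validation and fine-tuning procedures:

```python
# Simplified sketch of the prune / re-train / accept-or-reject cycle.
# `loss` and `retrain` are hypothetical stand-ins; the DPA is reduced
# to a flat vector of edge weights purely for illustration.
def iterative_prune(weights, loss, retrain, tol=1e-9):
    weights = list(weights)
    baseline = loss(weights)
    while True:
        live = [i for i, w in enumerate(weights) if w != 0.0]
        if not live:
            break
        # heuristic: candidate is the smallest-magnitude remaining weight
        i = min(live, key=lambda j: abs(weights[j]))
        trial = list(weights)
        trial[i] = 0.0            # prune the edge
        trial = retrain(trial)    # fine-tune the surviving weights
        if loss(trial) <= baseline + tol:
            weights, baseline = trial, loss(trial)  # accept the prune
        else:
            break                 # reject; keep the sparser model so far
    return weights

# Toy check: two near-zero weights are pruned, significant ones survive.
true_w = [2.0, 0.0, -3.0, 0.0]
loss = lambda w: sum(abs(a - b) for a, b in zip(w, true_w))
retrain = lambda w: w  # stub: no fine-tuning in this toy setting
pruned = iterative_prune([2.0, 0.001, -3.0, 0.0005], loss, retrain)
```

The accept rule mirrors the description above: a prune is kept only if the re-trained, sparser model is at least as accurate as the model before pruning.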
The solution expression provides a mapping between temperature and location at every interior point of the device. The plant operator takes these temperatures and checks which locations are overheated and which are not. If a critical area is overheated, the operator concludes that the device has to be sent for maintenance. If the expression were too complicated, or if no expression were available, the plant operator would have a significantly more difficult time taking the corresponding action decisions based on the complicated physics. Thus, this leads to easier and more efficient functioning, easier interpretation of complex phenomena, more human-in-the-loop activities, and prevention of hazards due to incorrect decisions arising from the limited knowledge of the operators.
Experiment:
Five systems of PDEs are considered: a diffusion equation, Navier-Stokes: Kovasznay flow, Navier-Stokes: Taylor-Green vortex equation, a diffusion reaction equation, and two-dimensional conjugate heat transfer in the APH. The diffusion reaction PDEs are important for modelling chemical reactions, wherein new chemical products are formed, and diffusion, wherein matter is transferred over a domain. Kovasznay flow is a two-dimensional steady-state Navier-Stokes equation with a Reynolds number of 20. Taylor-Green vortex flow is a two-dimensional unsteady Navier-Stokes equation with viscosity ν = 0.01. For both Kovasznay flow and Taylor-Green vortex, boundary conditions are sampled from the ground-truth analytical solutions.
The APH is a heat exchanger deployed in thermal power plants to improve thermal efficiency. Monitoring of the internal temperature profiles of the APH is important to avoid failures, which arise due to complex thermal and chemical phenomena. The reference solution of the APH is derived using the finite-difference method and does not have a ground-truth analytical solution. Inspection of internal temperature profiles can significantly benefit from symbolic representations, in contrast to NNs, due to improved interpretability.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The embodiments of the present disclosure herein address one of the major limitations of PINNs, namely that the neural solutions are challenging to interpret and are often treated as black-box solvers. While Symbolic Regression (SR) has been studied extensively, very few works exist which generate analytical expressions by directly performing SR for a system of PDEs. Herein, an end-to-end framework is introduced for obtaining mathematical expressions for solutions of PDEs. A trained PINN is used to generate a dataset upon which SR is performed. A differentiable program architecture defined using a context-free grammar is used to describe the search space of the symbolic regression.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including, e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be, e.g., hardware means such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purpose of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
CLAIMS:
1. A processor-implemented method (200) comprising:
receiving (202), via an Input/Output (I/O) interface, one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs);
pre-processing (204), via one or more hardware processors, the received one or more user-defined input-output variables into a predefined homogeneous standard format;
generating (206), via the one or more hardware processors, a pointwise data map statistics based on the predefined homogeneous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver;
training (208), via the one or more hardware processors, one or more weights of a Differentiable Program Architecture (DPA) to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression;
removing (210), via the one or more hardware processors, one or more insignificant weights of the trained DPA using a depth-first search with the magnitude of the absolute value of the one or more weights as a heuristic;
re-training (212), via the one or more hardware processors, the one or more weights of the DPA after removal of the one or more insignificant weights to obtain a sparser representation of the relationship between the pointwise data map statistics and one or more user-defined output variables; and
evaluating (214), via the one or more hardware processors, the obtained sparser representation to validate the relationship between the user-defined input-output variables.
2. The processor-implemented method (200) as claimed in claim 1, wherein the user specifies one or more features of a relationship extraction unit.
3. The processor-implemented method (200) as claimed in claim 1, wherein the predefined numerical solver includes a finite difference technique and Physics Informed Neural Networks (PINNs).
4. The processor-implemented method (200) as claimed in claim 1, wherein one or more weights of the DPA are randomly initialized, and a predefined optimization technique is used to iteratively improve the weights.
5. The processor-implemented method (200) as claimed in claim 1, wherein the depth of the mathematical expression is predefined by the user.
6. A system (100), comprising:
an input/output interface (104) to receive one or more user-defined input-output variables to model an analytical solution of one or more governing Partial Differential Equations (PDEs);
one or more hardware processors (108); and
a memory (110) in communication with the one or more hardware processors (108), wherein the one or more hardware processors (108) are configured to execute programmed instructions stored in the memory (110) to:
pre-process the received one or more user-defined input-output variables into a predefined homogeneous standard format;
generate a pointwise data map statistics based on the predefined homogeneous standard format associated with the user-defined input-output variables by solving one or more specified governing equations using a predefined numerical solver;
train one or more weights of a Differentiable Program Architecture (DPA) to identify dependencies and relationships between the pointwise data map statistics and the one or more user-defined output variables for generating a mathematical expression;
remove one or more insignificant weights of the trained DPA using a depth-first search with the magnitude of the absolute value of the one or more weights as a heuristic;
re-train the one or more weights of the DPA after removal of the one or more insignificant weights to obtain a sparser representation of the relationship between the pointwise data map statistics and one or more user-defined output variables; and
evaluate the obtained sparser representation to validate the relationship between the user-defined input-output variables.
7. The system (100) as claimed in claim 6, wherein the user specifies one or more features of a relationship extraction unit.
8. The system (100) as claimed in claim 6, wherein the predefined numerical solver includes a finite difference technique and Physics Informed Neural Networks (PINNs).
9. The system (100) as claimed in claim 6, wherein one or more weights of the DPA are randomly initialized, and a predefined optimization technique is used to iteratively improve the weights.
10. The system (100) as claimed in claim 6, wherein the depth of the mathematical expression is predefined by the user.
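The pruning and evaluation steps recited in the claims (removing insignificant weights via a depth-first traversal with weight magnitude as the heuristic, steps 210-214) can be illustrated with a minimal sketch. This is not the claimed implementation: the `Node` tree, the `prune_dfs` helper, and the fixed threshold are all illustrative assumptions; an actual DPA would carry many trainable weights per node and would be re-trained (step 212) after pruning.

```python
import numpy as np

class Node:
    """Toy differentiable-program node: a weighted operation over
    child subtrees (illustrative stand-in for a DPA node)."""
    def __init__(self, op, children=None, weight=1.0):
        self.op = op                  # callable on the input (leaf) or on child outputs
        self.children = children or []
        self.weight = weight          # trainable scalar gating this subtree
        self.pruned = False

    def forward(self, x):
        # A pruned subtree contributes nothing to the expression.
        if self.pruned:
            return 0.0
        if not self.children:
            return self.weight * self.op(x)
        return self.weight * self.op(*[c.forward(x) for c in self.children])

def prune_dfs(node, threshold):
    """Depth-first traversal that marks subtrees whose weight magnitude
    falls below the threshold as insignificant (claim step 210).
    The surviving weights would then be re-trained (claim step 212)."""
    if abs(node.weight) < threshold:
        node.pruned = True
        return
    for c in node.children:
        prune_dfs(c, threshold)

# Example: the DPA has learned roughly y = 2*x + 0.001*sin(x).
root = Node(lambda a, b: a + b,
            [Node(lambda x: x, weight=2.0),       # significant branch
             Node(np.sin, weight=0.001)])         # insignificant branch
prune_dfs(root, threshold=0.01)
# The sin branch is pruned, leaving the sparser expression y = 2*x.
print(root.forward(1.0))
```

After pruning, the sparser tree is re-evaluated against the pointwise data (step 214); here the pruned expression returns exactly `2*x`, since the `sin` branch was gated off.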
| # | Name | Date |
|---|---|---|
| 1 | 202321014236-STATEMENT OF UNDERTAKING (FORM 3) [02-03-2023(online)].pdf | 2023-03-02 |
| 2 | 202321014236-PROVISIONAL SPECIFICATION [02-03-2023(online)].pdf | 2023-03-02 |
| 3 | 202321014236-FORM 1 [02-03-2023(online)].pdf | 2023-03-02 |
| 4 | 202321014236-DRAWINGS [02-03-2023(online)].pdf | 2023-03-02 |
| 5 | 202321014236-DECLARATION OF INVENTORSHIP (FORM 5) [02-03-2023(online)].pdf | 2023-03-02 |
| 6 | 202321014236-Proof of Right [10-04-2023(online)].pdf | 2023-04-10 |
| 7 | 202321014236-FORM-26 [31-05-2023(online)].pdf | 2023-05-31 |
| 8 | 202321014236-FORM 3 [01-12-2023(online)].pdf | 2023-12-01 |
| 9 | 202321014236-ENDORSEMENT BY INVENTORS [01-12-2023(online)].pdf | 2023-12-01 |
| 10 | 202321014236-DRAWING [01-12-2023(online)].pdf | 2023-12-01 |
| 11 | 202321014236-COMPLETE SPECIFICATION [01-12-2023(online)].pdf | 2023-12-01 |
| 12 | 202321014236-FORM 18 [07-12-2023(online)].pdf | 2023-12-07 |
| 13 | Abstract1.jpg | 2024-03-07 |
| 14 | 202321014236-FER.pdf | 2025-11-04 |
| 15 | 202321014236_SearchStrategyNew_E_SearchReport202321014236E_16-09-2025.pdf | 2025-09-16 |