
An Approach For Solving Differential Equations And Variational Problems Based On Python Elvet NN System

Abstract: Elvet is a Python module that we developed to solve differential equations and variational problems using machine learning techniques. Elvet can handle any system of coupled ordinary or partial differential equations with arbitrary initial and boundary conditions. It can also minimize any functional that depends on a collection of functions of several variables while imposing constraints on them. In each of these problems, the solution is represented by a neural network trained to reproduce the required function.


Patent Information

Application #
Filing Date
01 June 2022
Publication Number
23/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
senanipindia@gmail.com
Parent Application

Applicants

1. Dr.A.Mythreye, Associate Professor/ Department of H&S, Stanley College of Engineering and Technology for Women.
Stanley College of Engineering and Technology for Women, Abids, Hyderabad, Telangana-500001.
2. Dr.B. Uma Maheswari, Professor / Department of Mathematics, ACE Engineering College.
ACE Engineering College, Ankushpur, Ghatkesar, Hyderabad, Telangana-501301
3. Dr.P. Prashanth kumar, Professor / Department of Mathematics, ACE Engineering College.
ACE Engineering College, Ankushpur, Ghatkesar, Hyderabad, Telangana-501301.
4. Dr. S. Vishwa Prasad Rao, Assistant professor / Department of Mathematics, Kakatiya Institute of Technology and Science.
Kakatiya Institute of Technology and Science, Warangal, Telangana-506015.
5. V V L Deepthi, Sr. Assistant Professor/ Department of H&S, CVR College of Engineering.
CVR College of Engineering, Ibrahimpatnam, Hyderabad, Telangana-501510.
6. Puchakayala Srinivasa Rao, Assistant Professor/ Department of H&S, Malla Reddy Engineering College (A).
Malla Reddy Engineering College (A), Dulapally, Maisammaguda, Hyderabad, Telangana-500100.
7. Dr. M. Sridevi, Assistant professor / Department of Mathematics, CMR College of Engineering & Technology.
CMR College of Engineering & Technology, Medchal, Hyderabad, Telangana-501401.
8. Dr. R. Srilatha, Assistant professor / Department of Mathematics, VNRVJIET.
VNRVJIET, Nizampet, Hyderabad, Telangana-500090.

Inventors

1. Dr.A.Mythreye, Associate Professor/ Department of H&S, Stanley College of Engineering and Technology for Women.
Stanley College of Engineering and Technology for Women, Abids, Hyderabad, Telangana-500001.
2. Dr.B. Uma Maheswari, Professor / Department of Mathematics, ACE Engineering College.
ACE Engineering College, Ankushpur, Ghatkesar, Hyderabad, Telangana-501301
3. Dr.P. Prashanth kumar, Professor / Department of Mathematics, ACE Engineering College.
ACE Engineering College, Ankushpur, Ghatkesar, Hyderabad, Telangana-501301.
4. Dr. S. Vishwa Prasad Rao, Assistant professor / Department of Mathematics, Kakatiya Institute of Technology and Science.
Kakatiya Institute of Technology and Science, Warangal, Telangana-506015.
5. V V L Deepthi, Sr. Assistant Professor/ Department of H&S, CVR College of Engineering.
CVR College of Engineering, Ibrahimpatnam, Hyderabad, Telangana-501510.
6. Puchakayala Srinivasa Rao, Assistant Professor/ Department of H&S, Malla Reddy Engineering College (A).
Malla Reddy Engineering College (A), Dulapally, Maisammaguda, Hyderabad, Telangana-500100.
7. Dr. M. Sridevi, Assistant professor / Department of Mathematics, CMR College of Engineering & Technology.
CMR College of Engineering & Technology, Medchal, Hyderabad, Telangana-501401.
8. Dr. R. Srilatha, Assistant professor / Department of Mathematics, VNRVJIET.
VNRVJIET, Nizampet, Hyderabad, Telangana-500090.

Specification

Description: An Approach for Solving Differential Equations and Variational Problems Based on Python Elvet NN System

Field and Background of the Invention
Differential equations and variational problems are used extensively in the mathematical description of complex systems across the quantitative sciences and engineering. Most methods for solving differential equations are based on partitioning the domain into a finite set of points and determining the values of the solution at each of them; the Runge-Kutta method, finite element analysis, and linear multi-step methods fall into this category. The Euler-Lagrange equations allow variational problems to be transformed into differential equations, which can then be solved by the same methods. However, this is not always possible or practical: the functional may not be the integral of a local function, and even when it is, deriving the Euler-Lagrange equations may be difficult. The incorporation of constraints, for instance, can significantly increase the difficulty of this task. It is therefore advantageous to solve variational problems by a more direct technique.
Approaches of this kind have been utilized in the study of discretized systems to calculate solitons or instantons, to determine the ground state of a quantum system with the assistance of a quantum computer, and to calculate complex spin-lattice systems. In this scenario, the conventional numerical procedure is to make an educated guess for the uncertain initial parameters of the Lagrange multipliers and then iteratively revise that guess based on the results obtained. More flexible methods have been suggested to remove the bias toward the initial guess, but their computational burden may still be significant.
Over the past couple of decades, technological advancements in computational hardware have boosted machine learning research. Machine learning is a powerful tool for image and speech classification and recognition, and the use of Neural Networks (NN) to solve differential equations has been demonstrated in mathematical modelling. NN-based approaches have advantages over the discretization methods stated above. With NNs, solutions can be calculated everywhere in the continuous domain, whereas the standard methods compute the solution only at a finite set of points of a discretized domain. The precision of a NN solution can also be increased by training for further iterations or by inserting additional instances in the training set; a tentative solution can thus be refined by computing its values at new points, rather than solving the problem from scratch as most discrete approaches require. NNs can be used to solve variational problems in the same way.
Brief description of the system
Numerous NN-based approaches for solving differential equations have been developed over time. In physics-informed neural networks (PINN), the objective function is a squared differential equation, through which the network learns an approximate solution. A limited-integration method using Galerkin methods in conjunction with NN techniques has also been presented to reduce the computational cost of optimizing the neural network, and the deep Galerkin method similarly uses recurrent neural networks. The dNNsolve approach has recently been introduced to solve oscillatory differential equations, using one oscillatory and one non-oscillatory network to approximate the solution. Each of these methods solves its intended class of problems, but all have a limited range of applications. Elvet's strategy, on the other hand, is not geared toward a particular goal: it aims at solving general differential equations and variational problems. While the default network architecture relies on fully connected networks, users can customize it to meet their own needs.
In machine learning, neural networks (NNs) are commonly employed because of their adaptability and effectiveness in regression and classification. A NN is a composition of layers, each applying a non-linear function to a vector input to produce a vector output. The simplest decomposition of a layer is an affine transformation followed by a non-linear activation:
L_i(x) = σ_i(W_i x + b_i)    (1)
Thus, a NN having n inputs and m outputs is a parameterized function f_θ: Rⁿ → Rᵐ, obtained by composing such layers; the dimension of the hidden layers is the width of the NN, and the number of layers is its depth. The Universal Approximation Theorem states that, provided either its width or its depth is large enough, a NN can approximate any continuous function with arbitrary precision.
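The layer decomposition just described can be sketched in NumPy (illustrative fixed weights, not Elvet code; training would adjust them):

```python
import numpy as np

# Minimal sketch of a fully connected network: a composition of layers,
# each an affine transformation followed by a non-linearity.
def layer(x, W, b, activation=np.tanh):
    return activation(W @ x + b)          # sigma(W x + b)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # hidden layer: width 4
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer

def network(x):
    """A NN with n = 2 inputs and m = 1 output, depth 2."""
    return layer(layer(x, W1, b1), W2, b2, activation=lambda z: z)

print(network(np.array([0.5, -0.5])).shape)  # (1,)
```

With enough width or depth, such a composition can approximate any continuous function, which is what the training procedure below exploits.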
Solving a machine learning problem with a neural network involves three steps:
• Selecting a network architecture appropriate to the problem.
• Finding a loss function whose value decreases as the network better approximates the solution.
• Reducing the loss by adjusting the network's parameters.
Summary of the Invention
An approximate solution to the given problem is obtained once the training procedure is complete. The loss function is commonly determined as a summation over training instances:
L(θ) = Σ_i ℓ(x_i; θ)    (2)

where for regression the per-instance loss can be taken as the squared error against the labelled value y_i,

ℓ(x_i; θ) = |f_θ(x_i) − y_i|²    (3)

Gradient descent algorithms such as the Adam method are typically used to train neural networks. Backpropagation is the procedure by which the gradients are obtained and the parameters are then updated:

θ → θ − η ∇_θ L(θ)    (4)

where η is the learning rate.
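The training procedure just described, a loss summed over training instances and minimized by gradient-descent parameter updates, can be sketched in plain NumPy (an illustration, not Elvet's code; the linear model and learning rate are chosen for simplicity):

```python
import numpy as np

# Train a tiny model f_theta(x) = theta[0] + theta[1] * x on labelled points
# by minimizing the summed squared error, with the update
# theta <- theta - eta * grad.
xs = np.linspace(0.0, 1.0, 20)
ys = 2.0 + 3.0 * xs                      # labelled targets

theta = np.zeros(2)
eta = 0.05                               # learning rate
for epoch in range(2000):
    resid = theta[0] + theta[1] * xs - ys
    # Gradient of L(theta) = sum_i (f(x_i) - y_i)^2, averaged over points
    grad = np.array([2.0 * resid.sum(), 2.0 * (resid * xs).sum()]) / len(xs)
    theta -= eta * grad

print(theta)  # approaches [2, 3]
```

Adam refines this plain update with per-parameter adaptive step sizes, but the structure of the loop is the same.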

The characteristic situation for a differential equation differs from ordinary regression: the values and derivatives of the function f at a set of labelled training points are not directly available to build the loss. Instead, the equation itself,

F(x, f(x), ∂f(x), ∂²f(x), …) = 0,    (5)

is used to define it.
For the constrained functional minimization problem, a system of differential equations is a degenerate instance in which the restrictions imposed by the equations and their boundary conditions are all that exist, with no functional to minimize. The problems can always be written as the minimization of

J[f] = ∫ dx ℒ(x, f(x), ∂f(x), …),    (6)

with ℒ being a local functional, subject to constraints of the form

G_k(x, f(x), ∂f(x), …) = 0.    (7)

The local loss density can then be written as

ℓ(x) = ℒ(x, f(x), ∂f(x)) + Σ_k η_k |G_k(x, f(x), ∂f(x))|²,    (8)

where the coefficients η_k weight the constraints.
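The construction of such a squared-constraint loss can be illustrated outside Elvet with a simple first-order equation (all names below are illustrative; a polynomial ansatz stands in for the NN, so the quadratic loss can be minimized exactly by linear least squares):

```python
import numpy as np

# Solve f'(x) + f(x) = 0 with f(0) = 1 (exact solution exp(-x)) using a
# degree-5 polynomial ansatz f(x) = sum_k a_k x^k. The loss is the mean
# squared equation residual plus a weighted boundary penalty:
#   mean(F^2) + eta * G^2,  F = f' + f,  G = f(0) - 1.
xs = np.linspace(0.0, 1.0, 50)
powers = np.arange(6)
X = xs[:, None] ** powers                                   # f(x_i)  = X  @ a
dX = np.zeros_like(X)
dX[:, 1:] = powers[1:] * xs[:, None] ** (powers[1:] - 1)    # f'(x_i) = dX @ a

eta = 10.0                               # weight of the boundary constraint
# Rows: equation residual at each point; last row: the boundary constraint.
A = np.vstack([(dX + X) / np.sqrt(len(xs)), np.sqrt(eta) * X[:1]])
b = np.concatenate([np.zeros(len(xs)), [np.sqrt(eta)]])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

err = np.max(np.abs(X @ a - np.exp(-xs)))
print(err)   # small; the polynomial closely tracks exp(-x)
```

With a NN ansatz the loss is no longer quadratic in the parameters, which is why gradient-descent training as in eq. (4) is used instead.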

For complex-valued equations, equating the real and imaginary parts to zero gives two real constraints,

Re G_k(x, f(x), …) = 0,   Im G_k(x, f(x), …) = 0.    (9)
Using the Minimizer class, one can solve the functional-minimization problem. The Solver class, which derives from Minimizer, defines the loss function for differential equations. The NN is trained for functional minimization using the Minimizer fit method. A single Minimizer instance contains the training points, the loss, and the NN. The fit method runs a set number of training steps, called epochs.
The loss function and its gradients must be evaluated efficiently. TensorFlow, a Python library, provides the means for their efficient computation on both CPUs and GPUs, and parallelizes every step of the calculation whenever possible. The tf.function decorator compiles a Python function into what is known as a static graph; once compiled, this graph takes much less time to evaluate. Because computing the gradients of the loss for a constrained functional optimization problem or a differential equation can be computationally expensive, this library has an advantage over others that serve the same purpose.
The optimization for each epoch is done by the step method, which is wrapped with tf.function. It first calls the core function derivative_stack, which produces a stack of all the function values and derivatives at the training points. It then computes the loss and its gradients and adjusts the NN parameters using an optimizer, which the user can select from various options. Summarizing the above conditions, one training step takes the form of eq. (10).

θ → θ − η ∇_θ Σ_i ℓ(x_i; θ)    (10)
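The class structure described above might be sketched as follows (hypothetical names mirroring the description, not Elvet's actual source; the toy equation keeps the example self-contained):

```python
import numpy as np

# A Minimizer holds the training points, model parameters, and loss, and
# fit() runs a fixed number of epochs; Solver derives from it and defines
# the loss as squared equation and boundary residuals.
class Minimizer:
    def __init__(self, points, n_params, lr=0.1):
        self.points = points              # training points, kept as state
        self.params = np.zeros(n_params)
        self.lr = lr

    def loss_and_grad(self):
        raise NotImplementedError         # supplied by subclasses

    def step(self):                       # one epoch: gradient-descent update
        _, grad = self.loss_and_grad()
        self.params -= self.lr * grad

    def fit(self, epochs):
        for _ in range(epochs):
            self.step()
        return self.params

class Solver(Minimizer):
    """Solve f'(x) = 1, f(0) = 0 with the linear ansatz f(x) = p0 + p1*x."""
    def loss_and_grad(self):
        p0, p1 = self.params
        resid = p1 - 1.0                  # equation residual f' - 1
        bc = p0                           # boundary residual f(0) - 0
        loss = resid**2 + bc**2
        grad = np.array([2.0 * bc, 2.0 * resid])
        return loss, grad

params = Solver(points=np.linspace(0, 1, 10), n_params=2).fit(epochs=200)
print(params)  # close to [0, 1], i.e. f(x) = x
```

In Elvet the analogous step is compiled with tf.function and the gradients are obtained by backpropagation rather than by hand.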
The stack of derivatives is computed by the function derivative_stack. The lower-order derivatives must be known to compute the higher-order ones, so if each derivative were computed independently, determining the nth derivative would take many times as many computations. We avoid this inefficiency by building all the derivatives in one pass, so that each derivative is computed exactly once. Each derivative is calculated using TensorFlow's GradientTape. Through the Minimizer functional attribute, the loss function is assumed to have the form of eq. (2), and the Minimizer fit and step methods have access to both the training points and the elements of the derivative stack. Python 3.6 or above is required. TensorFlow 2.4.0 or higher is installed as a dependency. Elvet ships with an internal plotting module based on Matplotlib; Elvet's capabilities other than plotting can be used without Matplotlib, which is therefore not required.
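The derivative-stack idea can be illustrated outside TensorFlow with polynomials (a sketch, not Elvet's implementation): each higher-order derivative is obtained from the previously computed one, so every order is computed exactly once instead of being rebuilt from scratch.

```python
import numpy as np

def derivative_stack(coeffs, xs, order):
    """Values of a polynomial and its derivatives up to `order` at points xs."""
    stack = []
    poly = np.polynomial.Polynomial(coeffs)
    for _ in range(order + 1):
        stack.append(poly(xs))
        poly = poly.deriv()        # reuse the previous order's polynomial
    return stack

xs = np.array([0.0, 1.0, 2.0])
# f(x) = 1 + 2x + 3x^2, so f' = 2 + 6x and f'' = 6
f, df, d2f = derivative_stack([1.0, 2.0, 3.0], xs, order=2)
print(f, df, d2f)   # f=[1, 6, 17], df=[2, 8, 14], d2f=[6, 6, 6]
```

In Elvet the same pattern is applied with GradientTape: the tape for order n differentiates the already-computed values of order n-1.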
[Code listings (11)-(13), giving the definition of the Schrodinger equation in Elvet, are not reproduced in the available text.]

As shown in this definition of the Schrodinger equation, the equation is a function having four inputs: the domain point, the NN output, and two derivatives of the NN with respect to the domain. The number of arguments the function takes determines the order of the equation, with the arguments arranged in increasing order of derivative. The NN's output is treated as a function that can be differentiated indefinitely. Line four defines the potential V and sets the energy E and the frequency as constants. Line five computes the function's final result. The second-order derivative d2 is represented by a Hessian matrix.
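A function of this shape can be sketched independently of Elvet (hypothetical signature and argument names; the harmonic-oscillator potential and its constants are chosen for illustration):

```python
import numpy as np

# The equation is a function of the domain point and of the solution with
# its first two derivatives, listed in increasing order of derivative.
E, omega = 0.5, 1.0                        # energy and frequency constants

def schrodinger(x, psi, d_psi, d2_psi):
    V = 0.5 * omega**2 * x**2              # harmonic-oscillator potential
    return -0.5 * d2_psi + (V - E) * psi   # residual; zero for a solution

# The ground state psi(x) = exp(-x^2 / 2) satisfies the equation with E = 1/2:
x = np.linspace(-3.0, 3.0, 7)
psi = np.exp(-0.5 * x**2)
d_psi = -x * psi
d2_psi = (x**2 - 1.0) * psi
print(np.max(np.abs(schrodinger(x, psi, d_psi, d2_psi))))  # ~0
```

During training the NN and its derivative stack are substituted for psi, d_psi, and d2_psi, and the squared residual becomes the loss density.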
[Listings (14)-(16) are not reproduced in the available text.]

The system is trained, e.g., for 60000 epochs, with the loss function in Eq. (4). In Fig. 1, the analytic solution of the Schrodinger equation (solid red curve) and Elvet's prediction (dashed blue line) are depicted in the top plot. To assess the accuracy of Elvet's prediction, the centre plot shows the square error, which is of order 10^(-5). Finally, the loss density, of order 10^(-4), is shown in the bottom plot as an additional indicator of the approximation quality. With a 3.1 GHz Dual-Core Intel Core i7 processor, this calculation takes .
Functional minimization
[Equation (17) is not reproduced in the available text.]
The learning rate is given by a polynomial decay schedule.
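A polynomial decay schedule, as commonly implemented (e.g. TensorFlow's PolynomialDecay; the parameter values below are illustrative), might look like:

```python
# The learning rate falls from lr0 to lr_end over `steps` training steps,
# following (lr0 - lr_end) * (1 - t)^power + lr_end with t = step / steps.
def polynomial_decay(step, lr0=0.01, lr_end=0.0001, steps=10000, power=1.0):
    t = min(step, steps) / steps
    return (lr0 - lr_end) * (1.0 - t) ** power + lr_end

print(polynomial_decay(0))       # ~0.01 at the start
print(polynomial_decay(10000))   # 0.0001 at the end
```

Decaying the learning rate lets training take large steps early on and settle precisely into the minimum later.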

Claims: We Claim
1. In contrast to typical numerical methods for solving differential equations and variational problems, NN-based approaches provide a continuous solution throughout the entire domain. The method we have developed is problem-agnostic: it can be applied to any problem involving general equations, constraints, boundary conditions, and functions.
2. Elvet's implementation of this approach is flexible and user-friendly. Complex problems can be solved this way instead of by standard methods, and Elvet's unified framework can be used without modification; a specialized approach may be preferable only for performance reasons.
3. Elvet's functional-minimization feature is an interface to the most fundamental operation a machine learning method performs: minimizing a loss function with respect to the model's internal parameters. It can therefore be used to investigate how machine learning may be applied to problems whose solutions are functions. Integral equations, for example, can be solved in Elvet by minimizing a functional defined by the square of the equation.
4. Elvet allows the user to control and personalize most of the training process. The Appendix contains a list of training-management tools. As long as they adhere to the predefined interface, Elvet allows custom implementations.
5. With Elvet, one is not limited to NNs: any model can be trained. Given a family of functions, such as polynomials, Hermite functions, or spherical harmonics, Elvet can identify the decomposition that most closely approximates a solution. The application of other machine learning models can be examined in the same way.

Documents

Application Documents

# Name Date
1 202241031513-COMPLETE SPECIFICATION [01-06-2022(online)].pdf 2022-06-01
2 202241031513-STATEMENT OF UNDERTAKING (FORM 3) [01-06-2022(online)].pdf 2022-06-01
3 202241031513-DECLARATION OF INVENTORSHIP (FORM 5) [01-06-2022(online)].pdf 2022-06-01
4 202241031513-REQUEST FOR EARLY PUBLICATION(FORM-9) [01-06-2022(online)].pdf 2022-06-01
5 202241031513-DRAWINGS [01-06-2022(online)].pdf 2022-06-01
6 202241031513-POWER OF AUTHORITY [01-06-2022(online)].pdf 2022-06-01
7 202241031513-FORM 1 [01-06-2022(online)].pdf 2022-06-01
8 202241031513-FORM-9 [01-06-2022(online)].pdf 2022-06-01