Abstract: The application of Generative Adversarial Networks (GANs) has surged over the past several years across a wide range of industries, and the use of GANs to create synthetic records has received considerable attention in the scientific community. Many studies aim to improve GANs and the data they generate. We use a Deep Convolutional Autoencoder so that the Generator can be trained with more accurate latent-space representations. To gauge the model's effectiveness, we show that our generative model can produce data that is statistically close to the input dataset, and we compare the fabricated dataset to the original. The proposed GAN is capable of creating realistic Electronic Health Records (EHRs). 4 Claims & 3 Figures
Description: Field of Invention
Ever since they were first introduced in 2014, Generative Adversarial Networks (GANs) have grown into one of the most well-known generative models. Numerous studies examine the effectiveness of GANs in generating data; some are concerned with the privacy implications of data collection, while others seek data-driven solutions to the current data scarcity. The generative model we propose uses GANs. Our work closely follows medGAN, a previous effort in which EHR data is generated by means of GANs; their work uses a multilayer perceptron neural network and produces good results. Using Convolutional Neural Networks, we aim to improve their model by capturing spatial features of the data. Mode collapse is a problem that impedes the training of GANs: when the generator collapses, it keeps producing the same sample throughout a minibatch. Minibatch Discrimination (MD) is one of the proposed methods for preventing mode collapse; in MD, the discriminator penalises the generator if it produces a batch with low entropy. The authors of medGAN also introduced Minibatch Averaging (MA) as an alternative to MD.
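As a minimal sketch of the minibatch averaging idea described above (the helper name and tensor shapes are illustrative assumptions, not part of the original disclosure), the discriminator can be fed each record concatenated with the minibatch mean, so a collapsed, low-diversity batch becomes easy to detect:

```python
import torch

def minibatch_average(batch: torch.Tensor) -> torch.Tensor:
    """Append the minibatch mean to every record before it reaches the
    discriminator, so a collapsed (low-diversity) batch is easy to detect.

    batch: (B, F) tensor of real or generated records.
    Returns a (B, 2*F) tensor for the discriminator.
    """
    mean = batch.mean(dim=0, keepdim=True)                    # (1, F) batch statistics
    return torch.cat([batch, mean.expand_as(batch)], dim=1)   # (B, 2F)
```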
Background of the Invention
To begin, we review what neural networks are and how they work. The term "neural network" is used to describe a wide variety of learning models; here it refers to a multi-layer back-propagation neural network, also known as a multilayer perceptron. To get a sense of the network, we first describe a single-layer back-propagation network (US10867597B2). Nodes in a neural network are commonly referred to as "neurons." The single-layer model depicted in the following diagram has three rows of nodes. The first is the input layer, denoted X1, ..., Xp. The second layer, denoted Z1, Z2, ..., ZM, is the hidden layer. The third layer, denoted Y1, ..., YK, is the output layer.
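A minimal sketch of such a single-hidden-layer network in PyTorch, assuming illustrative sizes p, M, and K (the values and the sigmoid activation are placeholders, not taken from the specification):

```python
import torch
import torch.nn as nn

# p input nodes (X1..Xp), M hidden nodes (Z1..ZM), K output nodes (Y1..YK).
p, M, K = 64, 32, 10

single_layer_net = nn.Sequential(
    nn.Linear(p, M),   # input layer X -> hidden layer Z
    nn.Sigmoid(),      # hidden-layer activation
    nn.Linear(M, K),   # hidden layer Z -> output layer Y
)

x = torch.randn(8, p)      # a batch of 8 input vectors
y = single_layer_net(x)    # output of shape (8, K)
```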
In a GAN, a generator and a discriminator are coupled to form the network. In each cycle, the generator creates new samples and passes them to the discriminator. The discriminator searches for patterns learned from the input dataset to decide whether an item is a fake or a genuine sample (US10460235B1). During training, the generator produces progressively better samples intended to deceive the discriminator, while the discriminator attempts to distinguish true samples from the better-but-still-fake ones. As this process continues, the discriminator eventually becomes unable to distinguish generated samples from genuine ones. Although GANs are extremely powerful, they have weaknesses in certain areas; in particular, they struggle to generate high-quality discrete data (US10592779B2). The healthcare industry generates a large volume of discrete data, which makes this a difficult problem. Because backpropagation does not handle discrete outputs well, it is either impossible or highly difficult to train GANs directly on discrete data. To deal with this, we employed the medGAN architecture to construct our model.
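The following is a hedged sketch of how such an autoencoder-assisted setup might look in PyTorch; the 1-D convolutional encoder, the layer sizes, and the pretraining loop are illustrative assumptions rather than the exact architecture of the invention:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Sketch of a 1-D convolutional autoencoder pretrained on binary EHR
    feature vectors, so the GAN can later be trained in the continuous
    latent space. Feature count and channel sizes are illustrative and
    assume n_features is divisible by 4."""

    def __init__(self, n_features: int = 1024, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_features // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, n_features),
            nn.Sigmoid(),   # probabilities for binary medical codes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x.unsqueeze(1))   # add a channel dimension for Conv1d
        return self.decoder(z)             # reconstruction of shape (B, n_features)

# Reconstruction pretraining, assuming a DataLoader `loader` of binary EHR rows.
ae = ConvAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
bce = nn.BCELoss()
# for x in loader:
#     opt.zero_grad()
#     loss = bce(ae(x), x)
#     loss.backward()
#     opt.step()
```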
The EHR data is denoted x, which represents a valid (real) input. The generator G draws noise z from a random distribution in order to generate data. The decoder Dec of our autoencoder model translates continuous data from the latent space to discrete data in the input space (US10726587B2). The discriminator D determines whether the authentic data x and the synthetic data Dec(G(z)) can be separated from one another. Generally speaking, GANs do not perform well when dealing with discrete data, and researchers have proposed a few solutions to this limitation. The use of an autoencoder allows us to train the model in a continuous space before mapping the result back to the input space.
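A minimal sketch of this pipeline, assuming the decoder `dec` comes from a pretrained autoencoder as sketched above; the noise dimension, layer sizes, and activations are illustrative assumptions:

```python
import torch
import torch.nn as nn

noise_dim, latent_dim, n_features = 128, 128, 1024

# Generator G: random noise z -> a point in the autoencoder's latent space.
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))

# Discriminator D: real record x or decoded fake Dec(G(z)) -> probability of "real".
D = nn.Sequential(nn.Linear(n_features, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

def fake_records(dec: nn.Module, batch_size: int = 32) -> torch.Tensor:
    z = torch.randn(batch_size, noise_dim)   # sample random noise z
    return dec(G(z))                         # Dec(G(z)): synthetic EHR records

# During training, D(x) is pushed toward 1 and D(Dec(G(z))) toward 0 until the
# two become indistinguishable.
```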
Summary of the Invention
A Generative Adversarial Network (GAN) is a machine learning framework that generates synthetic data from the original input dataset. The system consists of two parts, the Generator and the Discriminator. The Discriminator is essentially a classifier that determines whether a given sample is authentic or generated. Both data from the Generator and real data from the input dataset are fed into the Discriminator; during training, the real and generated data serve as positive and negative/adversarial examples respectively. The discriminator loss LD grows or shrinks depending on how well the Discriminator can detect and differentiate between the two sets of data. Backpropagation is used to update the Discriminator's weights.
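One discriminator update consistent with this description might look as follows; the helper name, the optimiser, and the assumption that `D`, `G`, `dec`, and a batch of real records are already defined (as in the earlier sketches) are ours, not the specification's:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def discriminator_step(D, G, dec, real_x, opt_D, noise_dim: int = 128):
    """One update of the Discriminator: real records are positive examples,
    decoded generator output is the negative/adversarial example, and the
    loss L_D is backpropagated into D's weights only."""
    opt_D.zero_grad()
    z = torch.randn(real_x.size(0), noise_dim)
    fake_x = dec(G(z)).detach()                                 # block gradients into G and Dec
    loss_real = bce(D(real_x), torch.ones(real_x.size(0), 1))   # real -> label 1
    loss_fake = bce(D(fake_x), torch.zeros(real_x.size(0), 1))  # fake -> label 0
    loss_D = loss_real + loss_fake                              # L_D
    loss_D.backward()                                           # backpropagation
    opt_D.step()                                                # update D's weights
    return loss_D.item()
```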
Brief Description of Drawings
Figure 1: Block diagram for Auto encoder in GAN
Figure 2: Proposed Architecture for Discriminator and Generator Block in GANs
Figure 3: Block diagram for the proposed High performance EHR generation
Detailed Description of the Invention
Synthetic records have recently been the subject of a great deal of study. Generative adversarial networks have been used to generate images that look as close as possible to the input images. Image-to-image translation, text-to-image synthesis, semantic image-to-image translation, and augmenting datasets with additional data are just some of the many uses for GANs that have emerged since they were first introduced in 2014. In the adversarial nets framework, a discriminative model is pitted against a generative model. The discriminative model determines whether a sample data point is drawn from the same distribution as the real data or from a different one. There are thus two models: one that creates fake data, and one that is trained to distinguish between real and fake samples. In the original formulation, one multilayer perceptron serves as the generator, mapping random noise to samples, and another serves as the discriminator.
The Generative Adversarial Network contains two main neural networks, called the Generator and the Discriminator, which compete against each other. Using an unsupervised style of learning, they become adept at coping with noisy data. A GAN is a powerful tool for creating photo-realistic visuals that may be used to visualise new designs. The Generator (G) produces data that does not come from the training dataset; its job is to deceive the discriminator by passing fake samples off as real. To test whether it can trick the discriminator, an image created by the model is compared to its real counterpart, and the difference between the two is called the loss. This loss is used as a training signal to improve the generator. G keeps comparing, calculating losses, and learning until it produces samples that are as close as possible to the corresponding training-set samples. The Discriminator (D) recognises bogus data points produced by G; its primary goal is to distinguish between an original training image and an image produced by G. If G misleads D, the resulting loss is computed and D uses it as a learning signal. The same procedure is repeated until D has developed the investigative skills necessary to spot a misleading image or data point.
LG denotes the loss of the Generator. The discriminator provides feedback to the generator, which then creates a new collection of data that the discriminator recognises as original. The Generator's training involves the following steps. First, random noise is sampled. Second, the generator uses the random noise to make data. Third, the discriminator distinguishes between the real input data and the generated data. The Generator is penalised if it is unable to produce data that can fool the discriminator. The block diagram for the autoencoder in the GAN, with its functionality explained, is shown in Fig. 1; the proposed architecture for the Discriminator and Generator blocks in the GAN, with module functionality, is shown in Fig. 2; and the block diagram for the proposed high-performance EHR generation, explaining the modules and their communication in detail, is given in Fig. 3.
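A sketch of one generator update following these steps; `D`, `G`, and `dec` are assumed from the earlier sketches, the optimiser is assumed to cover the Generator's (and optionally the decoder's) parameters, and the noise dimension is an illustrative placeholder:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def generator_step(D, G, dec, opt_G, batch_size: int = 32, noise_dim: int = 128):
    """One update of the Generator: sample noise, generate records, let the
    Discriminator judge them, and penalise the Generator (loss L_G) whenever
    it fails to fool the Discriminator. opt_G is assumed to optimise the
    parameters of G (and, optionally, of Dec)."""
    opt_G.zero_grad()
    z = torch.randn(batch_size, noise_dim)               # step 1: random noise
    fake_x = dec(G(z))                                    # step 2: generated records Dec(G(z))
    # Step 3: the Discriminator judges the fakes; the Generator is rewarded
    # when D labels them as real (target = 1).
    loss_G = bce(D(fake_x), torch.ones(batch_size, 1))    # L_G
    loss_G.backward()                                     # feedback from D to G
    opt_G.step()                                          # update the Generator's weights
    return loss_G.item()
```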
4 Claims and 3 Figures
Claims: The scope of the invention is defined by the following claims:
1. Generation of High-performance Electronic Health Records using Generative Adversarial Networks, comprising the steps of:
a) transforming and reconstructing medical images using the generative adversarial network and its variant, the conditional generative adversarial network;
b) improving the model by capturing spatial features of the data and mitigating mode collapse, a problem that impedes the training of GANs; and
c) designing a new high-performance EHR generation pipeline with Discriminator and Generator blocks in the GAN and introducing an autoencoder into the GAN.
2. The Generation of High-performance Electronic Health Records using Generative Adversarial Networks as claimed in claim 1, wherein medical images are transformed and reconstructed using the generative adversarial network and its variant, the conditional generative adversarial network.
3. The Generation of High-performance Electronic Health Records using Generative Adversarial Networks as claimed in claim 1, wherein the model is improved by capturing spatial features of the data and mode collapse, a problem that impedes the training of GANs, is mitigated.
4. The Generation of High-performance Electronic Health Records using Generative Adversarial Networks as claimed in claim 1, wherein a new high-performance EHR generation pipeline with Discriminator and Generator blocks in the GAN is designed and an autoencoder is introduced into the GAN.
| # | Name | Date |
|---|---|---|
| 1 | 202241025432-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-04-2022(online)].pdf | 2022-04-30 |
| 2 | 202241025432-FORM-9 [30-04-2022(online)].pdf | 2022-04-30 |
| 3 | 202241025432-FORM FOR SMALL ENTITY(FORM-28) [30-04-2022(online)].pdf | 2022-04-30 |
| 4 | 202241025432-FORM 1 [30-04-2022(online)].pdf | 2022-04-30 |
| 5 | 202241025432-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-04-2022(online)].pdf | 2022-04-30 |
| 6 | 202241025432-EVIDENCE FOR REGISTRATION UNDER SSI [30-04-2022(online)].pdf | 2022-04-30 |
| 7 | 202241025432-EDUCATIONAL INSTITUTION(S) [30-04-2022(online)].pdf | 2022-04-30 |
| 8 | 202241025432-DRAWINGS [30-04-2022(online)].pdf | 2022-04-30 |
| 9 | 202241025432-COMPLETE SPECIFICATION [30-04-2022(online)].pdf | 2022-04-30 |