
Identifying Family Members Of Refugees Using Deep Learning

Abstract: In many situations such as natural calamities, conflicts between countries, accidents, missing-person cases and kidnappings, people often get separated from their families; such displaced persons are referred to here as refugees. They lose their children and do not even have the legal standing to collect their assets. Every year, around 40,000 bodies of missing persons are found in the country. While most of these cases occur frequently, mishaps, suicides and manslaughter are also among the diverse causes, particularly among young people. It is therefore very important to build a framework that helps the police find the family members of refugees. This motivated us to invent a method and protocol for the creation of an assistive tool for tracing missing people using an artificial neural network. The method helps police officers find a refugee's family members by simply uploading the refugee's photo in the mobile app. The app searches the database and finds the family members using a pre-trained set of images with good accuracy.


Patent Information

Application #
202141017155
Filing Date
12 April 2021
Publication Number
41/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
girishlingappa7@gmail.com
Parent Application

Applicants

Girish L
Girish L, Samrudhi, Near Vinayaka Temple, Aralimara playa, Siragate, Tumakuru

Inventors

1. Girish L
Girish L, Samrudhi, Near Vinayaka Temple, Aralimara playa, Siragate, Tumakuru
2. Thara D K
Thara D K, Sumedha, 2nd Main, Siddarameshwara Badawane west, Batawadi, Tumakuru 572103

Specification

Claims: We Claim:
1. A system and method for the face identification of refugees.
2. A system and method for the face identification of refugees as claimed in claim 1, comprising the following stages:
a) Collecting the Image Dataset
b) Splitting the Dataset into Training and Testing
c) Image Augmentation
d) Building the CNN
e) Full Connection
f) Training the Network
g) Testing
3. Collecting the dataset: In order to train our model, we need a large amount of data so that it can learn by identifying certain relations and common features related to the objects.
4. Splitting the dataset: To use the libraries, we first need to import them. After importing the libraries, we split our data into two parts, training_set and test_set (a code sketch of this split follows the claims).
5. Image augmentation: To increase the number of images, we use augmentation methods such as rotation, zooming, flipping, shearing and shifting.
6. Building the CNN: This is the most important step for our network. It consists of three parts:
1. Convolution
2. Pooling
3. Flattening
7. Full connection: A fully connected layer takes the output of the convolution/pooling stages and predicts the best label to describe the image.
8. Training the network: After augmentation, the model is trained on the training data, i.e. 80% of the data collected is used to train the model. During training the model extracts features in the different convolution layers; there are 13 convolution layers, each with filter size (2,2), and five max-pooling layers, each of size (2,2).
9. Testing: The model is evaluated on the remaining 20% of the dataset, and its performance is calculated using various performance metrics.
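The 80/20 split referred to in claims 4, 8 and 9 can be sketched as follows. This is a minimal illustration assuming one folder of images per family; the folder layout, variable names and use of scikit-learn are assumptions for illustration, not part of the specification.

import os
from sklearn.model_selection import train_test_split

# Collect image paths and their family labels (one sub-folder per family is assumed).
image_paths, labels = [], []
for family_id in os.listdir("dataset"):
    for fname in os.listdir(os.path.join("dataset", family_id)):
        image_paths.append(os.path.join("dataset", family_id, fname))
        labels.append(family_id)

# 80% of the collected images train the model; the remaining 20% are held out for testing.
train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels, test_size=0.20, stratify=labels, random_state=42)

print(f"training images: {len(train_paths)}, testing images: {len(test_paths)}")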
Description: The first step in the algorithm implementation is converting the input image to RGB and then resizing it to (224,224). Then, to increase the number of images, augmentation methods such as rotation, zooming, flipping, shearing and shifting are used.
During implementation, each image in the dataset undergoes 18 rotations, rotating by 10 degrees each time at random.
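A minimal sketch of this preprocessing and augmentation stage, assuming a Keras/TensorFlow pipeline with OpenCV for the RGB conversion; all parameter values other than the 224x224 resize and the 10-degree rotation step are illustrative assumptions.

import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def load_rgb(path):
    # Read the image, convert OpenCV's BGR order to RGB and resize to (224, 224).
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return cv2.resize(img, (224, 224))

# Augmentation with rotation, zooming, flipping, shearing and shifting.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,        # random rotation of up to 10 degrees per pass
    zoom_range=0.2,
    horizontal_flip=True,
    shear_range=0.2,
    width_shift_range=0.1,
    height_shift_range=0.1,
)

# Streams augmented training batches; assumes the 80% split lives under dataset/train.
train_flow = augmenter.flow_from_directory(
    "dataset/train", target_size=(224, 224),
    batch_size=32, class_mode="categorical")

# The held-out 20% is only rescaled, not augmented.
test_flow = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dataset/test", target_size=(224, 224),
    batch_size=32, class_mode="categorical")

If the 18 fixed rotations of 10 degrees each are preferred over random rotation, they could instead be generated explicitly, for example with scipy.ndimage.rotate.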
After augmentation, the model is trained on the training data, i.e. 80% of the collected data is used to train the model. During training the model extracts features in the different convolution layers: there are 13 convolution layers, each with filter size (2,2), and five max-pooling layers, each of size (2,2). Each neuron in a convolution layer has a ReLU activation function attached to it.
Here the convolution layer summarizes the presence of features in the input image and maps them. The ReLU activation function, F(x) = max(0, x), changes all negative activations to 0.
The pooling layer then operates on each feature map separately to create a new set of the same number of pooled feature maps. In the proposed method we apply max-pooling, which calculates the maximum value for each patch of the feature map. Then there is a dense layer, followed by a softmax classifier in the output layer that classifies the image and predicts the family.
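A minimal Keras sketch of the described network: 13 convolution layers with (2,2) filters and ReLU activations, five (2,2) max-pooling layers, a dense layer and a softmax output. The grouping of the convolutions into 2-2-3-3-3 blocks, the channel counts and NUM_FAMILIES are assumptions made for illustration, not taken from the specification.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

NUM_FAMILIES = 10  # assumed number of family classes in the dataset

model = Sequential()
model.add(Conv2D(64, (2, 2), activation="relu", padding="same",
                 input_shape=(224, 224, 3)))  # 1st of 13 convolution layers

# Remaining 12 convolution layers, with a (2,2) max-pooling layer closing each block.
for filters, n_convs in [(64, 1), (128, 2), (256, 3), (512, 3), (512, 3)]:
    for _ in range(n_convs):
        model.add(Conv2D(filters, (2, 2), activation="relu", padding="same"))
    model.add(MaxPooling2D(pool_size=(2, 2)))  # summarises each feature map

model.add(Flatten())                                  # flattening
model.add(Dense(256, activation="relu"))              # fully connected (dense) layer
model.add(Dense(NUM_FAMILIES, activation="softmax"))  # predicts the family

model.summary()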
The graphs represent the variation of loss and accuracy during training and validation. As training proceeds, the loss decreases and the accuracy increases. Once training is complete after 5 epochs, the model is saved as an h5 file.
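A sketch of the training and saving step, reusing the train_flow and test_flow generators and the model from the earlier sketches; the optimizer, loss function and file name are assumptions.

# Compile and train for 5 epochs; the held-out 20% is used for validation.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_flow, validation_data=test_flow, epochs=5)

# history.history contains the loss and accuracy curves shown in the graphs.
model.save("family_identifier.h5")  # save the trained model as an h5 file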

Documents

Application Documents

# Name Date
1 202141017155-FORM 1 [12-04-2021(online)].pdf 2021-04-12
2 202141017155-FIGURE OF ABSTRACT [12-04-2021(online)].jpg 2021-04-12
3 202141017155-DRAWINGS [12-04-2021(online)].pdf 2021-04-12
4 202141017155-COMPLETE SPECIFICATION [12-04-2021(online)].pdf 2021-04-12
5 202141017155-FORM 3 [28-04-2021(online)].pdf 2021-04-28