Abstract: The present invention relates to the detection of demographical information from an image, based on data science. The method may include capturing, through an image sensor, an image of a field of view; acquisition of the captured image from the image sensor; detection of one or more human faces within the acquired image to create one or more facial images corresponding to the detected one or more human faces; transformation of each of the created facial images into a corresponding grey-scale version to create one or more grey-scaled facial images; extraction of facial features from each of the created grey-scale images, wherein the facial features are selected from a skin texture, a skin tone, a facial geometry, a distance between eye-balls, facial hairs, a shape of eye, a size of eye, a shape of nose, a size of nose and wrinkles; calculation of biologically inspired feature (BIF) vectors by convolution of each extracted facial feature with a Gabor filter and creation of a matrix of the calculated BIF vectors for each created grey-scale image; generation of a mathematical model based on pre-stored human face images and application of the generated mathematical model to the created matrix to estimate the demographical information for each created grey-scale image; and notification of the estimated demographical information to a computing device.
The present invention relates to a system and method for data science. More
particularly, the present invention relates to a system and method for data science based
detection of demographical information.
Background
[0002] The background description includes information that may be useful in
understanding the present invention. It is not an admission that any of the information provided
herein is prior art or relevant to the presently claimed invention, or that any publication
specifically or implicitly referenced is prior art.
[0003] Statistical studies over the last decade have consistently revealed that crime
rates have risen for a variety of reasons. Rural people all over the world have been migrating
to cities in pursuit of greener pastures as a result of rapid urbanisation. In larger cities, the
increased rivalry for housing, work, and other essentials has created a dangerous scenario.
Rising unemployment and unhappiness among the population as a result of socioeconomic
factors present themselves in a variety of ways. Illiteracy, peer pressure, and circumstances
can all be blamed for adolescent crimes. The rising cost of living and the growing wealth gap
between the rich and the poor are also major contributors to crime in cities. Hence, there is a
pressing need to take proper preventive measures against numerous criminal activities.
[0004] With the advent of technology, closed-circuit television (CCTV) based
surveillance systems have proved to be an effective means of preventing crime. A CCTV-based
surveillance system serves as a security system in public and crowded places such as schools,
colleges, offices, industries, traffic intersections, malls, etc. The CCTV cameras are capable of
capturing high-quality images and video, which are utilized in recognising an unethical or
suspicious activity or person.
[0005] However, CCTV-based surveillance systems have some limitations associated
with them, as they cannot be intelligently used to detect one or more individuals based on their
age, gender, etc. The CCTV-based surveillance system has proved to be more efficient in
solving cases after the event using modern technologies like face recognition and image
processing, rather than providing detailed information regarding the offender in a live scenario.
It is therefore desired to provide a method and system which offer a useful alternative that can
display the demographical information of one or more individuals.
[0006] In the present disclosure, a demographical-information-based surveillance
system is proposed which can issue notifications under live unethical or suspicious
circumstances. The system utilizes image processing using data science, which can predict the
demographical information (age, gender, ethnicity, etc.) of a person. Using such a system, only
specific people (of a specific age and gender) will be authorized to enter the premises of an
organisation or institution. If anyone apart from the specified gentry enters, information
regarding the same will be available at the monitoring end.
[0007] All publications herein are incorporated by reference to the same extent as if
each individual publication or patent application were specifically and individually indicated
to be incorporated by reference. Where a definition or use of a term in an incorporated reference
is inconsistent or contrary to the definition of that term provided herein, the definition of that
term provided herein applies and the definition of that term in the reference does not apply.
[0008] In some embodiments, the numbers expressing quantities of ingredients,
properties such as concentration, reaction conditions, and so forth, used to describe and claim
certain embodiments of the invention are to be understood as being modified in some instances
by the term “about.” Accordingly, in some embodiments, the numerical parameters set forth in
the written description and attached claims are approximations that can vary depending upon
the desired properties sought to be obtained by a particular embodiment. In some embodiments,
the numerical parameters should be construed in light of the number of reported significant
digits and by applying ordinary rounding techniques. Notwithstanding that the numerical
ranges and parameters setting forth the broad scope of some embodiments of the invention are
approximations, the numerical values set forth in the specific examples are reported as
precisely as practicable. The numerical values presented in some embodiments of the invention
may contain certain errors necessarily resulting from the standard deviation found in their
respective testing measurements.
[0009] As used in the description herein and throughout the claims that follow, the
meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates
otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on”
unless the context clearly dictates otherwise.
[00010] The recitation of ranges of values herein is merely intended to serve as a
shorthand method of referring individually to each separate value falling within the range.
Unless otherwise indicated herein, each individual value is incorporated into the specification
as if it were individually recited herein. All methods described herein can be performed in any
suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The use of any and all examples, or exemplary language (e.g. “such as”) provided with respect
to certain embodiments herein is intended merely to better illuminate the invention and does
not pose a limitation on the scope of the invention otherwise claimed. No language in the
specification should be construed as indicating any non-claimed element essential to the
practice of the invention.
[00011] Groupings of alternative elements or embodiments of the invention disclosed
herein are not to be construed as limitations. Each group member can be referred to and claimed
individually or in any combination with other members of the group or other elements found
herein. One or more members of a group can be included in, or deleted from, a group for
reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the
specification is herein deemed to contain the group as modified thus fulfilling the written
description of all Markush groups used in the appended claims.
Objects of the Invention
[00012] An object of the present disclosure is to overcome one or more drawbacks
associated with conventional mechanisms.
[00013] An object of the present disclosure is to provide enhanced security.
[00014] An object of the present disclosure is to provide information regarding an
ongoing unethical activity.
[00015] Another object of the present disclosure is to prevent unauthorized entry into
schools and colleges.
[00016] Yet another object of the present disclosure is to prevent child labour.
Summary
[00017] The present invention relates to a system and method for data science. More
particularly, the present invention relates to a system and method for data science based
detection of demographical information.
[00018] In an aspect, the present invention provides a system to detect demographical
information, the system comprising: an image sensor, which is arranged to capture an image
of a field of view; a server arrangement comprising: a non-transitory storage device
arranged to store a set of routines and an image database comprising pre-stored human images,
wherein each image is tagged with age, gender and ethnicity information; and one or more
microprocessors which are coupled to the non-transitory storage device and operable to execute
the one or more routines to: acquire the captured image from the image sensor; detect one or
more human faces within the acquired image to create one or more facial images corresponding
to the detected one or more human faces; transform each of the created facial images into a
corresponding grey-scale version to create one or more grey-scaled facial images; extract
facial features from each of the created grey-scale images, wherein the facial features are
selected from a skin texture, a skin tone, a facial geometry, a distance between eye-balls, facial
hairs, a shape of eye, a size of eye, a shape of nose, a size of nose and wrinkles; calculate the
biologically inspired feature (BIF) vectors by convolution of each extracted facial feature
with a Gabor filter; create a matrix of the calculated BIF vectors for each created grey-scale image; generate a mathematical model based on the pre-stored human face images; apply
the generated mathematical model to the created matrix to estimate the demographical
information for each created grey-scale image; and notify the estimated demographical
information to a computing device.
[00019] In another aspect, the present invention provides a method to detect
demographical information, the method comprising: capturing, through an image sensor, an
image of a field of view; storing, in a non-transitory storage device, a set of routines and an
image database comprising pre-stored human images, wherein each image is tagged with
age, gender and ethnicity information; and processing, at one or more microprocessors which are
coupled to the non-transitory storage device and operable to execute the one or more routines
to: acquire the captured image from the image sensor; detect one or more human faces within
the acquired image to create one or more facial images corresponding to the detected one or
more human faces; transform each of the created facial images into a corresponding grey-scale version to create one or more grey-scaled facial images; extract facial features from
each of the created grey-scale images, wherein the facial features are selected from a skin
texture, a skin tone, a facial geometry, a distance between eye-balls, facial hairs, a shape of eye,
a size of eye, a shape of nose, a size of nose and wrinkles; calculate the biologically inspired
feature (BIF) vectors by convolution of each extracted facial feature with a Gabor filter;
create a matrix of the calculated BIF vectors for each created grey-scale image; generate a
mathematical model based on the pre-stored human face images; apply the generated
mathematical model to the created matrix to estimate the demographical information for
each created grey-scale image; and notify the estimated demographical information to a
computing device.
[00020] In an embodiment, the image sensor is associated with a rotating unit, which
is arranged to enable rotation of the image sensor to change the field of view.
[00021] In an embodiment, the demographical information comprises age data,
gender data and ethnicity data.
[00022] In an embodiment, the extraction of the facial features from each of the created
grey-scale images is done to measure nodal points.
[00023] In an embodiment, the mathematical model used for estimation of
demographical information is a machine learning model.
[00024] Various objects, features, aspects and advantages of the inventive subject
matter will become more apparent from the following detailed description of preferred
embodiments, along with the accompanying drawing figures in which like numerals represent
like components.
Brief Description of the Drawings
[00025] Fig. 1 illustrates an exemplary architecture of a system to detect demographical
information by utilizing data science, in accordance with an embodiment of the present disclosure.
[00026] Fig. 2 represents an exemplary functional module to detect demographical
information by utilizing data science, in accordance with embodiments of the present
disclosure.
[00027] Fig. 3 illustrates the exemplary steps involved to detect demographical
information by utilizing data science, in accordance with embodiments of the present
disclosure.
Detailed Description
[00028] The following discussion provides many example embodiments of the
inventive subject matter. Although each embodiment represents a single combination of
inventive elements, the inventive subject matter is considered to include all possible
combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B,
and C, and a second embodiment comprises elements B and D, then the inventive subject matter
is also considered to include other remaining combinations of A, B, C, or D, even if not
explicitly disclosed.
[00029] The present invention relates to a system and method for data science. More
particularly, the present invention relates to a system and method for data science based
detection of demographical information.
[00030] Fig. 1 illustrates an exemplary architecture of a system 100 to detect the
demographical information from an image by utilizing data science, in accordance with an
embodiment of the present disclosure.
[00031] In an embodiment, the system 100 can comprise an image sensor 102 that can
be mounted at one or more places utilizing a rotational unit 102-A; a server arrangement 104
which can comprise a non-transitory storage device 104-A and a microprocessor 104-B; a
computing device 106 for reception and depiction of notifications; data communication means
to communicate with different components or an external computing environment; and an
electrical energy source to provide electrical energy to the one or more components of the
system 100. It can be appreciated that the aforementioned components of the system 100 are
communicably coupled with each other.
[00032] In an embodiment, the image sensor 102 can be arranged to capture a video or
image of one or more humans. The image sensor 102 can be a sensor that may detect and
convey information used to make an image, by utilizing data science. Non-limiting examples
of data science techniques can be Linear Regression, Logistic Regression, Decision Trees,
Naive Bayes, K-Nearest Neighbors, Support Vector Machine (SVM), K-Means Clustering,
Principal Component Analysis, Neural Networks and Random Forests. It may do so by
converting the variable attenuation of light waves (as they pass through or reflect off objects)
into signals, small bursts of current that may convey the information. The waves can be one or
more light waves or other electromagnetic radiation. The image sensor 102 can be used in
electronic imaging devices of both analog and digital types, which may include digital cameras,
camera modules, camera phones, etc. The image sensor 102 can be a charge-coupled device
(CCD) or an active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors can be based
on metal–oxide–semiconductor (MOS) technology, with CCDs based on MOS capacitors and
CMOS sensors based on MOSFET (MOS field-effect transistor) amplifiers. The image sensor
102 can be utilized in a closed-circuit camera for regular monitoring of activities in a field of
view. The closed-circuit camera (comprising an image sensor 102) can be mounted on a wall
or any steady support via the rotational unit 102-A, which rotates the closed-circuit camera in
order to change the field of view. The image sensor 102 can provide the captured image to the
server arrangement in a runtime or near-runtime manner, through a known communication
network (e.g., a wired or wireless network). Deployment of data science can combine domain
expertise, programming skills, and knowledge of mathematics and statistics to extract
meaningful insights from data.
[00033] In an embodiment, the configured server arrangement 104 can comprise the
non-transitory storage device 104-A and the microprocessor 104-B.
[00034] In an embodiment, the non-transitory storage device 104-A can be configured
to store one or more executable subroutines and an image database which may comprise one
or more pre-stored human images. The non-transitory storage device 104-A can store the image
received from the image sensor 102. The non-transitory storage device 104-A can be a flash
drive, RAM or other known types of electronic data storage devices.
[00035] In an embodiment, the microprocessor 104-B can be configured to execute one
or more subroutines stored in the non-transitory storage device 104-A to determine the
demographical information (age, gender, ethnicity, race, etc.) of one or more humans in an
image captured by the image sensor 102. The microprocessor 104-B can depict a notification
regarding the determined demographical information (age, gender, ethnicity, race, etc.) of one
or more humans captured in an image on the computing device 106. The microprocessor 104-B
can be a microprocessor, a set of microprocessors, a complex instruction set computing
(CISC) microprocessor, a reduced instruction set (RISC) microprocessor, a very long
instruction word (VLIW) microprocessor, or any other type of processing circuit.
[00036] In an embodiment, the computing device 106 can be utilized to receive
notification from the server arrangement 104. The received notification can be the determined
demographical information (age, gender, ethnicity, race, etc.) of one or more humans captured
in an image by the image sensor 102. The computing device 106 can be selected from a laptop,
a desktop, a mobile, etc.
[00037] Fig. 2 illustrates an exemplary functional module 200 to detect demographical
information by utilizing data science, in accordance with embodiments of the present
disclosure. As illustrated, the functional module 200 can comprise an image sensing unit 202,
an image acquisition unit 204, a face detection unit 206, a pre-processing unit 208, a feature
extraction unit 210, an analysis unit 212 and an estimation unit 214. The functional module
200 can be executed by the one or more microprocessors 104-B.
[00038] In an embodiment, the image sensing unit 202 (which can be integrated with
the image sensor 102) can be configured to capture an image of one or more humans. The
captured image may contain one or more humans or objects in background or foreground. The
image captured by the image sensing unit 202 can be utilized by the image acquisition unit 204.
[00039] In an embodiment, the image acquisition unit 204 can be arranged to acquire
the captured image from the image sensing unit 202 in a runtime or near-runtime manner,
through a known communication network (e.g., a wired or wireless network). The acquired
image is then provided to the face detection unit 206 for further processing.
[00040] In an embodiment, the face detection unit 206 can be arranged to detect one or
more human faces in an image. The face detection unit 206 can utilize object detection
technology that deals with detecting instances of semantic objects of a certain class (such as
humans, buildings or cars) in digital images and videos. Face detection can be performed by
using one or more classifiers, wherein a classifier is essentially an algorithm that decides
whether a given image is positive (a face) or negative (not a face). These classifiers may employ
a machine learning approach for visual object detection which is capable of processing images
extremely rapidly and achieving high detection rates. The face detection unit 206 can create
one or more facial images of one or more humans detected in an image. These facial images
can be provided to the pre-processing unit 208.
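The cascade idea behind such classifiers can be sketched as follows. This is an illustrative toy in the spirit of an attentional cascade (as in Viola-Jones style detectors), not the trained classifier of the face detection unit 206; the stage tests below are hypothetical placeholders chosen only to show the early-rejection structure.

```python
import numpy as np

def stage_brightness(window):
    # Hypothetical cheap test: a face window is rarely near-black or near-white.
    m = window.mean()
    return 0.15 < m < 0.85

def stage_contrast(window):
    # Hypothetical cheap test: a face window shows some intensity variation.
    return window.std() > 0.05

def cascade_classify(window, stages=(stage_brightness, stage_contrast)):
    """Return True (face) only if the window passes every stage."""
    for stage in stages:
        if not stage(window):
            return False  # early rejection keeps non-face windows cheap
    return True
```

Because most scanned windows in an image are not faces, the early exit in the loop is what makes cascades "capable of processing images extremely rapidly": the expensive later stages run only on the few windows that survive the cheap ones.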
[00041] In an embodiment, the pre-processing unit 208 can be arranged to convert
each created facial image into a grey-scale facial image in order to mitigate the influence of
inconsistent colour. The pre-processing unit 208 can perform routines to reduce the effect of
scale, rotation, and translation variations. The pre-processing unit 208 can finally suppress both
low-frequency illumination variation and high-frequency noise (e.g., photon and sensor noise)
by applying Difference of Gaussians (DoG) filtering.
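The pre-processing steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the unit's implementation; the luma weights are the standard ITU-R BT.601 convention, and the DoG sigma values are assumptions.

```python
import numpy as np

def to_greyscale(rgb):
    # Weighted sum of R, G, B channels (ITU-R BT.601 luma weights).
    return rgb[..., :3] @ np.array([0.299, 0.587, 0.114])

def _gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur: 1-D convolution along rows, then columns.
    k = _gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog_filter(grey, sigma_low=1.0, sigma_high=2.0):
    # Band-pass: a lightly blurred image minus a heavily blurred one
    # suppresses both high-frequency noise and low-frequency illumination.
    return _blur(grey, sigma_low) - _blur(grey, sigma_high)
```

Subtracting the two blurs keeps only the mid-frequency band, which is why DoG filtering removes slow illumination gradients and pixel-level noise at the same time.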
[00042] In an embodiment, the feature extraction unit 210 can be arranged to extract
numerous facial features from each created grey-scale image utilizing a machine learning
model (e.g., HAAR cascade, ANN, CNN, HMM, artificial intelligence, deep learning, etc.).
These features can be selected from a skin texture, a skin tone, a facial geometry, a distance
between eye-balls, facial hairs, a shape of eye, a size of eye, a shape of nose, a size of nose, a
depth of eye socket, hairs, wrinkles, etc.
[00043] In an embodiment, the analysis unit 212 can be arranged to compute the
biologically inspired feature (BIF) vectors for each extracted facial feature. The analysis
unit 212 may convolve each extracted facial feature with the Gabor filter to calculate the
BIF vector for that facial feature. The calculated BIF vectors can then be
arranged in a matrix form, which can be provided to the estimation unit 214.
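A hedged sketch of this analysis step: build a small bank of Gabor filters at several orientations, convolve a grey-scale facial region with each, and pool the responses into a BIF-style vector, then stack one vector per region into the matrix. The filter size, scales, orientation count, and mean/std pooling are illustrative assumptions, not parameters taken from the source.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    # Real part of a Gabor filter: Gaussian envelope times a cosine carrier
    # oriented at angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def bif_vector(region, n_orientations=4, size=9):
    # Valid 2-D correlation via sliding windows, then mean/std pooling
    # per orientation into a fixed-length feature vector.
    windows = np.lib.stride_tricks.sliding_window_view(region, (size, size))
    feats = []
    for i in range(n_orientations):
        k = gabor_kernel(size=size, theta=i * np.pi / n_orientations)
        resp = np.einsum('ijkl,kl->ij', windows, k)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

def bif_matrix(regions):
    # One row per facial region, forming the matrix passed onward.
    return np.vstack([bif_vector(r) for r in regions])
```

Pooling the raw filter responses (rather than keeping them all) is what makes the vector length independent of the region size, so regions of different sizes still yield rows of equal width in the matrix.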
[00044] In an embodiment, the estimation unit 214 can be arranged to estimate
the demographical information (age, gender, ethnicity, race, etc.) for each created grey-scale image of one or more humans by utilizing the pre-stored human face images stored in the
non-transitory storage device 104-A and the created matrix of the BIF vectors. The estimation
unit 214 may utilize a machine learning model (e.g., ANN, CNN, HMM, artificial intelligence,
deep learning, etc.) for the estimation of the demographical information (age, gender, ethnicity,
race, etc.) for each created grey-scale image of one or more humans. The estimated
demographical information (age, gender, ethnicity, race, etc.) can then be notified on the
computing device 106.
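To make the estimation step concrete, the sketch below substitutes the simplest possible model for the unspecified machine learning model: closed-form ridge regression fitted on BIF rows of the pre-stored (tagged) images against their age labels. The function names and the choice of ridge regression are assumptions for demonstration only; the disclosure's ANN/CNN/HMM variants would replace this fit/predict pair.

```python
import numpy as np

def fit_age_model(bif_rows, ages, lam=1e-6):
    # Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y,
    # with a bias column appended to the BIF matrix.
    X = np.hstack([bif_rows, np.ones((len(bif_rows), 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ ages)

def predict_age(w, bif_vec):
    # Apply the fitted model to one BIF vector of a new grey-scale face.
    return float(np.append(bif_vec, 1.0) @ w)
```

The same fit/predict split applies to any model the estimation unit 214 might use: training happens once against the tagged image database, and only the cheap prediction runs per captured frame.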
[00045] In an embodiment, the image sensor 102 can be associated with the rotational
unit 102-A, which can rotate the image sensor 102 (associated with the closed-circuit camera)
within a range of 10 to 270 degrees to change the field of view, wherein the range of rotation
can be selected from distinct exemplary ranges of 10–30 degrees, 30–60 degrees, 60–90
degrees, 90–120 degrees, 120–150 degrees, 150–180 degrees, 180–210 degrees, 210–240
degrees and 240–270 degrees.
[00046] In an embodiment, the demographical information can be selected from
gender, age, ethnicity, race, etc. These data can be utilized for various purposes, which may
include security, marketing, advertisement, web analysis, etc.
[00047] In an embodiment, the extracted facial features such as a skin texture, a skin
tone, a facial geometry, a distance between eye-balls, facial hairs, a shape of eye, a size of
eye, a shape of nose, a size of nose, a depth of eye socket, hairs, wrinkles, etc. can be
considered as nodal points, which can be used for face recognition and estimation of the
demographical information of one or more humans in an image or video.
[00048] In an embodiment, the mathematical model used for face detection and
estimation of demographical information can be a machine learning model, which may include
an artificial neural network (ANN), a convolutional neural network (CNN), a Hidden Markov
model (HMM), artificial intelligence, deep learning, etc.
[00049] Fig. 3 illustrates exemplary steps to detect demographical information by
utilizing data science, in accordance with embodiments of the present disclosure. As illustrated
in flow diagram 300, the method may include the steps of: at step (302), capturing, through an
image sensor, an image of a field of view; at step (304), acquisition of the captured image from
the image sensor; at step (306), detection of one or more human faces within the acquired image
to create one or more facial images corresponding to the detected one or more human faces, by
utilizing data science; at step (308), transformation of each of the created facial images into a
corresponding grey-scale version to create one or more grey-scaled facial images; at step (310),
extraction of facial features from each of the created grey-scale images, wherein the
facial features are selected from a skin texture, a skin tone, a facial geometry, a distance between
eye-balls, facial hairs, a shape of eye, a size of eye, a shape of nose, a size of nose and wrinkles;
at step (312), calculation of the biologically inspired feature (BIF) vectors by convolution of
each extracted facial feature with a Gabor filter and creation of a matrix of the calculated
BIF vectors for each created grey-scale image; at step (314), generation of a mathematical
model based on the pre-stored human face images and application of the generated
mathematical model to the created matrix to estimate the demographical information for
each created grey-scale image; and at step (316), notification of the estimated demographical
information to a computing device.
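The control flow of flow diagram 300 can be sketched as a small driver that wires the steps together. Every helper named here (capture_image, detect_faces, and so on) is a hypothetical placeholder standing in for the corresponding unit described above, not an API from the source.

```python
def detect_demographics(capture_image, detect_faces, to_greyscale,
                        extract_features, compute_bif, model, notify):
    # Each argument is a callable standing in for one unit of Fig. 2,
    # applied in the order of steps (302)-(316) of flow diagram 300.
    frame = capture_image()                      # steps (302)/(304)
    results = []
    for face in detect_faces(frame):             # step (306)
        grey = to_greyscale(face)                # step (308)
        features = extract_features(grey)        # step (310)
        matrix = compute_bif(features)           # step (312)
        info = model(matrix)                     # step (314)
        results.append(info)
        notify(info)                             # step (316)
    return results
```

Keeping each step behind a callable mirrors the modular functional units of Fig. 2: any single unit (say, the face detector) can be swapped without touching the rest of the pipeline.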
[00050] Throughout the present disclosure, the term ‘processing means’ or
‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a
microprocessor, a microcontroller, a complex instruction set computing (CISC)
microprocessor, a reduced instruction set (RISC) microprocessor, a very long instruction word
(VLIW) microprocessor, or any other type of processing circuit.
[00051] In an aspect, any or a combination of data science/machine learning
mechanisms such as decision tree learning, Bayesian networks, deep learning, random forests,
support vector machines, reinforcement learning, prediction models, statistical algorithms,
classification, logistic regression, linear discriminant analysis (LDA), K-nearest neighbours,
decision trees, regression, linear regression, support vector regression, ridge regression,
partial least-squares regression, non-linear regression, clustering, agglomerative hierarchical
clustering, divisive hierarchical clustering, K-means clustering, K-nearest neighbours
clustering, EM (expectation maximization) clustering, principal component analysis (PCA)
clustering, dimensionality reduction, non-negative matrix factorization (NMF), kernel PCA,
generalized discriminant analysis (kernel trick), ensemble algorithms, AutoML and the like
can be employed by the system to learn from data acquired by the sensor/hardware
components.
[00052] The term “non-transitory storage device” or “storage” or “memory,” as used
herein, relates to random access memory, read-only memory and variants thereof, in which a
computer can store data or software for any duration.
[00053] The foregoing description of the specific embodiments will so fully reveal the
general nature of the embodiments herein that others can, by applying current knowledge,
readily modify and/or adapt for various applications such specific embodiments without
departing from the generic concept, and, therefore, such adaptations and modifications should
and are intended to be comprehended within the meaning and range of equivalents of the
disclosed embodiments. It is to be understood that the phraseology or terminology employed
herein is for the purpose of description and not of limitation. Therefore, while the embodiments
herein have been described in terms of preferred embodiments, those skilled in the art will
recognize that the embodiments herein can be practiced with modification within the spirit and
scope of the embodiments as described herein.
[00054] It should be apparent to those skilled in the art that many more modifications
besides those already described are possible without departing from the inventive concepts
herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the
appended claims. Moreover, in interpreting both the specification and the claims, all terms
should be interpreted in the broadest possible manner consistent with the context. In particular,
the terms “comprises” and “comprising” should be interpreted as referring to elements,
components, or steps in a non-exclusive manner, indicating that the referenced elements,
components, or steps may be present, or utilized, or combined with other elements,
components, or steps that are not expressly referenced. Where the specification or claims refer to
at least one of something selected from the group consisting of A, B, C …. and N, the text
should be interpreted as requiring only one element from the group, not A plus N, or B plus N,
etc.
Advantages of the Invention
[00055] An advantage of the present disclosure is to overcome one or more drawbacks
associated with conventional mechanisms.
[00056] An advantage of the present disclosure is to provide enhanced security.
[00057] An advantage of the present disclosure is to provide information regarding an
ongoing unethical activity.
[00058] Another advantage of the present disclosure is to prevent unauthorized
entry into schools and colleges.
[00059] Yet another advantage of the present disclosure is to prevent child labour.
Claims
I/We claim:
1. A system to detect demographical information based on a data science mechanism, the
system comprising:
an image sensor, which is arranged to capture an image of a field of view;
a server arrangement comprising:
a non-transitory storage device arranged to store a set of routines and
an image database comprising pre-stored human images, wherein each
image is tagged with age, gender and ethnicity information; and
one or more microprocessors which are coupled to the non-transitory
storage device and operable to execute the one or more routines to:
acquire the captured image from the image sensor;
detect one or more human faces within the acquired image to
create one or more facial images corresponding to the detected
one or more human faces by utilizing the data science
mechanism;
transform each of the created facial images into a
corresponding grey-scale version to create one or more grey-scaled facial images;
extract facial features from each of the created grey-scale images, wherein the facial features are selected from a skin
texture, skin tone, a facial geometry, a distance between eye-balls, facial hairs, a shape of eye, a size of eye, a shape of nose,
a size of nose and wrinkles;
calculate the biologically inspired feature (BIF) vectors by
convolution of each extracted facial feature with a Gabor
filter;
create a matrix of the calculated BIF vectors for each
created grey-scale image;
generate a mathematical model based on the pre-stored human
face images;
apply the generated mathematical model to the created matrix to
estimate the demographical information for each created
grey-scale image; and
notify the estimated demographical information to a computing
device.
2. The system of claim 1, wherein the image sensor is associated with a rotating unit,
which is arranged to enable rotation of the image sensor to change the field of view.
3. The system of claim 1, wherein the demographical information comprises age data,
gender data and ethnicity data, which are utilized by the data science mechanism.
4. The system of claim 1, wherein extraction of the facial features from each of the
created grey-scale images is performed to measure nodal points.
5. The system of claim 1, wherein the mathematical model used for estimation of the
demographical information is a machine learning model.
6. A method to detect demographical information based on a data science mechanism,
the method comprising:
capturing, through an image sensor, an image of a field of view;
storing, in a non-transitory storage device, a set of routines and an image
database comprising pre-stored human images, wherein each image is
tagged with age, gender and ethnicity information;
processing, at one or more microprocessors which are coupled to the
non-transitory storage device and operable to execute the set of
routines to:
acquire the captured image from the image sensor;
detect one or more human faces within the acquired image to
create one or more facial images corresponding to the detected
one or more human faces, by using the data science mechanism;
transform each of the created facial images into a
corresponding grey-scale version to create one or more grey-scaled facial images;
extract facial features from each of the created grey-scale
images, wherein the facial features are selected from a skin
texture, a skin tone, a facial geometry, a distance between
eyeballs, facial hairs, a shape of eye, a size of eye, a shape of nose,
a size of nose and wrinkles;
calculate biologically inspired feature (BIF) vectors by
convolution of each extracted facial feature with a Gabor
filter;
create a matrix of the calculated BIF vectors for each
created grey-scale image;
generate a mathematical model based on the pre-stored human
face images;
apply the generated mathematical model to the created matrix to
estimate the demographical information for each created
grey-scale image; and
notify the estimated demographical information to a computing
device.
7. The method of claim 6, wherein the image sensor is associated with a rotating unit to
enable rotation of the image sensor to change the field of view.
8. The method of claim 6, wherein the demographical information comprises age data,
gender data and ethnicity data, which are utilized by the data science mechanism.
9. The method of claim 6, wherein the facial features extracted from each of the
created grey-scale images are nodal points.
10. The method of claim 6, wherein the mathematical model used for estimation of the
demographical information is a machine learning model.
| # | Name | Date |
|---|---|---|
| 1 | 202111043329-COMPLETE SPECIFICATION [24-09-2021(online)].pdf | 2021-09-24 |
| 2 | 202111043329-DECLARATION OF INVENTORSHIP (FORM 5) [24-09-2021(online)].pdf | 2021-09-24 |
| 3 | 202111043329-DRAWINGS [24-09-2021(online)].pdf | 2021-09-24 |
| 4 | 202111043329-FIGURE OF ABSTRACT [24-09-2021(online)].jpg | 2021-09-24 |
| 5 | 202111043329-FORM 1 [24-09-2021(online)].pdf | 2021-09-24 |
| 6 | 202111043329-FORM-9 [24-09-2021(online)].pdf | 2021-09-24 |
| 7 | 202111043329-POWER OF AUTHORITY [24-09-2021(online)].pdf | 2021-09-24 |
| 8 | 202111043329-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-09-2021(online)].pdf | 2021-09-24 |