Abstract: “A METHOD AND SYSTEM FOR GENERATING VIRTUAL HAIRSTYLE” Disclosed herein is a method (400) and a system (102) for generating a virtual hairstyle to be used by a user/system to generate an emoji/sticker/emoticon. The method (400) discloses extracting one or more hair segments from one or more hair components of one or more images. The method (400) also discloses generating one or more vector representations corresponding to the one or more hair segments. The method (400) further groups the plurality of standardized hair segments to generate a plurality of clusters. A hair segment corresponding to each cluster is then selected to generate a virtual hairstyle.
TECHNICAL FIELD
[0001] The present disclosure relates to graphical user representations in computing
environments. Particularly, the present disclosure relates to techniques for generating
virtual hairstyles to be used while generating graphical user representations.
BACKGROUND OF THE INVENTION
[0002] Rapid growth in the field of digital portable devices has provided significant
motivation for development in digital messaging. Generally, users communicate by
way of publishing a post, making a comment, or sending a message. Users use various
messaging services/systems which enable such communication between users. Each of
such messaging services/systems tries to make the interaction more interactive and
realistic. Some of such systems utilize user representations such as
avatars/stickers/emojis to make user interactions easy and interactive. However,
such user representations are confined to a limited set of user attributes. One such important
attribute is hair. Hairstyle plays an important role in representing the look and aesthetics
of a person.
[0003] Conventional techniques disclose use of a limited set of predefined hairstyles
which are manually created by digital artists and are stored in a memory. Since these
hairstyles are manually created, the conventional systems are unable to scale the set or
automatically perform additional modifications. Therefore, the set of hairstyles
provided by such techniques is limited and inefficient in representing the actual appearance
of a user. With evolving fashion trends, hairstyles are now so varied and
innovative that the existing systems fail to provide even a close representation of such
hairstyles.
[0004] Thus, there exists a need for a technology that automatically generates new
hairstyle vectors and allows a user/system to use such vectors to generate user
representations such as emojis, stickers, avatars, etc.
OBJECTS OF THE INVENTION:
[0005] The main object of the present invention is to automatically generate one or
more virtual hairstyles from one or more images of one or more users.
[0006] Another main object of the present invention is to automatically generate an emoji
using one or more automatically generated virtual hairstyles.
[0007] Yet another object of the present invention is to provide a user or a system with
an updated memory of automatically generated hairstyles.
SUMMARY OF THE INVENTION
[0008] The present disclosure overcomes one or more shortcomings of the prior art and
provides additional advantages discussed throughout the present disclosure. Additional
features and advantages are realized through the techniques of the present disclosure.
Other embodiments and aspects of the disclosure are described in detail herein and are
considered a part of the claimed disclosure.
[0009] In one non-limiting embodiment of the present disclosure, a method for
generating a virtual hairstyle is disclosed. The method comprises receiving a plurality of
facial images. The method further comprises extracting a plurality of hair components
from the received facial images and processing the plurality of hair components to
generate a standardized hair segment corresponding to each of the plurality of hair
components. The method further comprises generating a vector representation
corresponding to each of the standardized hair segments. As a next step, the
method discloses grouping the plurality of standardized hair segments to generate a
plurality of clusters. The method also comprises selecting a hair segment corresponding
to each cluster, wherein the selection of the hair segment is based on the centroid of the
respective cluster. The method finally discloses generating a virtual hairstyle
corresponding to each of the selected hair segments.
[0010] In yet another non-limiting embodiment of the present disclosure, it is disclosed
that the standardization of the hair segment comprises adjusting the size and position of the
hair component based on a predefined standard size.
[0011] In still another non-limiting embodiment of the present disclosure, the
generated virtual hairstyle is assigned a name and stored in a memory.
[0012] In yet another non-limiting embodiment of the present disclosure, the step of
generating the vector representation for each of the standardized hair segments further
comprises assigning a data point to each hair vector corresponding to the standardized
hair segment.
[0013] In still another non-limiting embodiment of the present disclosure, the assigning
of the data point to the hair vector is based on the similarity between hair segments.
[0014] In yet another non-limiting embodiment of the present disclosure, the step of
generating the plurality of clusters comprises using a standard distance function to
identify similar hair segments based on the corresponding vector representations.
[0015] In another non-limiting embodiment of the present disclosure, a system for
generating a virtual hairstyle is disclosed. The system comprises a transceiver
configured to receive a plurality of facial images, a memory configured to store the
plurality of facial images, and a processing unit operatively coupled to the memory and
configured to extract one or more hair segments from one or more images. The
processing unit is further configured to process the plurality of hair components to
generate a standardized hair segment corresponding to each of the plurality of hair
components and generate one or more vector representations corresponding to the one or
more hair segments. The processing unit is further configured to group the plurality of
standardized hair segments, based on the one or more vector representations, to generate
a plurality of clusters. The processing unit is further configured to select a hair segment
corresponding to each cluster, wherein the selection of the hair segment is based on the
centroid of the respective cluster. Further, the processing unit is configured to generate
a virtual hairstyle corresponding to each of the selected hair segments.
[0016] In yet another non-limiting embodiment of the present disclosure, the
processing unit is configured to standardize the hair segment by adjusting the size and position
of the hair component based on a predefined standard size.
[0017] In still another non-limiting embodiment of the present disclosure, the
processing unit is configured to assign a name to the generated virtual hairstyle and
store the same in the memory.
[0018] In yet another non-limiting embodiment of the present disclosure, the
processing unit is configured to generate the vector representation for each of the
standardized hair segments by assigning a data point to each hair vector corresponding
to the standardized hair segment.
[0019] In still another non-limiting embodiment of the present disclosure, the
processing unit is configured to generate the plurality of clusters using a standard
distance function to identify similar hair segments based on the corresponding vector
representations.
[0020] The foregoing summary is illustrative only and is not intended to be in any way
limiting. In addition to the illustrative aspects, embodiments, and features described
above, further aspects, embodiments, and features will become apparent by reference
to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated in and constitute a part of
this disclosure, illustrate exemplary embodiments and, together with the description,
serve to explain the disclosed embodiments. In the figures, the left-most digit(s) of a
reference number identifies the figure in which the reference number first appears. The
same numbers are used throughout the figures to reference like features and
components. Some embodiments of system and/or methods in accordance with
embodiments of the present subject matter are now described, by way of example only,
and with reference to the accompanying figures, in which:
[0022] Figure 1 illustrates an exemplary environment for a system for generating a
virtual hairstyle in accordance with an embodiment of the present disclosure;
[0023] Figure 2 is a block diagram illustrating a system for generating a virtual
hairstyle in accordance with an embodiment of the present disclosure;
[0024] Figure 3 illustrates an exemplary embodiment representing generation of a
virtual hairstyle in accordance with an embodiment of the present disclosure; and
[0025] Figure 4 illustrates a method for generating a virtual hairstyle in accordance
with an embodiment of the present disclosure.
[0026] It should be appreciated by those skilled in the art that any block diagrams herein
represent conceptual views of illustrative systems embodying the principles of the
present subject matter. Similarly, it will be appreciated that any flow charts, flow
diagrams, state transition diagrams, pseudo code, and the like represent various
processes which may be substantially represented in computer readable medium and
executed by a computer or processor, whether or not such computer or processor is
explicitly shown.
DETAILED DESCRIPTION
[0027] The foregoing has broadly outlined the features and technical advantages of the
present disclosure in order that the detailed description of the disclosure that follows
may be better understood. It should be appreciated by those skilled in the art that the
conception and specific embodiment disclosed may be readily utilized as a basis for
modifying or designing other structures for carrying out the same purposes of the
present disclosure.
[0028] The novel features which are believed to be characteristic of the disclosure, both
as to its organization and method of operation, together with further objects and
advantages will be better understood from the following description when considered
in connection with the accompanying figures. It is to be expressly understood, however,
that each of the figures is provided for the purpose of illustration and description only
and is not intended as a definition of the limits of the present disclosure.
[0029] In the present document, the word "exemplary" is used herein to mean "serving
as an example, instance, or illustration." Any embodiment or implementation of the
present subject matter described herein as "exemplary" is not necessarily to be
construed as preferred or advantageous over other embodiments.
[0030] Further, terms like “comprises”, “comprising”, or any other variations
thereof, are intended to cover non-exclusive inclusions, such that a setup or device that
comprises a list of components does not include only those components but may
include other components not expressly listed or inherent to such setup or device. In
other words, one or more elements in a system or apparatus preceded by “comprises…
a” does not, without more constraints, preclude the existence of other elements or
additional elements in the system or apparatus or device.
[0031] Furthermore, the terms like “emoji”, “emoticons”, “Avatar” and/or “sticker”
may be used interchangeably or in combination throughout the description.
[0032] Disclosed herein is a technique for generating a virtual hairstyle. According to
an aspect, the present disclosure provides a technique to automatically generate one or
more virtual hairstyles from one or more images/selfies. The present disclosure also
recites a step of generating one or more emojis using these automatically generated one
or more virtual hairstyles. Particularly, the present disclosure provides a technique to
generate one or more emojis having updated hairstyles. Most importantly, the present
disclosure updates the hairstyle component database with recently trending hairstyles.
[0033] In the following detailed description of the embodiments of the disclosure,
reference is made to the accompanying drawings that form a part hereof, and in which
are shown by way of illustration specific embodiments in which the disclosure may be
practiced. These embodiments are described in sufficient detail to enable those skilled
in the art to practice the disclosure, and it is to be understood that other embodiments
may be utilized and that changes may be made without departing from the scope of the
present disclosure. The following description is, therefore, not to be taken in a limiting
sense.
[0034] In the present document, some of the terms may be used repeatedly throughout
the disclosure. For clarity, said terms are defined below:
[0035] Emoji in the context of the present application may be defined as a set of graphical
symbols or a simple pictorial representation that represents an idea or concept,
independent of any language and specific words or phrases. In particular, an emoji may
be used to convey one’s thoughts and emotions through a messaging platform without
any bar of language. Further, the terms emoji and emoticon may mean more or less the same
in the context of the present application and may be used interchangeably throughout
the disclosure, without departing from the scope of the present application.
[0036] Sticker in the context of the present application may relate to an illustration which
is available or may be designed (using various applications) to be placed on or added to
a message. In simple words, a sticker is an elaborate emoticon, developed to allow more
depth and breadth of expression than what is possible by means of “emojis” or
“emoticons”. Stickers are generally used, on digital media platforms, to quickly and
simply convey an emotion or thought. In some embodiments, the stickers may be
animated, derived from cartoon-like characters or real-life people, etc., and are often
intended to be witty, cute, irreverent or creative, but in a canned kind of way. In some
embodiments, stickers may also be designed to represent real-world events in a more
interactive and fascinating form to be shared between users on various multimedia
messaging platforms.
[0037] Avatar in the context of the present application relates to a graphical representation
of a user, a user’s image/selfie, or the user's character. Thus, it may be said that an avatar
may be configured to represent an emotion/expression/feeling of the user by means of an
image converted into an avatar capturing such emotion/expression/feeling through various
facial expressions or added objects such as hearts, kisses, etc. Further, it is to be
appreciated that an avatar may take either a two-dimensional form, as an icon on
platforms such as messaging/chat platforms, or a three-dimensional form, such as in a
virtual environment. Further, the terms avatar, profile picture, and user pic mean the same in the
context of the present application and may be used interchangeably throughout the
disclosure without departing from the scope of the present application.
[0038] Figure 1 illustrates an exemplary environment 100 for implementing a system
102 configured to generate a virtual hairstyle in accordance with an embodiment of the
present disclosure. The environment 100 includes a processing subsystem 102
(interchangeably referred to as “the system 102”) and one or more user devices
104a-104n (interchangeably referred to as the user device 104), operatively coupled to each
other via a network 106.
[0039] In an aspect of the present invention, the user devices 104a-104n may include
any suitable communication device such as, but not limited to, a mobile device, a
personal digital assistant, a laptop, a personal computer and so forth, which may allow
a user to interact with the system 102. The user devices 104a-104n may also include
one or more cameras to take pictures or to allow users to take their selfies. In an
exemplary embodiment, it is to be appreciated that said user devices 104a-104n may
include other essential components required to perform various aspects of said
invention in accordance with the embodiments of the present invention, and the same
are not explained for the sake of brevity.
[0040] In another aspect, the present invention discloses that said user devices
104a-104n may include a mobile widget or a web platform (not shown) installed
therein that may allow the user of the user device 104a-104n to share his/her image with
the system 102. It is to be appreciated that, for sharing images with the system 102, the
user device 104a-104n may remain connected to the server/system 102 by means of
the mobile widget or the web platform.
[0041] The network 106 may include a data network such as, but not restricted to, the
Internet, Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area
Network (MAN), etc. In certain embodiments, the network 106 may include a wireless
network, such as, but not restricted to, a cellular network and may employ various
technologies including Enhanced Data rates for Global Evolution (EDGE), General
Packet Radio Service (GPRS), Global System for Mobile Communications (GSM),
Internet protocol Multimedia Subsystem (IMS), Universal Mobile
Telecommunications System (UMTS) etc. In other embodiments, the network 106 may
include or otherwise cover networks or subnetworks, each of which may include, for
example, a wired or wireless data pathway.
[0042] In an exemplary embodiment, the user devices 104a-104n may be configured to
transmit one or more facial images of one or more users, which may be processed by
the system 102 to generate one or more virtual hairstyles. A detailed explanation of the
operations performed by the system 102 is provided in the description of Figs. 2 and 3.
[0043] Figure 2 illustrates a block diagram 200 of a processing
subsystem/system 102 for generating a virtual hairstyle in accordance with an
embodiment of the present disclosure. Figure 3 illustrates an exemplary
embodiment 300 representing generation of one or more virtual hairstyles from one or
more facial images in accordance with the system 102, as shown in figure 2, according
to one embodiment of the present disclosure. It must be understood by a person skilled
in the art that the system may also be implemented in various environments, other than as
shown in figure 3.
[0044] Figures 2 and 3 are explained in detail in conjunction with each other.
[0045] In one implementation, the system 102 may comprise a processing unit 202, a
memory 204, and a transceiver 206. The memory 204 may be communicatively coupled to
the processing unit 202. Further, the memory 204 may be configured to store a plurality
of user facial images. The memory 204 may also be configured to store one or more
instructions. The processing unit 202 may be configured to execute the one or more
instructions stored in the memory 204.
[0046] In an exemplary aspect, the memory 204 may include a Random-Access
Memory (RAM) unit and/or a non-volatile memory unit such as a Read Only Memory
(ROM), optical disc drive, magnetic disc drive, flash memory, Electrically Erasable
Programmable Read Only Memory (EEPROM), a memory space on a server or cloud, and so forth.
Further, in an exemplary aspect, the at least one processing unit 202 may include, but is
not restricted to, a general-purpose processor, a Field Programmable Gate Array
(FPGA), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor
(DSP), microprocessors, microcomputers, micro-controllers,
central processing units, state machines, logic circuitries, and/or any devices that
manipulate signals based on operational instructions. In some embodiments, the
processing unit 202 may be implemented by one or more neural networks.
[0047] Coming back to figure 2, the transceiver 206 may be configured to receive one
or more images of users. In an exemplary embodiment, the one or more images of users
may include selfie images retrieved from a storage media or a media capturing device
such as a camera. In some embodiments, the one or more images are stored in the memory
204.
[0048] Figure 3 depicts an exemplary embodiment 300 illustrating the training of the
processing subsystem 102 to generate virtual hairstyles corresponding to
various images/selfies. To train the processing subsystem 102, a plurality of human
facial images 208 are received by the processing subsystem 102 via the transceiver 206.
In an exemplary embodiment, the plurality of human
facial images 208 are received from the user device 104. In an alternative embodiment,
the plurality of human facial images 208 may be received from any suitable data
source, such as, but not limited to, a remote database, an internal memory, and so forth.
[0049] It may also be noted that in the exemplary embodiment 300 only a few facial
images are shown being received by the processing subsystem 102 for the sake of
simplicity. However, the number of images received by the processing subsystem 102
for training/generating virtual hairstyles can be higher or lower. Further, it must also
be noted that, for efficient training of the processing subsystem 102, the human facial
images may correspond to humans with different hairstyles. Further, the memory 204
may store the user facial images.
[0050] As shown in figures 2 and 3, once the plurality of images are retrieved, as
discussed in the above paragraphs, the processing unit 202 may be configured to extract
hair components from the received facial images. In some embodiments, the one or
more hair components are extracted using a Generative Adversarial Network (GAN).
In other embodiments, the one or more hair components are extracted
using segmentation.
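The segmentation-based extraction above can be sketched as a simple mask application. The binary hair mask and the toy image below are hypothetical stand-ins for the output of a real segmentation network, which the disclosure leaves unspecified:

```python
import numpy as np

def extract_hair_component(image, hair_mask):
    """Zero out every pixel not covered by the binary hair mask.

    `image` is an H x W x 3 array; `hair_mask` is an H x W array of
    0/1 values, as a segmentation network would produce.
    """
    return image * hair_mask[..., np.newaxis]

# Toy 2x2 "image" with a mask covering only the top row.
image = np.array([[[10, 10, 10], [20, 20, 20]],
                  [[30, 30, 30], [40, 40, 40]]])
mask = np.array([[1, 1],
                 [0, 0]])
component = extract_hair_component(image, mask)
```

Masked-out pixels become zero while hair pixels are kept unchanged, so the hair component can be processed independently of the rest of the face.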
[0051] Based on the extracted hair components, the processing unit 202 processes the
hair components of each of the human facial images 208 to generate standardized hair
segments corresponding to each of the hair components. The hair segments may include
hair characteristics such as, but not limited to, hair color, length of hair, thickness of
hair, hair texture, hair symmetry and so forth. The hair segments are aligned appropriately
by adjusting the size and position of the hair components based on an already defined standard
size. For example, if the hair component of one of the plurality of facial images 208 has
strands of different lengths, say 10, 11 and 12 cm, then the hair component
of said facial image is aligned to a standard size, say 11 cm. Similarly, the hair
components of the other human facial images 208 are standardized.
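The length adjustment in the example above can be sketched as a simple rescaling. The 11 cm standard is the disclosure's own example value; representing a hair component as a list of strand lengths is an assumption made purely for illustration:

```python
def standardize_lengths(strand_lengths_cm, standard_cm=11.0):
    """Scale a hair component so its mean strand length equals the
    predefined standard size (11 cm, per the example above)."""
    mean_len = sum(strand_lengths_cm) / len(strand_lengths_cm)
    scale = standard_cm / mean_len
    return [length * scale for length in strand_lengths_cm]

# Strands of 10, 11 and 12 cm are aligned to the 11 cm standard.
standardized = standardize_lengths([10.0, 11.0, 12.0])
```

After rescaling, every hair component has the same mean length, so components from different images become directly comparable in the later vectorization step.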
[0052] The processing unit 202 may then be configured to generate a vector
representation for each of the standardized hair segments by assigning a data point to
each hair vector corresponding to the standardized hair segment. The vector representations
may be generated based on the similarity between hair segments, such that similar ones
lie closer together. For example, suppose two hair segments are both black but differ in
shape and size. Then the comparison should be made between the hair segments in such a
way that the distance between these two hair segments is greater than 0 but significantly
less than 1.
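One way to realize the bounded similarity described above is a cosine distance over non-negative feature vectors, which is 0 for identical directions and at most 1. The four-component feature layout below is purely illustrative and not part of the disclosure:

```python
import math

def hair_distance(v1, v2):
    """Cosine distance between two non-negative hair feature vectors;
    the result lies in [0, 1], with 0 meaning identical direction."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return 1.0 - dot / (norm1 * norm2)

# Two black hair segments that differ only in curliness and length;
# the assumed feature order is [black, blond, curliness, length].
black_curly = [1.0, 0.0, 0.8, 0.3]
black_wavy = [1.0, 0.0, 0.5, 0.6]
distance = hair_distance(black_curly, black_wavy)
```

Because both segments share the dominant color feature, the distance comes out greater than 0 but well below 1, matching the behavior described in the paragraph above.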
[0053] The processing unit 202 may then assign a number to each hair vector and
generate one or more clusters based on the numbers. The number may represent an
index of the corresponding hair vector. In some embodiments, a standard clustering
technique (such as k-means), which applies a standard distance function, is used to identify
appropriate clusters. In other embodiments, the distances between each of the hair
segments are compared to identify appropriate clusters. In the exemplary embodiment
300, four clusters are generated based on similarities between the hair segments of the
human facial images 208. For example, cluster 1 consists of hair segments with
black, curly hair of length 11 cm. Cluster 2 consists of hair segments with black,
thick hair of length 12 cm. Cluster 3 consists of hair segments with short hair of
length 10 cm. Cluster 4 consists of hair segments with long, spiky hair of length 14 cm.
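A minimal k-means sketch with a standard Euclidean distance function is shown below. The simple first-k initialization, the iteration count, and the toy 2-D vectors are assumptions for illustration, not the disclosed configuration:

```python
import math

def euclidean(a, b):
    """Standard distance function over hair vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=20):
    """Minimal k-means: returns a cluster label per input vector.
    Centroids are initialized from the first k vectors (simplistic)."""
    centroids = [list(v) for v in vectors[:k]]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: each vector joins its nearest centroid.
        labels = [min(range(k), key=lambda c: euclidean(v, centroids[c]))
                  for v in vectors]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids

# Two well-separated groups of 2-D hair vectors:
vecs = [[0.0, 0.0], [5.0, 5.0], [0.1, 0.0], [5.1, 5.0]]
labels, centroids = kmeans(vecs, k=2)
```

Each similar pair of vectors lands in the same cluster, mirroring how hair segments with similar characteristics are grouped in the embodiment above.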
[0054] Further, the processing unit 202 may select a hair image corresponding to each
cluster based on the centroid of the cluster and the corresponding hair segments. That
is, the hair image is selected based on an average of one or more characteristics of one
or more images (vector representation and/or hair segment) of a cluster. In some
embodiments, the image may be a vector representation. In other embodiments, the
image may be a hair segment. The processing unit 202 then generates a virtual hairstyle
corresponding to each of the selected hair images (vector representation and/or hair
segment). In some embodiments, the processing unit 202 may assign a name to the
generated virtual hairstyle and store the same in the memory. In this way, a number of
hairstyles are generated for different hair looks for the user’s representation. This helps a
user to try a new hairstyle, as the new hairstyle template is stored in the memory 204
with the previously stored hairstyle templates.
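The centroid-based selection may be sketched as picking the member vector nearest the per-dimension mean of the cluster; the 2-D vectors below are illustrative only:

```python
import math

def select_representative(member_vectors):
    """Return the index of the member vector closest to the cluster
    centroid (the per-dimension mean of all members)."""
    n = len(member_vectors)
    centroid = [sum(col) / n for col in zip(*member_vectors)]

    def dist(v):
        return math.sqrt(sum((x - c) ** 2 for x, c in zip(v, centroid)))

    return min(range(n), key=lambda i: dist(member_vectors[i]))

# Three hair vectors in one cluster; the centroid is [1.0, ~0.633],
# so the first vector (closest to it) is selected.
cluster = [[1.0, 0.8], [1.0, 0.9], [1.0, 0.2]]
best_index = select_representative(cluster)
```

Choosing the member nearest the centroid yields a hair segment that is representative of the whole cluster rather than an outlier, which is the intent of the selection step above.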
[0055] Further, the processing unit 202 may utilize the thus-created hairstyles to generate
an emoji/avatar/Animoji for a user by simply selecting from a vast repository of
hairstyles created by the system 102, thus giving the user a very dynamic and more
interactive experience.
[0056] Figure 4 shows a method 400 for generating an emoji using one or more
automatically generated hairstyles according to an embodiment of the present
disclosure. As illustrated in figure 4, the method 400 includes one or more blocks
illustrating a method for generating a hairstyle for a user’s representation. The method
400 may be described in the general context of computer executable instructions.
Generally, computer executable instructions can include routines, programs, objects,
components, data structures, procedures, modules, and functions, which perform
specific functions or implement specific abstract data types. Figure 4 is described in
view of figures 1-3.
[0057] The order in which the method 400 is described is not intended to be construed
as a limitation, and any number of the described method blocks can be combined in
any order to implement the method. Additionally, individual blocks may be deleted
from the methods without departing from the spirit and scope of the subject matter
described.
[0058] At block 402, the method 400 may include receiving a plurality of facial images
corresponding to a plurality of selfies taken over a period of time. In an
exemplary embodiment, the system 102 consists of the transceiver 206, which is
configured to receive a plurality of facial images 208 from the user devices 104a-104n.
In another exemplary embodiment, the system 102 is configured to retrieve the plurality
of facial images 208 previously stored in the memory 204.
[0059] At block 404, the method 400 may include extracting a plurality of hair
components from the received facial images. In an exemplary embodiment, the system
102 consists of the processing unit 202, operatively coupled to the memory 204, which
is configured to extract the hair component or hair part from each of the received facial
images 208.
[0060] At block 406, the method 400 may include processing the plurality of hair
components to generate a standardized hair segment corresponding to each of the
plurality of hair components. In an exemplary embodiment, the processing unit 202 is
configured to generate standardized hair segments including characteristics such as, but
not limited to, hair color, length of hair, thickness of hair, hair texture, hair symmetry
and so forth, corresponding to each of the hair components of each of the human facial
images 208.
[0061] At block 408, the method 400 may include generating a vector representation
corresponding to each of the standardized hair segments. In an exemplary embodiment,
the processing unit 202 is configured to generate a vector representation for each of the
standardized hair segments by assigning a data point to each hair vector based on the
similarity between hair segments.
[0062] At block 410, the method 400 may include grouping the plurality of
standardized hair segments to generate a plurality of clusters. In an exemplary
embodiment, the processing unit 202 is configured to generate a plurality of clusters by
using standard clustering techniques such as k-means.
[0063] At block 412, the method 400 may include selecting a hair segment
corresponding to each cluster, wherein the selection of the hair segment is based on the
centroid of the respective cluster. In an exemplary embodiment, the processing unit 202
is configured to select a hair segment based on an average of one or more characteristics
of one or more images (vector representation and/or hair segment) of a cluster.
[0064] At block 414, the method 400 may include generating a virtual hairstyle
corresponding to each of the selected hair segments. In an exemplary embodiment, the
processing unit 202 is configured to generate different virtual hairstyles such that a user
may select any of the suitable hairstyles created by the system 102.
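Blocks 402-414 can be strung together as the sketch below. Every stage implementation passed in is a trivial stand-in (numbers instead of images, a threshold instead of k-means) used only to show the data flow between the blocks, not the disclosed algorithms:

```python
def generate_virtual_hairstyles(facial_images, extract, standardize,
                                vectorize, cluster, select):
    """End-to-end data flow of method 400, one block per line."""
    components = [extract(img) for img in facial_images]   # block 404
    segments = [standardize(c) for c in components]        # block 406
    vectors = [vectorize(s) for s in segments]             # block 408
    clusters = cluster(vectors)                            # block 410
    return [select(members) for members in clusters]       # blocks 412-414

# Stand-in stages over numeric "images":
styles = generate_virtual_hairstyles(
    facial_images=[1, 2, 9, 10],
    extract=lambda img: img,
    standardize=lambda c: float(c),
    vectorize=lambda s: [s],
    cluster=lambda vs: [[v for v in vs if v[0] < 5],
                        [v for v in vs if v[0] >= 5]],
    select=lambda members: members[0],
)
```

Passing the stages as functions keeps the pipeline shape fixed while allowing each block (GAN-based extraction, k-means clustering, centroid selection) to be swapped in independently.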
[0065] A description of an embodiment with several components in communication
with each other does not imply that all such components are required. On the contrary,
a variety of optional components are described to illustrate the wide variety of possible
embodiments of the invention.
[0066] When a single device or article is described herein, it will be clear that more
than one device/article (whether or not they cooperate) may be used in place of a single
device/article. Similarly, where more than one device or article is described herein
(whether or not they cooperate), it will be clear that a single device/article may be used in
place of the more than one device or article, or a different number of
devices/articles may be used instead of the shown number of devices or programs. The
functionality and/or the features of a device may be alternatively embodied by one or
more other devices which are not explicitly described as having such
functionality/features. Thus, other embodiments of the invention need not include the
device itself.
[0067] Finally, the language used in the specification has been principally selected for
readability and instructional purposes, and it may not have been selected to delineate
or circumscribe the inventive subject matter. It is therefore intended that the scope of
the invention be limited not by this detailed description, but rather by any claims that
issue on an application based hereon. Accordingly, the embodiments of the present
invention are intended to be illustrative, but not limiting, of the scope of the invention,
which is set forth in the following claims.
[0068] While various aspects and embodiments have been disclosed herein, other
aspects and embodiments will be apparent to those skilled in the art. The various aspects
and embodiments disclosed herein are for purposes of illustration and are not intended
to be limiting, with the true scope and spirit being indicated by the following claims.
We Claim:
1. A method for generating a virtual hairstyle, the method comprising:
receiving a plurality of facial images;
extracting a plurality of hair components from the received facial images;
processing the plurality of hair components to generate a standardized hair
segment corresponding to each of the plurality of hair components;
generating a vector representation corresponding to each of the standardized
hair segments;
grouping the plurality of standardized hair segments to generate a plurality of
clusters;
selecting a hair segment corresponding to each cluster, wherein the selection of
the hair segment is based on a centroid of the respective cluster; and
generating a virtual hairstyle corresponding to each of the selected hair
segments.
2. The method as claimed in claim 1, wherein the standardization of the hair segment
comprises adjusting a size and a position of the hair component based on a predefined
standard size.
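The standardization of claim 2 (adjusting a hair component's size and position to a predefined standard size) can be illustrated with a minimal sketch. It assumes the hair component arrives as a binary mask; the function name, the nearest-neighbour resampling, and the centring strategy are all illustrative assumptions, not details from the specification:

```python
import numpy as np

def standardize_segment(mask, std=64):
    """Illustrative standardization: crop a binary hair mask to its bounding
    box, rescale it to fit a predefined standard size, and centre it."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    s = std / max(h, w)                       # scale factor to fit std x std
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    ri = np.arange(nh) * h // nh              # nearest-neighbour row indices
    ci = np.arange(nw) * w // nw              # nearest-neighbour col indices
    resized = crop[ri][:, ci]
    canvas = np.zeros((std, std), dtype=mask.dtype)
    top, left = (std - nh) // 2, (std - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized  # centre on the canvas
    return canvas
```

In practice an implementation might use anti-aliased interpolation rather than nearest-neighbour sampling; the sketch only shows the size-and-position adjustment the claim describes.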
3. The method as claimed in claim 1, wherein the generated virtual hairstyle is
assigned a name and stored in a memory.
4. The method as claimed in claim 1, wherein generating the vector representation
for each of the standardized hair segments further comprises assigning a data point to
each hair vector corresponding to the standardized hair segment.
5. The method as claimed in claim 4, wherein assigning the data point to the
hair vector is based on a similarity between hair segments.
6. The method as claimed in claim 1, wherein generating the plurality of clusters
comprises using a standard distance function to identify similar hair segments based on
the corresponding vector representations.
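Claims 4-6, read together with the selection step of claim 1, describe a standard clustering pipeline: each standardized hair segment becomes a data point in vector space, segments are grouped by a distance function, and the segment nearest each cluster centroid is selected. A minimal sketch of one possible reading follows; the choice of k-means and Euclidean distance is an assumption (the claims only say "a standard distance function"), and all names are illustrative:

```python
import numpy as np

def cluster_segments(vectors, k, iters=20, seed=0):
    """Group hair-segment vectors into k clusters with a plain k-means loop,
    using Euclidean distance as the 'standard distance function'."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector (the claim's "data point") to its nearest centroid.
        d = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its members.
        centroids = np.array([vectors[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

def select_representatives(vectors, labels, centroids):
    """Selection step: per cluster, the segment closest to the centroid is
    chosen as the basis of a generated virtual hairstyle."""
    reps = {}
    for j, c in enumerate(centroids):
        members = np.flatnonzero(labels == j)
        if members.size:
            reps[j] = members[np.argmin(
                np.linalg.norm(vectors[members] - c, axis=1))]
    return reps
```

Selecting the member nearest the centroid (rather than the centroid itself) ensures the representative is an actual extracted hair segment, which matches the claim language of "selecting a hair segment corresponding to each cluster".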
7. A system to generate a virtual hairstyle, the system comprising:
a transceiver configured to receive a plurality of facial images;
a memory configured to store the plurality of facial images; and
a processing unit operatively coupled to the memory, the processing unit
configured to:
retrieve the plurality of facial images;
extract a plurality of hair components from the received facial images;
process the plurality of hair components to generate a standardized hair
segment corresponding to each of the plurality of hair components;
generate a vector representation corresponding to each of the
standardized hair segments;
group the plurality of standardized hair segments to generate a plurality
of clusters;
select a hair segment corresponding to each cluster, wherein the
selection of the hair segment is based on a centroid of the respective cluster; and
generate a virtual hairstyle corresponding to each of the selected hair
segments.
8. The system as claimed in claim 7, wherein the processing unit is configured to
standardize the hair segment by adjusting a size and a position of the hair component
based on a predefined standard size.
9. The system as claimed in claim 7, wherein the processing unit is configured to
assign a name to the generated virtual hairstyle and store the same in the memory.
10. The system as claimed in claim 7, wherein the processing unit is configured to
generate the vector representation for each of the standardized hair segments by
assigning a data point to each hair vector corresponding to the standardized hair
segment.
11. The system as claimed in claim 7, wherein the processing unit is configured to
generate the plurality of clusters using a standard distance function to identify similar
hair segments based on the corresponding vector representations.
| # | Name | Date |
|---|---|---|
| 1 | 202011000471-FER.pdf | 2025-04-03 |
| 2 | 202011000471-STATEMENT OF UNDERTAKING (FORM 3) [06-01-2020(online)].pdf | 2020-01-06 |
| 3 | 202011000471-PROVISIONAL SPECIFICATION [06-01-2020(online)].pdf | 2020-01-06 |
| 4 | 202011000471-FORM 18 [14-11-2023(online)].pdf | 2023-11-14 |
| 5 | 202011000471-COMPLETE SPECIFICATION [05-01-2021(online)].pdf | 2021-01-05 |
| 6 | 202011000471-POWER OF AUTHORITY [06-01-2020(online)].pdf | 2020-01-06 |
| 7 | 202011000471-FORM 1 [06-01-2020(online)].pdf | 2020-01-06 |
| 8 | 202011000471-CORRESPONDENCE-OTHERS [05-01-2021(online)].pdf | 2021-01-05 |
| 9 | 202011000471-DRAWINGS [06-01-2020(online)].pdf | 2020-01-06 |
| 10 | 202011000471-DRAWING [05-01-2021(online)].pdf | 2021-01-05 |
| 11 | 202011000471-Proof of Right [10-02-2020(online)].pdf | 2020-02-10 |
| 12 | 202011000471-DECLARATION OF INVENTORSHIP (FORM 5) [06-01-2020(online)].pdf | 2020-01-06 |
| 13 | abstract.jpg | 2020-01-17 |
| 14 | 202011000471-FORM 3 [14-05-2025(online)].pdf | 2025-05-14 |
| 15 | 202011000471-OTHERS [03-10-2025(online)].pdf | 2025-10-03 |
| 16 | 202011000471-FER_SER_REPLY [03-10-2025(online)].pdf | 2025-10-03 |
| 17 | SearchHistory(5)E_03-07-2024.pdf | |