Abstract: Present disclosure generally relates to digital image processing systems, particularly to method and system for recommending beauty product(s) for subject. Method includes acquiring, using image capturing unit, image depicting face of subject and causing, in real-time, display of options to select virtual makeup templates/virtual face effects to apply to face depicted in image. Further, method includes receiving selection of virtual makeup templates/virtual face effects to apply to face, determining regions of interest in face in retrieved image of subject, and detecting features in regions of interest in face. Method includes analyzing characteristics of features in regions of interest and mapping regions of interest to type of beauty product, and characteristics of features to beauty product. Method includes determining iteratively beauty products corresponding to type of beauty product and characteristics, and recommending beauty products with option to shop.
Description:
FIELD OF INVENTION
[0001] The embodiments of the present disclosure generally relate to digital image processing systems. More particularly, the present disclosure relates to a method and a system for recommending beauty product(s) for a subject, based on determining a plurality of regions of interest in a face in an image of the subject.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Generally, users on social media may predominantly use one or more face filters to enhance their appearance and try suitable looks. The face filters may use Augmented Reality (AR) technology to detect a face and instantaneously superimpose visual effects onto the face. A face filter that suits the user may also become an aspiration for how the user wishes to look in reality. Further, the users may use face filters provided on platforms while capturing a selfie. The face filters may often have makeup components (e.g., blush, lipstick) that enhance the face in an image of the user. The users, however, may not be able to shop for products from face filters that do not have any product tagged to them.
[0004] Conventionally, one or more try-on makeup filters may be provided using one or more shades from certain beauty products. The beauty products may be tagged to the try-on makeup filters to allow the user to shop for the products used in the try-on makeup filters. For example, if a try-on makeup filter is implemented using an 'L' brand blush shade, then the user can try the filter, view, and shop for the product, such as the 'L' brand blush shade used in the filter. Another conventional method provides for creating filters without using a particular brand's blush shade color; such conventional methods may pre-tag any blush with a color similar to the one in the filter. Further, the conventional methods provide virtual makeup try-on at home with and without AR technology. However, the user may not be able to shop from try-on makeup/face filters which do not include products tagged to them. The conventional methods may restrict the user to trying and shopping from filters specifically created using a specific product that is tagged to the filters. Further, while the face filters allow users to try on different filters created by other users, the face filters may not detect the makeup elements in the face filter, look up those elements, and display the relevant matching product that allows the user to shop for it.
[0005] Therefore, there may be a need for a method and a system that overcome the shortcomings of the conventional methods by recommending beauty product(s) for a subject.
SUMMARY
[0006] This section is provided to introduce certain objects and aspects of the present invention in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter. In order to overcome at least a few problems associated with the known solutions as provided in the previous section, an object of the present disclosure is to provide a technique for recommending beauty product(s) for a subject.
[0007] It is an object of the present disclosure to provide a method and a system for recommending beauty product(s) for a subject.
[0009] It is yet another object of the present disclosure to provide a method and a system for addressing a gap between the face filters/effects and associated products in the face filters/effects, by allowing the user to shop for makeup products that may help the user recreate the look of any face filters/effects that the user has tried.
[0010] It is another object of the present disclosure to eliminate the need for a specific filter to be created with beauty products for a user to be able to shop for their respective products.
[0011] It is another object of the present disclosure to provide a method and a system for allowing the users to shop from any face filters/effects, without restricting the users to face filters/effects created using particular products and/or tagged to particular products beforehand.
[0012] It is yet another object of the present disclosure to provide a method and a system for determining accurate matches for each product on one or more electronic commerce (e-commerce) platforms.
[0013] It is yet another object of the present disclosure to provide a method and a system for displaying one or more beauty product results with an option to shop.
[0014] In an aspect, the present disclosure provides a method for recommending beauty product(s) for a subject. The method includes acquiring, using an image capturing unit, an image depicting a face of a subject. Further, the method includes causing, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image. Furthermore, the method includes receiving a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and storing the image with the applied at least one of the virtual makeup templates and the virtual face effects. Additionally, the method includes retrieving the stored image of the subject. The stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup. Further, the method includes determining a plurality of regions of interest in the face in the retrieved image of the subject, and detecting one or more features in the plurality of regions of interest in the face in the image. Furthermore, the method includes analyzing one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image. Further, the method includes mapping the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product. Additionally, the method includes determining iteratively one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product. Further, the method includes recommending the one or more beauty products with an option to shop the recommended one or more beauty products for the subject.
[0015] In an embodiment, applying the at least one of the virtual makeup templates and the virtual face effects, further includes segregating the at least one of the virtual makeup templates and the virtual face effects, according to makeup looks and occasions in a gallery. Further, the method includes causing a display of one or more options to browse the at least one of the virtual makeup templates and the virtual face effects in the gallery according to the segregation.
[0016] In an embodiment, the at least one of the virtual makeup templates and the virtual face effects is applied to the face in the image using an Augmented Reality (AR)/ Virtual Reality (VR) technique, by superimposing the at least one of the virtual makeup templates and the virtual face effects to the face in the image.
[0017] In an embodiment, the one or more features comprise at least one of eyes, eyebrows, eyelashes, nose, mouth, lips, cheeks, chin, and hair.
[0018] In an embodiment, the one or more characteristics of the one or more features comprise at least one of a color, shade, tone, finish, dimension, shape, and texture.
[0019] In an embodiment, the one or more beauty products comprise at least one of makeup products, skin care products, and hair products.
[0020] In an embodiment, the one or more beauty products are determined iteratively using an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique, by matching a shade of the one or more beauty products to a similar shade mentioned in a catalog and a kind of the product.
[0021] In an embodiment, recommending the one or more beauty products further comprises segregating the one or more beauty products into one or more categories according to the recommended one or more beauty products.
[0022] In another aspect, the present disclosure provides a system for recommending beauty product(s) for a subject. The system acquires, using an image capturing unit, an image depicting a face of a subject. Further, the system causes, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image. Furthermore, the system receives a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and stores the image with the applied at least one of the virtual makeup templates and the virtual face effects. Additionally, the system retrieves the stored image of the subject. The stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup. Further, the system determines a plurality of regions of interest in the face in the retrieved image of the subject, and detects one or more features in the plurality of regions of interest in the face in the image. Furthermore, the system analyzes one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image. Additionally, the system maps the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product. Further, the system determines iteratively one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product. Furthermore, the system recommends the one or more beauty products with an option to shop the recommended one or more beauty products for the subject.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0023] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry/sub-components of each component. It will be appreciated by those skilled in the art that such drawings include the electrical components, electronic components, or circuitry commonly used to implement such components.
[0024] FIG. 1 illustrates an exemplary block diagram representation of a network architecture implementing a proposed system for recommending beauty product(s) for a subject, according to embodiments of the present disclosure.
[0025] FIG. 2 illustrates an exemplary detailed block diagram representation of the proposed system, according to embodiments of the present disclosure.
[0026] FIGs. 3A and 3B illustrate exemplary schematic diagram representations of applying virtual makeup templates, virtual face effects, and recommending beauty product(s), according to embodiments of the present disclosure.
[0027] FIG. 4 illustrates a flow chart depicting a method of recommending beauty product(s) for a subject, according to embodiments of the present disclosure.
[0028] FIG. 5 illustrates a hardware platform for the implementation of the disclosed system according to embodiments of the present disclosure.
[0029] The foregoing shall be more apparent from the following more detailed description of the invention.
DETAILED DESCRIPTION OF INVENTION
[0030] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0031] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
[0032] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0033] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0035] As used herein, "connect", "configure", "couple" and their cognate terms, such as "connects", "connected", "configured", and "coupled" may include a physical connection (such as a wired/wireless connection), a logical connection (such as through logical gates of a semiconducting device), other suitable connections, or a combination of such connections, as may be obvious to a skilled person.
[0036] As used herein, "send", "transfer", "transmit", and their cognate terms like "sending", "sent", "transferring", "transmitting", "transferred", "transmitted", etc. include sending or transporting data or information from one unit or component to another unit or component, wherein the content may or may not be modified before or after sending, transferring, or transmitting.
[0037] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0038] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0039] Various embodiments of the present disclosure provide a method and a system for recommending beauty product(s) for a subject. The present disclosure provides a method and a system for addressing a gap between the face filters/effects and associated products in the face filters/effects, by allowing the user to shop for makeup products that may help the user recreate the look of any face filters/effects that the user has tried. The present disclosure may eliminate the need for a specific filter to be created with beauty products for a user to be able to shop for their respective products. The present disclosure provides a method and a system for allowing the users to shop from any face filters/effects, without restricting the users to face filters/effects created using particular products and/or tagged to particular products beforehand. The present disclosure provides a method and a system for determining accurate matches for each product on an electronic commerce (e-commerce) platform. The present disclosure provides a method and a system for displaying one or more beauty product results with an option to shop.
[0040] FIG. 1 illustrates an exemplary block diagram representation of a network architecture 100 implementing a proposed system 110 for recommending beauty product(s) for a subject, according to embodiments of the present disclosure. The network architecture 100 may include an electronic device 108, the system 110, a centralized server 118, and an image capturing unit 120. The system 110 may be connected to the centralized server 118 via a communication network 106. The centralized server 118 may include, but is not limited to, a stand-alone server, a remote server, a cloud computing server, a dedicated server, a rack server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof, and the like. The centralized server 118 may be associated with an entity corresponding to an electronic commerce (e-commerce) environment. The communication network 106 may be a wired communication network or a wireless communication network. The wireless communication network may be any wireless communication network capable of transferring data between entities of that network such as, but not limited to, a Bluetooth network, a Zigbee network, a Near Field Communication (NFC) network, a Wireless-Fidelity (Wi-Fi) network, a Light Fidelity (Li-Fi) network, a carrier network including a circuit-switched network, a packet-switched network, a Public Switched Telephone Network (PSTN), a Content Delivery Network (CDN), the Internet, intranets, Local Area Networks (LANs), Wide Area Networks (WANs), mobile communication networks including a Second Generation (2G), a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a Long-Term Evolution (LTE) network, a New Radio (NR), a Narrow-Band (NB) Internet of Things (IoT) network, a Global System for Mobile Communications (GSM) network, and a Universal Mobile Telecommunications System (UMTS) network, combinations thereof, and the like.
[0041] The system 110 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. For example, the system 110 may be implemented by way of a standalone device such as the centralized server 118, and the like, and may be communicatively coupled to the electronic device 108. In another example, the system 110 may be implemented in/associated with the electronic device 108. In yet another example, the system 110 may be implemented in/associated with respective computing devices 104-1, 104-2, ..., 104-N (individually referred to as the computing device 104, and collectively referred to as the computing devices 104), associated with one or more users 102-1, 102-2, ..., 102-N (individually referred to as the user 102, and collectively referred to as the users 102). In such a scenario, the system 110 may be replicated in each of the computing devices 104. The users 102 may be users of, but are not limited to, an electronic commerce (e-commerce) platform, a merchant platform, a hyperlocal platform, a super-mart platform, a media platform, a service providing platform, a social networking platform, a travel/services booking platform, a messaging platform, a bot processing platform, a virtual assistance platform, an Artificial Intelligence (AI) based platform, a blockchain platform, a blockchain marketplace, and the like. In some instances, the user 102 may correspond to an entity/administrator of platforms/services. Further, the subject may correspond to one of the users 102.
[0042] The electronic device 108 may be at least one of an electrical, an electronic, an electromechanical, and a computing device. The electronic device 108 may include, but is not limited to, a mobile device, a smart-phone, a Personal Digital Assistant (PDA), a tablet computer, a phablet computer, a wearable computing device, a Virtual Reality/Augmented Reality (VR/AR) device, a laptop, a desktop, a server, and the like. The electronic device 108 may be associated with/include the image capturing unit 120. The system 110 may be implemented in hardware or a suitable combination of hardware and software. The system 110 or the centralized server 118 may be associated with entities (not shown). The entities may include, but are not limited to, an e-commerce company, a merchant organization, a travel company, an airline company, a hotel booking company, a company, an outlet, a manufacturing unit, an enterprise, a facility, an organization, an educational institution, a secured facility, a warehouse facility, a supply chain facility, and the like.
[0043] Further, the system 110 may include a processor 112, an Input/Output (I/O) interface 114, and a memory 116. The Input/Output (I/O) interface 114 of the system 110 may be used to receive user inputs from the computing devices 104 associated with the users 102. Further, the system 110 may also include other units such as a display unit, an input unit, an output unit, and the like; however, these are not shown in FIG. 1 for the purpose of clarity. Also, only a few units are shown in FIG. 1; however, the system 110 or the network architecture 100 may include any number of such units, as obvious to a person skilled in the art or as required to implement the features of the present disclosure. The system 110 may be a hardware device including the processor 112 executing machine-readable program instructions to recommend beauty product(s) for a subject.
[0044] Execution of the machine-readable program instructions by the processor 112 may enable the proposed system 110 to recommend beauty product(s) for a subject. The “hardware” may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, or other suitable hardware. The “software” may comprise one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code, or other suitable software structures operating in one or more software applications or on one or more processors. The processor 112 may include, for example, but is not limited to, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and any devices that manipulate data or signals based on operational instructions, and the like. Among other capabilities, the processor 112 may fetch and execute computer-readable instructions in the memory 116 operationally coupled with the system 110 for performing tasks such as data processing, input/output processing, feature extraction, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being or that may be performed on data.
[0045] In the example that follows, assume that a user 102 of the system 110 desires to improve/add additional features to recommend beauty product(s) for a subject. In this instance, the user 102 may include an administrator of a website, an administrator of an e-commerce site, an administrator of a social media site, an administrator of an e-commerce application/ social media application/other applications, an administrator of media content (e.g., television content, video-on-demand content, online video content, graphical content, image content, augmented/virtual reality content, metaverse content), an administrator of supply chain platform, an administrator of blockchain marketplace, an administrator of a travel/services booking platform, an administrator of merchant platform, among other examples, and the like. The system 110 when associated with the electronic device 108 or the centralized server 118 may include, but is not limited to, a touch panel, a soft keypad, a hard keypad (including buttons), and the like.
[0046] In an embodiment, the system 110 may acquire, using the image capturing unit 120, an image depicting a face of the subject (i.e., user 102).
[0047] In an embodiment, the system 110 may cause, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image.
[0048] In an embodiment, the system 110 may receive a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and store the image with the applied at least one of the virtual makeup templates and the virtual face effects. For applying the at least one of the virtual makeup templates and the virtual face effects, the system 110 may further segregate the at least one of the virtual makeup templates and the virtual face effects, according to makeup looks and occasions in a gallery. Further, the system 110 may cause a display of one or more options to browse the at least one of the virtual makeup templates and the virtual face effects in the gallery according to the segregation. In an embodiment, the virtual makeup templates and the virtual face effects may be applied to the face in the image using an Augmented Reality (AR)/ Virtual Reality (VR) technique, by superimposing the at least one of the virtual makeup templates and the virtual face effects to the face in the image.
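By way of a non-limiting illustration of the superimposition described above, the sketch below alpha-blends a virtual lipstick template onto the lip region of an image, assuming lip landmark coordinates are already available from any face-landmark detector. The function name, template color, and blend strength are assumptions for illustration, not part of the disclosure.

```python
import numpy as np
import cv2  # OpenCV for polygon fill and alpha blending

def apply_lip_template(face_img, lip_landmarks, template_rgb=(170, 30, 60), alpha=0.4):
    """Superimpose a virtual lipstick template onto the lip region.

    face_img      : HxWx3 uint8 BGR image of the face.
    lip_landmarks : Nx2 (x, y) points outlining the lips, assumed to come
                    from any face-landmark detector.
    template_rgb  : illustrative template color (R, G, B) -- an assumption.
    alpha         : blend strength of the virtual effect -- an assumption.
    """
    pts = np.asarray(lip_landmarks, dtype=np.int32).reshape(-1, 1, 2)
    bgr = tuple(int(c) for c in reversed(template_rgb))

    # Paint the template color over the lip polygon on a copy of the image.
    overlay = face_img.copy()
    cv2.fillPoly(overlay, [pts], bgr)

    # Blend the painted copy with the original, but only inside the polygon,
    # so the rest of the face is left untouched.
    mask = np.zeros(face_img.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)
    blended = cv2.addWeighted(overlay, alpha, face_img, 1 - alpha, 0)
    out = face_img.copy()
    out[mask == 255] = blended[mask == 255]
    return out
```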
[0049] In an embodiment, the system 110 may retrieve the stored image of the subject. In an embodiment, the stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup.
[0050] In an embodiment, the system 110 may determine a plurality of regions of interest in the face in the retrieved image of the subject, and detect one or more features in the plurality of regions of interest in the face in the image. In an embodiment, the plurality of regions of interest may be determined using a pre-defined table that marks regions of the face with an application area of a product or a type/shade of product. In an embodiment, the one or more features include, but are not limited to, eyes, eyebrows, eyelashes, nose, mouth, lips, cheeks, chin, hair, and the like. In another embodiment, the system 110 may detect, in real-time, one or more features in the plurality of regions of interest in the face when the user 102 is viewed in the image capturing unit 120. In this way, the user 102 can shop for products while trying them on, even before clicking a photo.
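A minimal sketch of the table-driven determination of regions of interest described above, assuming a pre-defined table keyed by region name; the index ranges (borrowed from a common 68-point landmark layout) and product types are illustrative assumptions.

```python
# A sketch of the pre-defined table described above: each facial region is
# marked with the landmark indices bounding it and the type of beauty
# product applied there. The index ranges follow the common 68-point
# landmark layout and are illustrative assumptions.
REGION_TABLE = {
    "lips":     {"landmark_ids": range(48, 68), "product_type": "lipstick"},
    "cheeks":   {"landmark_ids": range(1, 17),  "product_type": "blush"},
    "eyes":     {"landmark_ids": range(36, 48), "product_type": "eyeshadow"},
    "eyebrows": {"landmark_ids": range(17, 27), "product_type": "brow pencil"},
}

def regions_of_interest(landmarks):
    """Return {region_name: list of (x, y) points} for each table entry.

    landmarks : sequence of at least 68 (x, y) points from any
                face-landmark detector (assumed layout).
    """
    return {
        name: [landmarks[i] for i in entry["landmark_ids"]]
        for name, entry in REGION_TABLE.items()
    }
```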
[0051] In an embodiment, the system 110 may analyze one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image. In an embodiment, the one or more characteristics of the one or more features include, but are not limited to, a color, shade, tone, finish, texture, dimension, shape, and the like. For example, the thickness and shape characteristics may help identify a region of interest as an eyeliner rather than an eyeshadow.
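A minimal sketch of such a characteristic analysis, assuming a binary mask is available for each region of interest; the mean-color estimate, the thickness heuristic, and its threshold are illustrative assumptions rather than the disclosed method.

```python
import numpy as np

def analyze_region(image, mask):
    """Extract illustrative characteristics (color/shade and a thickness
    cue) from one region of interest.

    image : HxWx3 uint8 RGB image.
    mask  : HxW boolean array marking the region's pixels.
    """
    pixels = image[mask]                    # N x 3 pixels inside the region
    mean_rgb = pixels.mean(axis=0)          # a simple color/shade estimate

    # A rough thickness cue: region area divided by its horizontal extent.
    ys, xs = np.nonzero(mask)
    width = xs.max() - xs.min() + 1
    thickness = mask.sum() / max(int(width), 1)
    return {"mean_rgb": mean_rgb.round(1).tolist(),
            "thickness_px": float(thickness)}

def eyeliner_or_eyeshadow(characteristics, thin_threshold_px=6.0):
    """As noted above: a thin band along the lash line suggests eyeliner,
    while a broader lid region suggests eyeshadow (threshold is assumed)."""
    return ("eyeliner" if characteristics["thickness_px"] < thin_threshold_px
            else "eyeshadow")
```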
[0052] In an embodiment, the system 110 may map the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product. In an embodiment, an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique may be used to understand different faces and adapt them to the type of beauty product. In another example, the AI technique and the ML technique may be used to understand, but not limited to, facial parts, facial expressions, and the like, and adapt different facial parts and facial expressions to the type of beauty product. In another example, the AI technique and the ML technique may be used to understand and analyze characteristics of the face or a facial part such as, but not limited to, color, shade, tone, finish, dimension, shape, texture, and the like.
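One way an ML technique could realize this mapping is sketched below with a nearest-neighbour classifier over simple color-and-thickness feature vectors; the model choice, feature layout, and training values are assumptions for illustration, since the disclosure does not fix a particular model.

```python
from sklearn.neighbors import KNeighborsClassifier

# Illustrative training rows: (mean R, mean G, mean B, thickness in px) of a
# region, labelled with the product type applied there. All values are made
# up for this sketch.
X_train = [
    [170, 30, 60, 12.0],     # broad red region on the lips    -> lipstick
    [230, 140, 150, 20.0],   # soft pink patch on the cheeks   -> blush
    [40, 30, 35, 3.0],       # thin dark line at the lash line -> eyeliner
    [120, 80, 140, 9.0],     # wider tinted lid region         -> eyeshadow
]
y_train = ["lipstick", "blush", "eyeliner", "eyeshadow"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# Map a newly analyzed region (its color and thickness) to a product type.
print(model.predict([[45, 32, 30, 2.5]]))  # -> ['eyeliner']
```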
[0053] In an embodiment, the system 110 may determine iteratively one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product. In an embodiment, the one or more beauty products may be determined iteratively using, but not limited to, an Artificial Intelligence (AI) technique, a Machine Learning (ML) technique, or any other technique without the use of AI and ML, and the like. The AI and ML techniques iteratively determine the one or more beauty products by matching a shade of the one or more beauty products to a similar shade mentioned in a catalog and a kind of the product.
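A minimal sketch of the catalog matching described above, assuming catalog entries that carry a product kind and a representative shade color; plain Euclidean RGB distance stands in for whatever similarity measure the AI/ML technique would use, and the catalog rows are hypothetical.

```python
import math

# Hypothetical catalog rows: product kind, shade name, representative color.
CATALOG = [
    {"kind": "lipstick", "shade": "Brick Red",  "rgb": (165, 42, 42)},
    {"kind": "lipstick", "shade": "Coral Pink", "rgb": (248, 131, 121)},
    {"kind": "blush",    "shade": "Rose",       "rgb": (255, 153, 170)},
]

def match_products(target_rgb, kind, top_k=2):
    """Keep only catalog products of the mapped kind, ranked by color
    distance to the analyzed shade. Plain Euclidean RGB distance is used
    here; a perceptual space such as CIELAB would be an obvious refinement."""
    candidates = [p for p in CATALOG if p["kind"] == kind]
    return sorted(candidates, key=lambda p: math.dist(p["rgb"], target_rgb))[:top_k]

print(match_products((170, 30, 60), "lipstick"))  # Brick Red ranks first
```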
[0054] In an embodiment, the system 110 may recommend the one or more beauty products with an option to shop the recommended one or more beauty products for the subject. In an embodiment, the one or more beauty products include, but are not limited to, makeup products, skin care products, hair products, and the like. In an embodiment, for recommending the one or more beauty products, the system 110 may further segregate the one or more beauty products into one or more categories according to the recommended one or more beauty products.
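A short sketch of the category segregation described above, assuming each recommended product carries a 'kind' field (a hypothetical schema).

```python
from collections import defaultdict

def segregate(recommendations):
    """Group recommended products into categories for display, as described
    above; each recommendation is assumed to carry a 'kind' field."""
    by_category = defaultdict(list)
    for product in recommendations:
        by_category[product["kind"]].append(product)
    return dict(by_category)

recs = [
    {"kind": "lipstick", "shade": "Brick Red"},
    {"kind": "blush",    "shade": "Rose"},
    {"kind": "lipstick", "shade": "Coral Pink"},
]
print(segregate(recs))  # {'lipstick': [two entries], 'blush': [one entry]}
```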
[0055] FIG. 2 illustrates an exemplary detailed block diagram representation of the proposed system 110, according to embodiments of the present disclosure. The system 110 may include the processor 112, the Input/Output (I/O) interface 114, and the memory 116. In some implementations, the system 110 may include data 202, and modules 204. As an example, the data 202 may be stored in the memory 116 configured in the system 110 as shown in FIG. 2.
[0056] In an embodiment, the data 202 may include face data 206, options data 208, virtual makeup templates and face effects data 210, image data 212, region of interest data 214, features data 216, beauty products data 218, and other data 220. In an embodiment, the data 202 may be stored in the memory 116 in the form of various data structures. Additionally, the data 202 can be organized using data models, such as relational or hierarchical data models. The other data 220 may store data, including temporary data and temporary files, generated by the modules 204 for performing the various functions of the system 110.
[0057] In an embodiment, the modules 204, may include an acquiring module 222, a causing module 224, a receiving module 226, a retrieving module 228, a determining module 230, an analyzing module 232, a mapping module 234, a recommending module 236, and other modules 238.
[0058] In an embodiment, the data 202 stored in the memory 116 may be processed by the modules 204 of the system 110. The modules 204 may be stored within the memory 116. In an example, the modules 204 communicatively coupled to the processor 112 configured in the system 110, may also be present outside the memory 116, as shown in FIG. 2, and implemented as hardware. As used herein, the term modules refer to an Application-Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
[0059] In an embodiment, the acquiring module 222 may acquire, using the image capturing unit 120, an image depicting a face of the subject (i.e., user 102). The face of the subject may be stored as the face data 206.
[0060] In an embodiment, the causing module 224 may cause, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image. The display of one or more options may be stored as the options data 208. The selected at least one of virtual makeup templates and virtual face effects may be stored as the virtual makeup templates and face effects data 210.
[0061] In an embodiment, the receiving module 226 may receive a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image. The system 110 may store the image with the applied at least one of the virtual makeup templates and the virtual face effects. The image with the applied at least one of the virtual makeup templates and the virtual face effects may be stored as the image data 212. For applying the at least one of the virtual makeup templates and the virtual face effects, the system 110 may further segregate the at least one of the virtual makeup templates and the virtual face effects, according to makeup looks and occasions in a gallery. Further, the causing module 224 may cause a display of one or more options to browse the at least one of the virtual makeup templates and the virtual face effects in the gallery according to the segregation. In an embodiment, the virtual makeup templates and the virtual face effects may be applied to the face in the image using an Augmented Reality (AR)/ Virtual Reality (VR) technique, by superimposing the at least one of the virtual makeup templates and the virtual face effects to the face in the image.
[0062] In an embodiment, the retrieving module 228 may retrieve the stored image of the subject. In an embodiment, the stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup.
[0063] In an embodiment, the determining module 230 may determine a plurality of regions of interest in the face in the retrieved image of the subject. The determined plurality of regions of interest may be stored as the region of interest data 214. The system 110 may detect one or more features in the plurality of regions of interest in the face in the image. In an embodiment, the one or more features include, but are not limited to, eyes, eyebrows, eyelashes, nose, mouth, lips, cheeks, chin, hair, and the like. The detected one or more features in the plurality of regions of interest may be stored as the features data 216.
[0064] In an embodiment, the analyzing module 232 may analyze one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image. In an embodiment, the one or more characteristics of the one or more features include, but are not limited to, a color, shade, tone, finish, texture, dimension, shape, and the like.
[0065] In an embodiment, the mapping module 234 may map the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product. The type of beauty product may be stored as the beauty products data 218.
[0066] In an embodiment, the determining module 230 may determine iteratively one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product. In an embodiment, the one or more beauty products are determined iteratively using an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique, by matching a shade of the one or more beauty products to a similar shade mentioned in a catalog and a kind of the product.
[0067] In an embodiment, the recommending module 236 may recommend the one or more beauty products with an option to shop the recommended one or more beauty products for the subject. One or more beauty products may be stored as the beauty products data 218. In an embodiment, the one or more beauty products include, but are not limited to, makeup products, skin care products, hair products, and the like. In an embodiment, for recommending the one or more beauty products, the system 110 may further segregate the one or more beauty products into one or more categories according to the recommended one or more beauty products.
Exemplary scenario:
[0068] Consider a scenario in which the user 102 lands on a cosmetics category landing page, or on a search results page, as shown in FIG. 3A (a). Further, the user 102 may click on the option displayed as “Now try out makeup filters”. Subsequently, a camera in the electronic device 108 may be enabled, which includes options such as, but not limited to, camera settings, an option to view oneself without any filter, different filters with filter names displayed at the bottom with an option to ‘favorite’ or share the filters with others, an option to go to a filter gallery to view all available filters, and the like, as shown in FIG. 3A (b).
[0069] Further, the user 102 may click a photo with an applied makeup filter. The user can save or share the photo. The electronic device 108 may recommend beauty products upon clicking the photo, as shown in FIG. 3A (c). Further, in an exemplary scenario, the user may view products in real-time without clicking/uploading a photo. The user 102 can shop for products that may help them create a look akin to the one in the photo clicked with the makeup filter. The user 102 can browse all the products in a regular grid format by clicking on a chevron icon, as shown in FIG. 3A (c) and FIG. 3A (d); on clicking the chevron icon on the screen of FIG. 3A (c), the user can view all products in grid form in FIG. 3A (d). The user 102 can also filter products based on categories, e.g., lipstick.
[0070] Further, as depicted in FIG. 3B (a), a search icon may allow the user 102 to go to a makeup filter gallery. The users 102 can browse all the makeup filters on the makeup filter gallery page, where they can segregate filters by makeup looks, e.g., party or casual, as shown in FIG. 3B (b).
[0071] FIG. 4 illustrates a flow chart depicting a method 400 of recommending beauty product(s) for a subject, according to embodiments of the present disclosure.
[0072] At block 402, the method 400 includes acquiring, by the processor 112 associated with the system 110, using an image capturing unit, an image depicting a face of a subject 102.
[0073] At block 404, the method 400 includes causing, by the processor 112, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image.
[0074] At block 406, the method 400 includes receiving, by the processor 112, a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and storing the image with the applied at least one of the virtual makeup templates and the virtual face effects.
[0075] At block 408, the method 400 includes retrieving, by the processor 112, the stored image of the subject. The stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup.
[0076] At block 410, the method 400 includes determining, by the processor 112, a plurality of regions of interest in the face in the retrieved image of the subject, and detecting one or more features in the plurality of regions of interest in the face in the image.
[0077] At block 412, the method 400 includes analyzing, by the processor 112, one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image.
[0078] At block 414, the method 400 includes mapping, by the processor 112, the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product.
[0079] At block 416, the method 400 includes determining iteratively, by the processor 112, one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product.
[0080] At block 418, the method 400 includes recommending, by the processor 112, the one or more beauty products with an option to shop the recommended one or more beauty products for the subject.
[0081] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined or otherwise performed in any order to implement the method 400 or an alternate method. Additionally, individual blocks may be deleted from the method 400 without departing from the spirit and scope of the present disclosure described herein. Furthermore, the method 400 may be implemented in any suitable hardware, software, firmware, or a combination thereof, that exists in the related art or that is later developed. The method 400 describes, without limitation, the implementation of the system 110. A person of skill in the art will understand that method 400 may be modified appropriately for implementation in various manners without departing from the scope and spirit of the disclosure.
[0082] FIG. 5 illustrates a hardware platform 500 for implementation of the disclosed system 110, according to an example embodiment of the present disclosure. For the sake of brevity, the construction and operational features of the system 110 which are explained in detail above are not explained in detail herein. In particular, computing machines such as, but not limited to, internal/external server clusters, quantum computers, desktops, laptops, smartphones, tablets, and wearables may be used to execute the system 110 or may include the structure of the hardware platform 500. As illustrated, the hardware platform 500 may include additional components not shown, and some of the components described may be removed and/or modified. For example, a computer system with multiple GPUs may be located on external cloud platforms including Amazon® Web Services, internal corporate cloud computing clusters, organizational computing resources, and the like.
[0083] The hardware platform 500 may be a computer system, such as the system 110, that may be used with the embodiments described herein. The computer system may represent a computational platform that includes components that may be in a server or another computer system. The computer system may execute, by the processor 505 (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions, and other processes described herein. These methods, functions, and other processes may be embodied as machine-readable instructions stored on a computer-readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory). The computer system may include the processor 505 that executes software instructions or code stored on a non-transitory computer-readable storage medium 510 to perform methods of the present disclosure. The software code includes, for example, instructions to gather data and documents and analyze documents. In an example, the modules 204 may be software codes or components performing these steps. For example, the modules may include an acquiring module 222, a causing module 224, a receiving module 226, a retrieving module 228, a determining module 230, an analyzing module 232, a mapping module 234, a recommending module 236, and other modules 238.
[0084] The instructions on the computer-readable storage medium 510 are read and stored in the storage 515 or in random access memory (RAM). The storage 515 may provide a space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM, such as the RAM 520. The processor 505 may read instructions from the RAM 520 and perform actions as instructed.
[0085] The computer system may further include the output device 525 to provide at least some of the results of the execution as output, including, but not limited to, visual information to users, such as external agents. The output device 525 may include a display on computing devices and virtual reality glasses. For example, the display may be a mobile phone screen or a laptop screen. GUIs and/or text may be presented as an output on the display screen. The computer system may further include an input device 530 to provide a user or another device with mechanisms for entering data and/or otherwise interacting with the computer system. The input device 530 may include, for example, a keyboard, a keypad, a mouse, or a touchscreen. The output device 525 and the input device 530 may each be joined by one or more additional peripherals. For example, the output device 525 may be used to display results such as the recommended beauty products.
[0086] A network communicator 535 may be provided to connect the computer system to a network and, in turn, to other devices connected to the network, including other clients, servers, data stores, and interfaces, for instance. The network communicator 535 may include, for example, a network adapter such as a LAN adapter or a wireless adapter. The computer system may include a data sources interface 540 to access the data source 545. The data source 545 may be an information resource. As an example, a database of exceptions and rules may be provided as the data source 545. Moreover, knowledge repositories and curated data may be other examples of the data source 545.
[0087] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the invention and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0088] The present disclosure provides a method and a system for recommending beauty product(s) for a subject.
[0089] The present disclosure provides a method and a system for addressing a gap between the face filters/effects and associated products in the face filters/effects, by allowing the user to shop for makeup products that may help the user recreate the look of any face filters/effects that the user has tried.
[0090] The present disclosure may eliminate the need for a specific filter to be created with beauty products for a user to be able to shop for their respective products.
[0091] The present disclosure provides a method and a system for allowing the users to shop from any face filters/effects, without restricting the users to face filters/effects created using particular products and/or tagged to particular products beforehand.
[0092] The present disclosure provides a method and a system for determining accurate matches for each product on an electronic commerce (e-commerce) platform.
[0093] The present disclosure provides a method and a system for displaying one or more beauty product results with an option to shop.
Claims:
1. A method for recommending beauty product(s) for a subject (102), the method comprising:
acquiring, by a processor (112) associated with a system (110), using an image capturing unit (120), an image depicting a face of a subject (102);
causing, by the processor (112), in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image;
receiving, by the processor (112), a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and storing the image with the applied at least one of the virtual makeup templates and the virtual face effects;
retrieving, by the processor (112), the stored image of the subject (102), wherein the stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup;
determining, by the processor (112), a plurality of regions of interest in the face in the retrieved image of the subject (102), and detecting one or more features in the plurality of regions of interest in the face in the image;
analyzing, by the processor (112), one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image;
mapping, by the processor (112), the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product;
determining iteratively, by the processor (112), one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product; and
recommending, by the processor (112), the one or more beauty products with an option to shop the recommended one or more beauty products for the subject (102).
2. The method as claimed in claim 1, wherein applying the at least one of the virtual makeup templates and the virtual face effects, further comprises:
segregating, by the processor (112), the at least one of the virtual makeup templates and the virtual face effects, according to makeup looks and occasions in a gallery; and
causing, by the processor (112), a display of one or more options to browse the at least one of the virtual makeup templates and the virtual face effects in the gallery according to the segregation.
3. The method as claimed in claim 1, wherein the at least one of the virtual makeup templates and the virtual face effects is applied to the face in the image using an Augmented Reality (AR)/ Virtual Reality (VR) technique, by superimposing the at least one of the virtual makeup templates and the virtual face effects to the face in the image.
4. The method as claimed in claim 1, wherein the one or more features comprise at least one of eyes, eyebrows, eyelashes, nose, mouth, lips, cheeks, chin, and hair.
5. The method as claimed in claim 1, wherein the one or more characteristics of the one or more features comprise at least one of a color, shade, tone, finish, dimension, shape, and texture.
6. The method as claimed in claim 1, wherein the one or more beauty products comprise at least one of makeup products, skin care products, and hair products.
7. The method as claimed in claim 1, wherein the one or more beauty products are determined iteratively using an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique, by matching a shade of the one or more beauty products to a similar shade mentioned in a catalog and a kind of the product.
8. The method as claimed in claim 1, wherein recommending the one or more beauty products further comprises segregating the one or more beauty products into one or more categories according to the recommended one or more beauty products.
9. A system (110) for recommending beauty product(s) for a subject (102), the system (110) comprising:
a processor (112);
a memory (116) coupled to the processor (112), wherein the memory (116) comprises processor-executable instructions, which on execution, cause the processor (112) to:
acquire, using an image capturing unit (120), an image depicting a face of a subject (102);
cause, in real-time, a display of one or more options to select at least one of virtual makeup templates and virtual face effects to apply to the face depicted in the image;
receive a selection of at least one of the virtual makeup templates and the virtual face effects to apply to the face in the image, and store the image with the applied at least one of the virtual makeup templates and the virtual face effects;
retrieve the stored image of the subject (102), wherein the stored image corresponds to at least one of the image with the applied at least one of the virtual makeup templates and the virtual face effects, and a pre-stored image with a real makeup;
determine a plurality of regions of interest in the face in the retrieved image of the subject (102), and detect one or more features in the plurality of regions of interest in the face in the image;
analyze one or more characteristics of the one or more features in the plurality of regions of interest in the face in the image;
map the determined plurality of regions of interest in the face to a type of beauty product, and the analyzed characteristics of the one or more features to the beauty product;
determine iteratively one or more beauty products corresponding to the type of the beauty product and the characteristics of the beauty product; and
recommend the one or more beauty products with an option to shop the recommended one or more beauty products for the subject (102).
10. The system (110) as claimed in claim 9, wherein, for applying the at least one of the virtual makeup templates and the virtual face effects, the processor (112) is further configured to:
segregate the at least one of the virtual makeup templates and the virtual face effects, according to makeup looks and occasions in a gallery; and
cause a display of one or more options to browse the at least one of the virtual makeup templates and the virtual face effects in the gallery according to the segregation.
11. The system (110) as claimed in claim 9, wherein the at least one of the virtual makeup templates and the virtual face effects is applied to the face in the image using an Augmented Reality (AR)/ Virtual Reality (VR) technique, by superimposing the at least one of the virtual makeup templates and the virtual face effects to the face in the image.
12. The system (110) as claimed in claim 9, wherein the one or more features comprise at least one of eyes, eyebrows, eyelashes, nose, mouth, lips, cheeks, chin, and hair.
13. The system (110) as claimed in claim 9, wherein the one or more characteristics of the one or more features comprise at least one of a color, shade, tone, finish, dimension, shape, and texture.
14. The system (110) as claimed in claim 9, wherein the one or more beauty products comprise at least one of makeup products, skin care products, and hair products.
15. The system (110) as claimed in claim 9, wherein the one or more beauty products are determined iteratively using an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique, by matching a shade of the one or more beauty products to a similar shade mentioned in a catalog and a kind of the product.
16. The system (110) as claimed in claim 9, wherein for recommending the one or more beauty products, the processor (112) is further configured to segregate the one or more beauty products into one or more categories according to the recommended one or more beauty products.