
System And Method For Cosmetic Products Detection And Classification Using Densenet And Grey Wolf Algorithm

Abstract: The present invention discloses a system (100) designed for cosmetic products detection and classification. This system comprises an input unit (102) enabling users to input images of cosmetic products via a user device (106). The system (100) comprises a processing unit (108) to receive these images for classification. The system (100) utilizes a DenseNet architecture for performing feature extraction and classification on the received images using a dataset. Additionally, the system (100) employs a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture, including filter weights, biases, and learning rates, or combinations thereof. The system (100) generates classification results indicating the category or type of the cosmetic products based on learned features and optimized parameters. Claims: 10, Figures: 5


Patent Information

Application #
Filing Date
24 May 2024
Publication Number
22/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR University
SR University, Ananthasagar, Warangal Telangana India 506371 patent@sru.edu.in 08702818333

Inventors

1. Naresh Kumar Sripada
SR University, Ananthasagar, Warangal, Telangana-506371, India
2. Sirikonda Shwetha
Kakatiya University, Warangal, Telangana-506009, India
3. Kothakonda Chandhar
SR University, Ananthasagar, Warangal, Telangana-506371, India
4. P. Pramod Kumar
SR University, Ananthasagar, Warangal, Telangana-506371, India
5. V. Thirupathi
SR University, Ananthasagar, Warangal, Telangana-506371, India
6. CH Sandeep
SR University, Ananthasagar, Warangal, Telangana-506371, India

Specification

Description:BACKGROUND
FIELD OF INVENTION
[001] The present invention pertains to a field of computer vision and artificial intelligence, particularly in the domain of cosmetic product classification.
DESCRIPTION OF RELATED ART
[002] The cosmetic industry grapples with the intricate task of accurately classifying and identifying a myriad of cosmetic products, primarily due to their diverse nature and the rapid pace of market changes. Traditional methods predominantly rely on manual classification procedures or simplistic image processing algorithms, which often lead to inefficiencies and errors in the classification process. As cosmetic product lines continue to expand and diversify, these conventional approaches become increasingly inadequate in meeting the demands of modern cosmetic classification requirements.
[003] Deep learning techniques, particularly convolutional neural networks (CNNs), emerge as promising solutions for automating cosmetic classification tasks. CNNs possess the ability to automatically learn and extract relevant features from cosmetic product images, enabling them to discern intricate patterns and characteristics that are crucial for accurate classification. By leveraging large datasets of labeled cosmetic images, CNNs can be trained to recognize subtle differences in packaging, color, texture, and branding, which are pivotal factors in determining the category or type of each cosmetic product.
[004] However, despite their potential, optimizing CNN architectures for cosmetic classification tasks remains a significant challenge. The complexity of cosmetic images, coupled with the need for robust and efficient feature extraction, necessitates sophisticated network architectures and optimization techniques. Researchers and practitioners are continuously exploring innovative approaches to enhance the performance of CNNs, striving to achieve higher accuracy and efficiency in cosmetic product classification. Through ongoing research and development efforts, the cosmetic industry aims to harness the full potential of deep learning technologies to overcome existing challenges and revolutionize the process of cosmetic classification.
[005] There is thus a need for an improved and advanced system and method for cosmetic products detection and classification that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a system for cosmetic products detection and classification, comprising: an input unit adapted to enable a user to input the images of cosmetic products using a user device; a processing unit connected to the input unit, configured to receive the images of the cosmetic products for classification from the input unit; utilize a DenseNet architecture for feature extraction and classification of the received images of the cosmetic product using a dataset; employ a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture, wherein the parameters are selected from filter weights, biases, learning rates, or a combination thereof; and generate classification results indicating a category or a type of the cosmetic products based on the learned features and the optimized parameters.
[007] Embodiments in accordance with the present invention further provide a method for cosmetic products detection and classification using a system, comprising: receiving images of the cosmetic products for classification from an input unit; utilizing, by a processing unit, a DenseNet architecture for feature extraction and classification of the received images of the cosmetic products; employing a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture, wherein the parameters are selected from filter weights, biases, learning rates, or a combination thereof; and generating classification results indicating a category or a type of the cosmetic products based on the learned features and the optimized parameters.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a system and method for cosmetic product detection and classification, ensuring accurate and efficient categorization of cosmetic products using advanced deep learning techniques and optimization algorithms.
[009] Next, embodiments of the present application may provide a system that effectively handles variations in cosmetic products, including a packaging, a color, a texture, and a brand, thereby enhancing the reliability and robustness of a classification process.
[0010] Next, embodiments of the present application may provide a system that integrates DenseNet architecture and the Grey Wolf Algorithm, allowing for efficient feature extraction, parameter optimization, and classification of cosmetic products with enhanced accuracy and computational effectiveness.
[0011] Next, embodiments of the present application may provide a system that streamlines cosmetic product categorization, leading to increased efficiency and reduced manual effort in tasks such as inventory control, retail functions, and e-commerce operations.
[0012] Next, embodiments of the present application may provide a system that scales effortlessly to accommodate larger datasets and diverse product categories, thereby catering to the evolving needs of the cosmetic industry and adapting to changing market trends.
[0013] Next, embodiments of the present application may provide a system and method for continuous improvement, wherein the model's performance is monitored in real-world scenarios, and feedback is incorporated to further refine and enhance the classification system.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein. The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible by utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0016] FIG. 1 illustrates a block diagram of a system for cosmetic product detection and classification, according to an embodiment of the present invention;
[0017] FIG. 2 illustrates a block diagram of a processing unit of the system for cosmetic product detection and classification, according to an embodiment of the present invention;
[0018] FIG. 3A depicts a confusion matrix representing the classification performance of the system for 16 samples of cosmetic products, according to an embodiment of the present invention;
[0019] FIG. 3B illustrates graphs of training and validation accuracy and loss of the system for cosmetic product detection and classification, according to an embodiment of the present invention; and
[0020] FIG. 4 depicts a flowchart of a method for cosmetic product detection and classification, according to an embodiment of the present invention.
[0021] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0022] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0023] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0024] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0025] FIG. 1 illustrates a block diagram of a cosmetic product classification system 100 (hereinafter referred to as the system 100) for cosmetic product detection and classification, according to an embodiment of the present invention. In an embodiment of the present invention, the system 100 may be adapted to receive input images of cosmetic products and classify them based on learned features and optimized parameters. According to embodiments of the present invention, the architecture of the system 100 may be, but not limited to, a distributed system, a cloud-based system, an edge computing system, and so forth. Embodiments of the present invention are intended to include or otherwise cover any architecture that may be used by the system 100, including known, related art, and/or later developed technologies.
[0026] According to embodiments of the present invention, the location of installation of the system 100 may be, but not limited to, a warehouse, a cosmetic shop, an e-commerce facility, a conveyor, and so forth. Embodiments of the present invention are intended to include or otherwise cover any location for installation of the system 100, including known, related art, and/or later developed technologies.
[0027] According to an embodiment of the present invention, the system 100 may comprise an input unit 102, an imaging unit 104, a user device 106, a processing unit 108, an operative unit 110, and a power unit 112.
[0028] In an embodiment of the present invention, the input unit 102 may be adapted to enable a user to input the images of the cosmetic products from various sources, including the internet, camera devices, system storage, and other external sources. In an embodiment of the present invention, the cosmetic products may be, but not limited to, a compact powder, an eyeliner, a foundation, a kajal, a lipstick, and so forth. Embodiments of the present invention are intended to include or otherwise cover any cosmetic products, including known, related art, and/or later developed technologies.
[0029] The input unit 102 may be any device or interface capable of accepting the images of cosmetic products, such as a web browser, a camera application, a file upload interface, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the input unit 102 for the system 100, including known, related art, and/or later developed technologies.
[0030] The input unit 102 may further be in communication with a dedicated application or software module for receiving the images of the cosmetic products, in an embodiment of the present invention. Embodiments of the present invention are intended to include or otherwise cover any type for the input unit 102, including known, related art, and/or later developed technologies.
[0031] In an embodiment of the present invention, the imaging unit 104 may be arranged on the user device 106. The imaging unit 104 may be, but not limited to, a camera, a scanner, a depth sensor, and so forth, capable of capturing high-quality images and/or videos of the cosmetic products. By arranging the imaging unit 104 on the user device 106, the input unit 102 may receive the images of the cosmetic products without obstructing the user's interaction with the device. In an embodiment of the present invention, the imaging unit 104 may capture images and/or videos of the cosmetic products for further processing and analysis. In an embodiment of the present invention, the imaging unit 104 may be configured to record videos of a predefined duration. In an exemplary embodiment of the present invention, the predefined duration of the recorded video clips may be 2 seconds. In another exemplary embodiment of the present invention, the predefined duration of the recorded video clips may be 4 seconds. In yet another embodiment of the present invention, the video clips may be of any duration.
[0032] According to the other embodiments of the present invention, the imaging unit 104 may be, but not limited to, a still camera, a video camera, a color balancer camera, a thermal camera, an infrared camera, a telephoto camera, a wide-angle camera, a macro camera, a Closed-Circuit Television (CCTV) camera, a web camera, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the imaging unit 104, including known, related art, and/or later developed technologies. According to the other embodiments of the present invention, a resolution for the captured images and/or videos of the cosmetic products using the imaging unit 104 may be, but not limited to, 320 pixels by 240 pixels, 640 pixels by 480 pixels, 1024 pixels by 768 pixels, 1360 pixels by 768 pixels, 1920 pixels by 1080 pixels, and so forth. Embodiments of the present invention are intended to include or otherwise cover any resolution for the captured images and/or videos using the imaging unit 104, including known, related art, and/or later developed technologies.
[0033] In an embodiment of the present invention, the input unit 102 may be linked to the image capturing unit 104 for capturing the cosmetic product images using the user device 106. In an exemplary embodiment of the present invention, the system 100 may be capable of receiving the images of the cosmetic products from the user device 106, which may be a smartphone equipped with advanced imaging technology. The user device 106 may feature a powerful MediaTek Dimensity processor, ensuring efficient processing for image rendering and data handling.
[0034] In an embodiment of the present invention, the processing unit 108 may be connected to the input unit 102 through a wired or wireless connection to receive the images of the cosmetic products. In an embodiment of the present invention, the processing unit 108 may be configured to execute computer-executable instructions stored in the memory unit (not shown) to generate an output relating to the system 100. The processing unit 108 may employ various algorithms and techniques for feature extraction, classification, and optimization to analyze the received images and generate accurate classification results. In a preferred embodiment of the present invention, the processing unit 108 may employ a DenseNet architecture for feature extraction and classification of the received images of the cosmetic products. The DenseNet architecture facilitates efficient information flow and feature reuse through dense connectivity patterns between layers, leading to enhanced classification accuracy.
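The dense connectivity pattern described above can be illustrated with a minimal NumPy sketch. This is not the actual DenseNet implementation used by the system; it only shows how each layer receives the concatenation of all earlier feature maps and reuses them, with the layer count, growth rate, and random weights chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers=3, growth_rate=4):
    """Toy dense block: every layer sees all earlier outputs concatenated."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)        # reuse all earlier features
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.1
        out = np.maximum(inp @ w, 0.0)                 # linear layer + ReLU
        features.append(out)                           # output feeds all later layers
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((2, 8))   # batch of 2 samples, 8 input channels
y = dense_block(x)
# channels grow as 8 + 3 * growth_rate = 20
print(y.shape)
```

The key property is that channel count grows additively (input channels plus `num_layers * growth_rate`), which is what lets DenseNet reuse early features in later layers instead of relearning them.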
[0035] In a further preferred embodiment of the present invention, the processing unit 108 may further employ a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture. The Grey Wolf Algorithm dynamically adjusts the parameters, such as filter weights, biases, and learning rates, based on a fitness function evaluating classification accuracy on a training dataset. By iteratively updating the parameters of the DenseNet architecture, the Grey Wolf Algorithm may enhance a classification performance of the DenseNet architecture that may result in superior accuracy and efficiency in cosmetic product classification.
[0036] According to embodiments of the present invention, the memory unit may be, but not limited to, a Random-Access Memory (RAM), a Static Random-Access Memory (SRAM), a Dynamic Random-Access Memory (DRAM), a Read-Only Memory (ROM), an Erasable Programmable Read-only Memory (EPROM), an Electrically Erasable Programmable Read-only Memory (EEPROM), a NAND Flash, a Secure Digital (SD) memory, a cache memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the memory unit, including known, related art, and/or later developed technologies. According to embodiments of the present invention, the processing unit 108 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 108 including known, related art, and/or later developed technologies. In an embodiment of the present invention, components of the processing unit 108 may be explained in conjunction with FIG. 2.
[0037] In an embodiment of the present invention, the operative unit 110 may be configured to take action according to the classified images of the cosmetic products. In an embodiment of the present invention, the operative unit 110 may be configured to sort actual cosmetic products based on the classification results. In an embodiment of the present invention, the operative unit 110 may be configured to display the classified images of the cosmetic products for enabling manual sorting. According to embodiments of the present invention, the type of the operative unit 110 may be, but not limited to, a robotic arm, a conveyor belt system, a display interface, and so forth. Embodiments of the present invention are intended to include or otherwise cover any operative unit 110, including known, related art, and/or later developed technologies.
[0038] In an embodiment of the present invention, the power unit 112 may be connected to the processing unit 108. In an embodiment of the present invention, the power unit 112 may be adapted to provide an operational power to the processing unit 108. In an exemplary embodiment of the present invention, the power unit 112 may provide power from a battery. In another exemplary embodiment of the present invention, the power unit 112 may provide power from a wall-outlet power supply. In yet another exemplary embodiment of the present invention, the power unit 112 may supply power from any source.
[0039] In an embodiment of the present invention, the battery power supply may be from a rechargeable battery. In another embodiment of the present invention, the battery power supply may be from a non-rechargeable battery. According to embodiments of the present invention, the battery for power supply may be of any composition such as, but not limited to, a Nickel – Cadmium battery, a Nickel – Metal Hydride battery, a Zinc – Carbon battery, a Lithium-Ion battery, and so forth. Embodiments of the present invention are intended to include or otherwise cover any composition of the battery, including known, related art, and/or later developed technologies.
[0040] In an embodiment of the present invention, the wall-outlet power supply may be from a grid power line supply. In another embodiment of the present invention, the wall-outlet power supply may be from a generator line power supply. According to embodiments of the present invention, the wall-outlet power supply may be in the range of a 90-volt supply to a 320-volt supply. Embodiments of the present invention are intended to include or otherwise cover any rating of the wall-outlet power supply, including known, related art, and/or later developed technologies. According to an embodiment of the present invention, the power unit 112 may supply an Alternating Current (AC) power supply. According to another embodiment of the present invention, the power unit 112 may supply a Direct Current (DC) power supply. According to yet another embodiment of the present invention, the power unit 112 may supply any type of power supply.
[0041] FIG. 2 illustrates a block diagram of the processing unit 108 of the system 100 for cosmetic product detection and classification, according to an embodiment of the present invention. In an embodiment of the present invention, the processing unit 108 may comprise programming instructions in the form of programming modules. The programming modules may include, but are not limited to, an input receiving module 200, a classification module 202, and an operative module 204.
[0042] In an embodiment of the present invention, the input receiving module 200 may be configured to receive images of cosmetic products from the input unit 102. The input receiving module 200 may temporarily store the received images in an associated memory (not shown), in an embodiment of the present invention. In another embodiment of the present invention, the input receiving module 200 may be configured to permanently store the received images in the associated memory. Upon successfully receiving the images, the input receiving module 200 may be configured to pass a classification signal to the classification module 202.
[0043] In an embodiment of the present invention, the classification module 202 may be configured to be activated upon receipt of the classification signal from the input receiving module 200. The classification module 202 may be configured to classify the images by utilizing the DenseNet architecture for feature extraction and classification, and employing the Grey Wolf Algorithm to optimize parameters to improve the classification performance.
[0044] For instance, the classification module 202 may be configured to employ convolutional neural networks (CNNs) based on the DenseNet architecture to classify cosmetic products accurately. Initially, the classification module 202 may be configured to utilize preprocessed images from the dataset as input to the CNNs. These images may undergo multiple convolutional and pooling layers for allowing the CNNs to extract hierarchical features, such as shapes, textures, and patterns, from the cosmetic product images. Through the dense connectivity patterns characteristic of the DenseNet architecture, the features from earlier layers are efficiently reused in subsequent layers to enhance the network's ability to capture intricate details and distinguishing characteristics of the different cosmetic products.
[0045] Furthermore, the classification module 202 may be configured to iteratively adjust the parameters using the Grey Wolf Algorithm. By optimizing the parameters such as the filter weights, the biases, and the learning rates, the network continually refines its understanding of the dataset and improves its classification accuracy. The Grey Wolf Algorithm dynamically adapts these parameters based on a fitness function evaluating classification accuracy on the training dataset. This iterative optimization process ensures that the network learns to accurately discriminate between different cosmetic product categories, effectively enhancing both the accuracy and efficiency of the classification process.
[0046] In an embodiment of the present invention, the classification module 202 may further be configured to organize the preprocessed images into a structured dataset with appropriate labels, split the dataset into training and validation sets to facilitate model training and evaluation, and continuously train the DenseNet architecture to optimize the parameters and to enhance the classification performance.
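The dataset organization and train/validation split described above might be sketched as follows; the file names, five-class label scheme, and 80/20 split ratio are hypothetical, chosen only to show the shuffling and hold-out mechanics:

```python
import random

def split_dataset(samples, val_fraction=0.2, seed=7):
    """Shuffle labelled samples, then hold out a validation fraction."""
    items = list(samples)
    random.Random(seed).shuffle(items)     # deterministic shuffle for reproducibility
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]    # (training set, validation set)

# Hypothetical labelled dataset: (image_path, class_label) pairs across 5 classes.
dataset = [(f"img_{i:03d}.jpg", i % 5) for i in range(100)]
train, val = split_dataset(dataset)
print(len(train), len(val))
```

Keeping the validation set disjoint from the training set is what makes the accuracy and loss curves of FIG. 3B meaningful as an estimate of generalization.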
[0047] Upon successful classification of the images, the classification module 202 may be configured to transmit an operative signal to the operative module 204. In an embodiment of the present invention, the operative module 204 may be configured to be activated upon receipt of the operative signal from the classification module 202. The operative module 204 may be configured to take action based on the classified images of the cosmetic products, such as sorting actual cosmetic products or displaying the classified images for manual sorting.
[0048] FIG. 3A depicts a confusion matrix 300 representing the classification performance of the system for 16 samples of the cosmetic products, according to an embodiment of the present invention. Analyzing the confusion matrix 300 may allow identification of any patterns of misclassification or confusion between different cosmetic product categories. In an exemplary embodiment of the present invention, the 16 samples of the cosmetic products may be classified in five different classes: 'compact powder', 'eyeliner', 'foundation', 'kajal', and 'lipstick'.
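A confusion matrix like the one in FIG. 3A can be tallied as below. The 16 true labels and predictions here are hypothetical, invented only to show how rows (true classes) and columns (predicted classes) are counted; they are not the figures from the specification:

```python
import numpy as np

classes = ['compact powder', 'eyeliner', 'foundation', 'kajal', 'lipstick']

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical results for 16 samples across the five classes.
y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4]
y_pred = [0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4]

cm = confusion_matrix(y_true, y_pred, len(classes))
accuracy = np.trace(cm) / cm.sum()     # correct predictions lie on the diagonal
print(cm.sum(), round(accuracy, 3))
```

Off-diagonal cells directly expose which pairs of categories the model confuses, e.g. a nonzero `cm[0, 1]` above means a 'compact powder' sample was predicted as 'eyeliner'.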
[0049] The images of the cosmetic products may serve as a foundation for building and training the cosmetic classification model. The dataset may be organized into structured categories and labels for ensuring accurate identification and categorization of the cosmetic products. Moreover, the system 100 may incorporate the DenseNet architecture for feature extraction and classification. The DenseNet architecture may facilitate efficient learning of hierarchical features from the cosmetic images through the dense connectivity patterns between the layers. Additionally, to optimize parameters and enhance classification performance, the system may integrate the Grey Wolf Algorithm. This optimization technique may dynamically adjust the parameters based on the fitness function evaluating classification accuracy on a training dataset. In the Grey Wolf Algorithm, various formulas may be utilized to update positions of the alpha, beta, delta, and omega wolves. For instance, to update the position of the alpha wolf, the formula Dα = |C·Xα − X| is used, where X represents the current position of the wolf, Xα represents the position of the alpha wolf, and C is a coefficient vector derived from a random vector in the range of [0, 1]. Similarly, the formulas Dβ = |C·Xβ − X| and Dδ = |C·Xδ − X| are employed to update the positions of the beta and delta wolves, respectively. Additionally, formulas of the form Di = |C·Xi − X| are utilized to update the positions of the omega wolves, ensuring an efficient exploration and exploitation process within the search space.
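The position-update formulas above can be exercised with a small, self-contained sketch in which the Grey Wolf Algorithm tunes a single learning-rate parameter. The fitness surface, wolf count, iteration count, and search range here are illustrative assumptions standing in for "classification accuracy on the training dataset", not values from the specification:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(lr):
    # Hypothetical stand-in for validation accuracy, peaking near lr = 0.01.
    return -(np.log10(lr) + 2.0) ** 2

def grey_wolf_optimize(n_wolves=8, n_iters=40, lo=1e-4, hi=1e-1):
    # Wolf positions live in log10(learning-rate) space.
    X = rng.uniform(np.log10(lo), np.log10(hi), n_wolves)
    for t in range(n_iters):
        a = 2.0 * (1 - t / n_iters)                  # a decays linearly from 2 to 0
        order = np.argsort([-fitness(10 ** x) for x in X])
        alpha, beta, delta = X[order[:3]]            # three fittest wolves lead
        for i in range(n_wolves):
            new = 0.0
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(2)
                A, C = 2 * a * r1 - a, 2 * r2        # coefficient vectors
                D = abs(C * leader - X[i])           # D = |C·X_leader − X|
                new += leader - A * D                # step toward the leader
            X[i] = np.clip(new / 3.0, np.log10(lo), np.log10(hi))
    best = X[np.argmax([fitness(10 ** x) for x in X])]
    return 10 ** best

lr = grey_wolf_optimize()
print(lr)
```

As `a` shrinks, the magnitude of `A` drops below 1 and the pack shifts from exploring the range to exploiting the neighbourhood of the alpha wolf, which is the exploration/exploitation balance the paragraph above refers to.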
[0050] FIG. 3B illustrates graphs for training and validation accuracy and loss of the system for cosmetic product detection and classification.
[0051] FIG. 4 depicts a flowchart of method 400 for cosmetic product detection and classification, according to an embodiment of the present invention.
[0052] At step 402, the system 100 receives the images of cosmetic products for classification from the input unit 102.
[0053] At step 404, the system 100 may utilize the DenseNet architecture for feature extraction and classification of the received images of the cosmetic products.
[0054] At step 406, the system 100 may employ the Grey Wolf Algorithm to optimize the parameters of the DenseNet architecture, wherein the parameters are selected from the filter weights, biases, learning rates, or a combination thereof.
[0055] At step 408, the system 100 may generate the classification results indicating the category or the type of the cosmetic products based on the learned features and the optimized parameters.
[0056] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0057] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims:CLAIMS
I/We Claim:
1. A system (100) for cosmetic products detection and classification, comprising:
an input unit (102) adapted to enable a user to input images of the cosmetic products using a user device (106);
a processing unit (108) connected to the input unit (102), configured to:
receive the images of the cosmetic products for classification from the input unit (102);
utilize a DenseNet architecture for feature extraction and classification of the received images of the cosmetic product using a dataset;
employ a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture, wherein the parameters are selected from filter weights, biases, learning rates, or a combination thereof; and
generate classification results indicating a category or a type of the cosmetic products based on the learned features and the optimized parameters.
2. The system (100) as claimed in claim 1, wherein the input unit (102) is linked to an image capturing unit (104) for capturing the cosmetic product images using a user device (106).
3. The system (100) as claimed in claim 1, wherein the DenseNet architecture facilitates feature extraction and classification of the images of the cosmetic products through dense connectivity patterns between layers.
4. The system (100) as claimed in claim 1, wherein the Grey Wolf Algorithm dynamically adjusts the parameters to improve a classification performance based on a fitness function evaluating classification accuracy on a training dataset.
5. The system (100) as claimed in claim 1, wherein the category or the type of the cosmetic products is decided based on the learned features and the optimized parameters to handle variations in the cosmetic products based on a packaging, a color, a texture, a brand, or a combination thereof.
6. The system (100) as claimed in claim 1, wherein the processing unit (108) is configured to pre-process the received images and organize the preprocessed images into a structured dataset with appropriate labels.
7. The system (100) as claimed in claim 1, wherein the processing unit (108) is configured to split the dataset into training and validation sets to facilitate a model training and evaluation.
8. The system (100) as claimed in claim 1, wherein the processing unit (108) is configured to continuously train the DenseNet architecture to optimize parameters and enhance classification performance.
9. The system (100) as claimed in claim 1, wherein the cosmetic products are selected from a compact powder, eyeliner, foundation, kajal, lipstick, or a combination thereof.
10. A method (400) for cosmetic products detection and classification using a system (100), comprising:
receiving images of the cosmetic products for classification from an input unit (102);
utilizing, by a processing unit (108), a DenseNet architecture for feature extraction and classification of the received images of the cosmetic products;
employing a Grey Wolf Algorithm to optimize parameters of the DenseNet architecture, wherein the parameters are selected from filter weights, biases, learning rates, or a combination thereof; and
generating classification results indicating a category or a type of the cosmetic products based on the learned features and the optimized parameters.
Date: May 20, 2024
Place: Noida

Dr. Keerti Gupta
Agent for the Applicant
(IN/PA-1529)

Documents

Application Documents

# Name Date
1 202441040415-STATEMENT OF UNDERTAKING (FORM 3) [24-05-2024(online)].pdf 2024-05-24
2 202441040415-REQUEST FOR EARLY PUBLICATION(FORM-9) [24-05-2024(online)].pdf 2024-05-24
3 202441040415-POWER OF AUTHORITY [24-05-2024(online)].pdf 2024-05-24
4 202441040415-OTHERS [24-05-2024(online)].pdf 2024-05-24
5 202441040415-FORM FOR SMALL ENTITY(FORM-28) [24-05-2024(online)].pdf 2024-05-24
6 202441040415-FORM 1 [24-05-2024(online)].pdf 2024-05-24
7 202441040415-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-05-2024(online)].pdf 2024-05-24
8 202441040415-EDUCATIONAL INSTITUTION(S) [24-05-2024(online)].pdf 2024-05-24
9 202441040415-DRAWINGS [24-05-2024(online)].pdf 2024-05-24
10 202441040415-DECLARATION OF INVENTORSHIP (FORM 5) [24-05-2024(online)].pdf 2024-05-24
11 202441040415-COMPLETE SPECIFICATION [24-05-2024(online)].pdf 2024-05-24
12 202441040415-FORM-9 [27-05-2024(online)].pdf 2024-05-27
13 202441040415-FORM-26 [11-07-2024(online)].pdf 2024-07-11