Abstract: Disclosed are a system (100) and method for detecting and classifying a pashmina shawl. The system (100) includes a memory (108) and a processor (110). The memory (108) stores a plurality of instructions pertaining to the detection and classification of the pashmina shawl. The processor (110) executes the plurality of instructions. The processor (110) is configured to: capture one or more images of a pashmina shawl; receive and upload the one or more images of the pashmina shawl to a data storage (102); process the one or more images of the pashmina shawl; and classify the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated. The most illustrative drawing: FIGURE 3
The present invention generally relates to a system and method for detecting and classifying pashmina shawls. More particularly, the present invention relates to an artificial intelligence-based system and method for the identification and classification of different types of embroidery in Kashmiri pashmina shawls.
BACKGROUND OF INVENTION
[0002] The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.
[0003] The exquisite craftsmanship in the design and embroidery of Pashmina (Cashmere) shawls gives them a unique antique look quite distinct from other artworks. Products made of Pashmina are the ubiquitous choice of most elites. For years, there has been considerable buzz around “Pashmina” because many buyers have become unknowing victims of unscrupulous counterfeiting and fake Pashminas in the market. The general population lacks the expertise to recognize the intricacies of Kashmiri design and embroidery. Various studies on apparel classification in general are present in the literature, but none could be found on the identification or classification of artworks on specific apparel. Pashmina shawls are categorized based on designs and embroidery patterns.
[0004] Currently, several methods and solutions exist in the market to detect fake Pashminas. For example, non-patent literature authored by Muhammad Ather Iqbal Hussain et al. describes a deep learning model based on data augmentation and a transfer learning approach for the classification and recognition of woven fabrics. The model uses a residual network (ResNet), in which fabric texture features are extracted and classified automatically in an end-to-end fashion.
[0005] Further, non-patent literature published by Aqsa Rasheed et al. describes various computer vision-based approaches with applications in the textile industry for detecting fabric defects.
[0006] Furthermore, non-patent literature published by Rajiv Kumar et al. describes extracting quality DNA from textile materials; a PCR-based technique using mitochondrial gene (12S rRNA) specific primers was developed for the detection of Pashmina in textile blends.
[0007] Non-patent literature authored by Jun-Feng Jing et al. describes a detection method for automatic fabric defect detection using a deep convolutional neural network (CNN). It consists of three main steps. First, the fabric image is decomposed into local patches and each local patch is labeled. Then the labeled patches are passed to the pre-trained deep CNN for transfer learning. Finally, defects are detected during the inspection phase by sliding the trained model over the whole image, and the category and position of each defect are obtained.
[0008] Further, non-patent literature published by Wenbin Ouyang, Bugao Xu et al. describes a deep-learning algorithm for an on-loom fabric defect inspection system that combines image pre-processing, fabric motif determination, candidate defect map generation, and convolutional neural networks (CNNs).
[0009] Thus, the existing prior art and solutions focus either on fabric defect detection during the production process, to minimize damage to the produced material due to loom malfunctions and to control the quality of the produced material, or on chemistry-based molecular methods that detect the purity of pashmina fabrics using DNA sequencing. The above prior art does not consider the use of artificial intelligence and computer vision for the identification of various designs and embroidery work on pashmina fabric.
[0010] This specification recognizes that there is a need for a system that can recognize the artworks in the design and embroidery of pashmina shawls, which would be beneficial in digital asset management applications for automatic asset tagging and in fashion curation for identifying particular types of pashmina shawls and finding visually similar items.
[0011] Thus, in view of the above, there is a long-felt need in the textile industry to address the aforementioned deficiencies and inadequacies.
[0012] Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art through comparison of described systems with some aspects of the present disclosure, as set forth in the remainder of the present application and with reference to the drawings.
[0013] In some embodiments, the numbers expressing quantities or dimensions of items, and so forth, used to describe and claim certain embodiments of the invention are to be understood as being modified in some instances by the term “about.” Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[0014] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context dictates otherwise.
[0015] The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g. “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[0016] Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all groups used in the appended claims.
SUMMARY OF THE INVENTION
[0017] A system and method for detecting and classifying a pashmina shawl are provided substantially as shown in and/or described in connection with at least one of the figures.
[0018] An aspect of the present disclosure relates to a system for detecting and classifying a pashmina shawl. The system includes a memory and a processor. The memory stores a plurality of instructions pertaining to the detection and classification of the pashmina shawl. The processor executes the plurality of instructions. The processor is configured to: capture one or more images of a pashmina shawl; receive and upload the one or more images of the pashmina shawl to a data storage; process the one or more images of the pashmina shawl; and classify the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated.
[0019] In an aspect, the processor is configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl.
[0020] In an aspect, the trained deep learning algorithm is developed using transfer learning over a residual neural network architecture.
[0021] In an aspect, the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl.
[0022] In an aspect, the extracted features are classified using a multi-layer perceptron.
[0023] Another aspect of the present disclosure relates to a method for detecting and classifying a pashmina shawl. The method includes a step of capturing, by one or more processors, one or more images of a pashmina shawl. The method includes a step of receiving and uploading, by the one or more processors, the one or more images of the pashmina shawl to a data storage. The method includes a step of processing, by the one or more processors, the one or more images of the pashmina shawl. The method includes a step of classifying, by the one or more processors, the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated.
[0024] In an aspect, the one or more processors are configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl.
[0025] In an aspect, the trained deep learning algorithm is developed using transfer learning over a residual neural network architecture.
[0026] In an aspect, the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl.
[0027] In an aspect, the extracted features are classified using a multi-layer perceptron.
[0028] Accordingly, one advantage of the present invention is that it provides an expert system that can be designed to automatically identify and classify different Pashmina shawls.
[0029] Accordingly, another advantage of the present invention is that it provides a cost-effective and easy-to-use system to identify and classify pashmina shawls without expert intervention.
[0030] These features and advantages of the present disclosure may be appreciated by reviewing the following description of the present disclosure, along with the accompanying figures wherein like reference numerals refer to like parts.
BRIEF DESCRIPTION OF DRAWINGS
[0031] The accompanying drawings illustrate the embodiments of systems, methods, and other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent an example of the boundaries. In some examples, one element may be designed as multiple elements, or multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another and vice versa. Furthermore, the elements may not be drawn to scale.
[0032] Various embodiments will hereinafter be described in accordance with the appended drawings, which are provided to illustrate, not limit, the scope, wherein similar designations denote similar elements, and in which:
[0033] FIG. 1 illustrates a network implementation of a system for detecting and classifying the pashmina shawl, in accordance with an embodiment of the present subject matter.
[0034] FIG. 2 illustrates a perspective view of various classes of the pashmina shawl, in accordance with an embodiment of the present subject matter.
[0035] FIG. 3 illustrates an operation block diagram of the present system for detecting and classifying the pashmina shawl, in accordance with an embodiment of the present subject matter.
[0036] FIG. 4 illustrates a block diagram of a classification module architecture, in accordance with an embodiment of the present subject matter.
[0037] FIG. 5 illustrates a flow diagram of a method for detecting and classifying the pashmina shawl, in accordance with at least one embodiment.
DETAILED DESCRIPTION
[0038] The present disclosure is best understood with reference to the detailed figures and description set forth herein. Various embodiments have been discussed with reference to the figures. However, those skilled in the art will readily appreciate that the detailed descriptions provided herein with respect to the figures are merely for explanatory purposes, as the methods and systems may extend beyond the described embodiments. For instance, the teachings presented and the needs of a particular application may yield multiple alternative and suitable approaches to implement the functionality of any detail described herein. Therefore, any approach may extend beyond certain implementation choices in the following embodiments.
[0039] References to “one embodiment,” “at least one embodiment,” “an embodiment,” “one example,” “an example,” “for example,” and so on indicate that the embodiment(s) or example(s) may include a particular feature, structure, characteristic, property, element, or limitation but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element, or limitation. Further, repeated use of the phrase “in an embodiment” does not necessarily refer to the same embodiment.
[0040] Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks. The term “method” refers to manners, means, techniques, and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques, and procedures either known to or readily developed from known manners, means, techniques, and procedures by practitioners of the art to which the invention belongs. The descriptions, examples, methods, and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only. Those skilled in the art will envision many other possible variations within the scope of the technology described herein.
[0041] The present disclosure provides an artificial intelligence-based computer vision system for identification and classification of Kashmiri pashmina shawls on the basis of designs and embroidery woven on them. For the approach disclosed herein, the present system uses a deep learning model which has been trained to recognize various types of pashmina shawls. The system receives the image of a pashmina shawl and applies a trained deep learning algorithm on received input for appropriate classification by providing the probability of a specific class to which the input image is associated. The present invention provides an automatic identification and classification mechanism which would be helpful in the Pashmina fiber industry.
[0042] FIG. 1 illustrates a network implementation of system 100 for detecting and classifying the pashmina shawl, in accordance with an embodiment of the present subject matter. The system 100 includes a memory 108 and a processor 110. The memory 108 stores a plurality of instructions pertaining to the detection and classification of the pashmina shawl. The processor 110 executes the plurality of instructions.
[0043] The memory 108 and processor 110 are connected with a server and the data storage 102 and one or more computing devices 104 over a network 106. Network 106 can be implemented as one of the different types of networks, such as a local area network (LAN), a wide area network (WAN), and the like. Network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0044] Memory 108 may be a non-volatile memory or a volatile memory. Examples of non-volatile memory may include, but are not limited to, flash memory, a Read Only Memory (ROM), a Programmable ROM (PROM), Erasable PROM (EPROM), and Electrically EPROM (EEPROM) memory. Examples of volatile memory may include, but are not limited to, Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM). The processor 110 may include at least one data processor for executing program components for executing user- or system-generated requests. Processor 110 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating-point units, graphics processing units, digital signal processing units, etc. Processor 110 may include a microprocessor, such as an AMD® ATHLON® microprocessor, DURON® microprocessor or OPTERON® microprocessor, ARM's application, embedded or secure processors, IBM® POWERPC®, Intel's CORE® processor, ITANIUM® processor, XEON® processor, CELERON® processor or other line of processors, etc. Processor 110 may be implemented using a mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application-specific integrated circuits (ASICs), digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), etc. In an embodiment, the processor 110 is based on a Qualcomm® Technologies chipset with at least 3 GB of RAM and 32 GB of storage.
[0045] Processor 110 may be disposed in communication with one or more input/output (I/O) devices via an I/O interface. The I/O interface may employ communication protocols/methods such as, without limitation, audio, analog, digital, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, or the like), etc.
[0046] The computing devices 104 enable the users to access the various features of the present invention and submit the images of the pashmina shawl. In an embodiment, the computing device 104 includes one or more computing devices 104-1, 104-2, 104-3, and 104-N, hereinafter referred to as 104. Examples of computing devices 104 include, but are not limited to, a smartphone, a laptop, a computer, and a tablet.
[0047] In an embodiment, the one or more computing devices 104 comprises a user interface that is integrated with a communication application and a web page. In an embodiment, the communication application is a mobile application that is executable on the computing device 104 and implemented on one or more operating systems such as Android®, iOS®, Windows®, etc.
[0048] The processor 110 is configured to capture one or more images of a pashmina shawl. The processor 110 is configured to receive and upload the one or more images of the pashmina shawl to a data storage 102. The processor 110 is configured to process the one or more images of the pashmina shawl. The processor 110 is configured to classify the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated. FIG. 2 illustrates a perspective view of various classes of the pashmina shawl, in accordance with an embodiment of the present subject matter. Examples of the classes of the pashmina shawl include, but are not limited to, embroidered 202, solid 204, patterned 206, reversible 208, ombre 210, and printed 212.
[0049] In an embodiment, the processor 110 is configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl. In an embodiment, the trained deep learning algorithm is developed using transfer learning over a residual neural network architecture. In an embodiment, the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl. In an embodiment, the extracted features are classified using a multi-layer perceptron.
[0050] FIG. 3 illustrates an operation block diagram 300 of the present system for detecting and classifying the pashmina shawl, in accordance with an embodiment of the present subject matter. FIG. 3 is explained in conjunction with FIG. 2. In an embodiment, the system includes an image capturing module 302, a display screen or user interface 304 integrated with the computing device, an input interface module 306, a classification module 308, and an output interface module 312. The image capturing module 302 captures one or more images of a pashmina shawl 301. In an embodiment, the image capturing module 302 is a video camera or a camera integrated with the computing device. The captured images of the pashmina shawl 301 will be displayed on the display screen 304. The input interface module 306 receives the images of the pashmina shawl captured by the image capturing module 302, wherein the input interface module 306 uploads the images of the pashmina shawl to a data storage that includes a pre-processing module to process the one or more images of the pashmina shawl.
[0051] The classification module 308 classifies the images of the pashmina shawl processed by the pre-processing module to obtain one or more outputs. In an embodiment, the classification module 308 comprises a convolution neural network 310 or a trained deep learning algorithm for the classification of the images of the pashmina shawl and prediction of a Pashmina type 311.
[0052] In an embodiment, the outputs from the classification module 308 are indicative of a probability of a specific class with which the images of the pashmina shawl are associated. The output interface module 312 is configured to present the pashmina type classified and predicted by the classification module 308. Examples of the type of the pashmina include but are not limited to embroidered, solid, patterned, reversible, ombre, and printed.
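For illustration, the mapping from the classification module's class probabilities to the pashmina type presented by the output interface module can be sketched as follows. The class names are the six categories named in this disclosure; the helper name `predict_type` and the example probability vector are hypothetical.

```python
# Minimal sketch: map a probability vector from the classification
# module to one of the six pashmina types named in this disclosure.
# The helper name `predict_type` is hypothetical, for illustration only.

PASHMINA_TYPES = ["embroidered", "solid", "patterned",
                  "reversible", "ombre", "printed"]

def predict_type(probabilities):
    """Return (type, probability) for the highest-scoring class."""
    if len(probabilities) != len(PASHMINA_TYPES):
        raise ValueError("expected one probability per class")
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return PASHMINA_TYPES[best], probabilities[best]

# Example: a softmax output that favours the "patterned" class.
label, p = predict_type([0.05, 0.10, 0.60, 0.10, 0.05, 0.10])
```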
[0053] FIG. 4 illustrates a block diagram 400 of a classification module architecture, in accordance with an embodiment of the present subject matter. FIG. 4 is explained in conjunction with FIG. 3. In an embodiment, the trained deep learning algorithm is developed using transfer learning over a residual neural network architecture (ResNet50). The residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl. The extracted features are classified using a multi-layer perceptron. Convolutional layers of the feature extractor perform the necessary filtering to build a feature map, which is then passed through a ReLU activation to handle the non-linearity of the features. A pooling layer is used after convolution to reduce dimensionality and prevent overfitting. The feature matrices retrieved from the ResNet are then flattened into arrays, which are used by the classifier as input. The classifier part of the model consists of a global average pooling layer, two dense layers with ReLU activations, and an output layer with softmax activation.
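A minimal NumPy sketch of the classifier head described above (global average pooling over a ResNet-style feature map, two ReLU dense layers, and a softmax output over six classes). The layer widths and random weights here are illustrative assumptions, not the trained parameters of the disclosed system:

```python
import numpy as np

# Sketch of the classifier head: global average pooling over a
# ResNet feature map, two ReLU dense layers, and a softmax output.
# Layer sizes and the random weights are illustrative assumptions.

rng = np.random.default_rng(0)

def global_average_pool(feature_map):
    # feature_map: (H, W, C) -> (C,) by averaging over the spatial axes.
    return feature_map.mean(axis=(0, 1))

def dense(x, w, b, relu=True):
    y = x @ w + b
    return np.maximum(y, 0.0) if relu else y

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# A 7x7x2048 feature map, the shape ResNet50 typically produces
# for a 224x224 input image.
features = rng.standard_normal((7, 7, 2048))

w1, b1 = rng.standard_normal((2048, 256)) * 0.01, np.zeros(256)
w2, b2 = rng.standard_normal((256, 64)) * 0.01, np.zeros(64)
w3, b3 = rng.standard_normal((64, 6)) * 0.01, np.zeros(6)  # six classes

x = global_average_pool(features)
x = dense(x, w1, b1)
x = dense(x, w2, b2)
probs = softmax(dense(x, w3, b3, relu=False))  # class probabilities
```

The softmax output sums to one, matching the probability-per-class outputs described for the classification module.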
[0054] As used herein, and unless the context dictates otherwise, the term “configured to” or “coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “configured to”, “configured with”, “coupled to” and “coupled with” are used synonymously. Within the context of this document terms “configured to”, “coupled to” and “coupled with” are also used euphemistically to mean “communicatively coupled with” over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary device.
[0055] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.
[0056] FIG. 5 illustrates a flow diagram of a method for detecting and classifying the pashmina shawl, in accordance with at least one embodiment. The method includes a step 502 of capturing, by one or more processors, one or more images of a pashmina shawl. The method includes a step 504 of receiving and uploading, by the one or more processors, the one or more images of the pashmina shawl to a data storage. The method includes a step 506 of processing, by the one or more processors, the one or more images of the pashmina shawl. The method includes a step 508 of classifying, by the one or more processors, the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated. In an embodiment, the one or more processors are configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl. In an embodiment, the trained deep learning algorithm is developed using transfer learning over a residual neural network architecture. In an embodiment, the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl. In an embodiment, the extracted features are classified using a multi-layer perceptron.
[0057] In implementation, the present invention utilizes visual analysis of clothing, particularly an artificial intelligence-based computer vision system for identification and classification of pashmina shawls. The system distinguishes different types of pashmina shawls based on design and embroidery work. The system takes an image of a pashmina shawl as input and categorizes it into one of the six types of pashmina shawls viz. solid, reversible, patterned, ombre, printed and embroidered.
[0058] As shown in FIG. 3, the image capturing module takes the image of a pashmina shawl and uploads it to the system via the input interface module. The data storage component consists of a preprocessing module and a classification module. The data preprocessing module performs the necessary transformations on the input data (image) and passes it to the classifier module, where a trained deep learning algorithm is used for the image classification task. The results from the classification module indicate the probability of a specific class with which the input image is associated. The algorithm in the classification module is developed using transfer learning over a deep convolutional neural network architecture.
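The preprocessing step above can be sketched as a standard image-normalization transform. The specification does not state the exact transform used by the pre-processing module, so the scaling to [0, 1] and the ImageNet channel statistics below are assumptions, shown only because they are the usual preparation for a ResNet-based classifier:

```python
import numpy as np

# Illustrative sketch of a pre-processing module: scale pixel values
# to [0, 1] and standardise with ImageNet channel statistics, a common
# transform before a ResNet-based classifier. These constants are
# assumptions; the disclosed module's exact transform is unspecified.

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image_uint8):
    """image_uint8: (H, W, 3) array with values in 0..255."""
    x = image_uint8.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

# A white 224x224 test image, then a batch axis for the classifier.
img = np.full((224, 224, 3), 255, dtype=np.uint8)
batch = preprocess(img)[np.newaxis, ...]
```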
[0059] A residual neural network architecture is used to extract the necessary features from pashmina shawl images. The extracted features are then classified using a multi-layer perceptron. The architecture of the classification module is shown in FIG. 4. Convolutional layers of the feature extractor perform the necessary filtering to build a feature map, which is then passed through a ReLU activation to handle the non-linearity of the features. A pooling layer is used after convolution to reduce dimensionality and prevent overfitting. The feature matrices retrieved from the ResNet are then flattened into arrays, which are used by the classifier as input. The classifier part of the model consists of a global average pooling layer, two dense layers with ReLU activations, and an output layer with softmax activation.
[0060] In an experimental setup, a dataset of 1585 images was collected, spanning six categories of Kashmiri pashmina shawls, for training and testing of the proposed system; some samples of the data are shown in FIG. 2. The present system was tested on 317 images of pashmina shawls, and the results showed that the proposed system can classify pashmina shawls with a classification accuracy of 93.24%.
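The reported figures imply a conventional 80/20 train-test split: 317 of the 1585 images held out for testing. A small sketch of that split arithmetic and of how classification accuracy is computed (the toy labels below are hypothetical, not the reported experiment):

```python
# Split arithmetic from the reported experiment: 317 test images out
# of 1585 total corresponds to an 80/20 train-test split.
TOTAL_IMAGES = 1585
TEST_IMAGES = 317

test_fraction = TEST_IMAGES / TOTAL_IMAGES  # 0.2

def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy example with hypothetical labels: 3 of 4 predictions correct.
acc = accuracy(["solid", "ombre", "printed", "solid"],
               ["solid", "ombre", "printed", "ombre"])
```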
[0061] Thus, the present artificial intelligence (AI) based system and method can be used by individuals to identify different types of pashmina shawls as they appear in social media applications or in fashion modeling contexts. Further, the present system and method can be used in digital asset management applications for automatic asset tagging. Furthermore, the present system is helpful in fashion curation to identify particular types of pashmina shawls and to find visually similar items (e.g., for recommendation).
[0062] No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
[0063] Unless otherwise defined, all terms (including technical and scientific terms) used in this disclosure have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It is to be understood that the phrases or terms employed of the present invention are for the purpose of description and not of limitation. As will be appreciated by one of skill in the art, the present disclosure may be embodied as a device, system, method, or computer program product. Further, the present invention may take the form of a computer program product on a computer-readable storage medium having computer-usable program code embodied in the medium.
[0064] The present systems and methods have been described above with reference to specific examples. However, embodiments and examples other than those described above are equally possible within the scope of the present invention. The scope of the disclosure may only be limited by the appended patent claims. Even though modifications and changes may be suggested by persons skilled in the art, it is the intention of the inventors and applicants to embody within the patent warranted hereon all changes and modifications that reasonably and properly come within the scope of the contribution of the inventors and applicants to the art. The scope of the embodiments of the present invention is ascertained with the claims to be submitted at the time of filing the complete specification.
WE CLAIM:
1. A system (100) for detecting and classifying a pashmina shawl, comprising:
a memory (108) to store a plurality of instructions pertaining to detection and classification of the pashmina shawl; and
a processor (110) to execute the plurality of instructions, wherein the processor (110) is configured to:
capture one or more images of a pashmina shawl;
receive and upload the one or more images of the pashmina shawl to a data storage (102);
process the one or more images of the pashmina shawl; and
classify the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated.
2. The system (100) as claimed in claim 1, wherein the processor (110) is configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl.
3. The system (100) as claimed in claim 2, wherein the trained deep learning algorithm is developed using a transfer learning over a residual neural network architecture.
4. The system (100) as claimed in claim 3, wherein the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl.
5. The system (100) as claimed in claim 4, wherein the extracted features are classified using a multi-layer perceptron.
6. A method for detecting and classifying a pashmina shawl, comprising:
capturing, by one or more processors (110), one or more images of a pashmina shawl;
receiving and uploading, by the one or more processors (110), the one or more images of the pashmina shawl to a data storage;
processing, by the one or more processors (110), the one or more images of the pashmina shawl; and
classifying, by the one or more processors (110), the one or more images of the pashmina shawl to obtain one or more outputs, wherein the outputs are indicative of a probability of a specific class with which the one or more images of the pashmina shawl are associated.
7. The method as claimed in claim 6, wherein the one or more processors (110) are configured to utilize a trained deep learning algorithm for the classification of the one or more images of the pashmina shawl.
8. The method as claimed in claim 7, wherein the trained deep learning algorithm is developed using a transfer learning over a residual neural network architecture.
9. The method as claimed in claim 8, wherein the residual neural network architecture functions as a feature extractor to extract one or more features from the images of the pashmina shawl.
10. The method as claimed in claim 9, wherein the extracted features are classified using a multi-layer perceptron.
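The pipeline recited in claims 3-5 and 8-10 — a pre-trained residual-network backbone used as a frozen feature extractor, followed by a multi-layer perceptron head that yields per-class probabilities — can be sketched schematically. The minimal sketch below is illustrative only and does not form part of the claimed subject matter: a fixed random projection with a ReLU stands in for the residual network's feature extractor, and a single-layer perceptron with a softmax output stands in for the trained classification head; all dimensions, weights, and names are hypothetical.

```python
import math
import random

random.seed(0)

IMG_DIM = 4      # flattened image size (toy stand-in for real pixels)
FEAT_DIM = 8     # feature dimension produced by the extractor
NUM_CLASSES = 3  # hypothetical number of embroidery classes

# Frozen "feature extractor": a fixed random projection standing in for the
# pre-trained residual network of claims 3-4; its weights are never updated.
W_feat = [[random.gauss(0.0, 1.0) for _ in range(IMG_DIM)]
          for _ in range(FEAT_DIM)]

def extract_features(image):
    """Project a flattened image onto the frozen feature space (with ReLU)."""
    return [max(0.0, sum(w * x for w, x in zip(row, image)))
            for row in W_feat]

# Multi-layer perceptron head (claim 5): here a single linear layer + softmax;
# in transfer learning, only this head would be trained on the shawl dataset.
W_head = [[random.gauss(0.0, 0.1) for _ in range(FEAT_DIM)]
          for _ in range(NUM_CLASSES)]
b_head = [0.0] * NUM_CLASSES

def softmax(z):
    """Convert raw scores into a probability distribution over classes."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def classify(image):
    """Return per-class probabilities for one image, as in claims 1 and 6."""
    feats = extract_features(image)
    logits = [sum(w * f for w, f in zip(row, feats)) + b
              for row, b in zip(W_head, b_head)]
    return softmax(logits)

probs = classify([0.2, 0.9, 0.1, 0.5])
print(probs)  # one probability per class, summing to 1
```

The softmax output is what makes the one or more outputs "indicative of a probability of a specific class": the largest entry identifies the predicted embroidery class, and its magnitude serves as the associated confidence.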
| # | Name | Date |
|---|---|---|
| 1 | 202211018901-FORM-9 [30-03-2022(online)].pdf | 2022-03-30 |
| 2 | 202211018901-Proof of Right [22-04-2022(online)].pdf | 2022-04-22 |
| 3 | 202211018901-FORM-26 [11-04-2022(online)].pdf | 2022-04-11 |
| 4 | 202211018901-FORM FOR SMALL ENTITY(FORM-28) [30-03-2022(online)].pdf | 2022-03-30 |
| 5 | 202211018901-FORM FOR SMALL ENTITY [30-03-2022(online)].pdf | 2022-03-30 |
| 6 | 202211018901-COMPLETE SPECIFICATION [30-03-2022(online)].pdf | 2022-03-30 |
| 7 | 202211018901-FORM 3 [30-03-2022(online)].pdf | 2022-03-30 |
| 8 | 202211018901-DRAWINGS [30-03-2022(online)].pdf | 2022-03-30 |
| 9 | 202211018901-EDUCATIONAL INSTITUTION(S) [30-03-2022(online)].pdf | 2022-03-30 |
| 10 | 202211018901-FORM 1 [30-03-2022(online)].pdf | 2022-03-30 |
| 11 | 202211018901-ENDORSEMENT BY INVENTORS [30-03-2022(online)].pdf | 2022-03-30 |
| 12 | 202211018901-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-03-2022(online)].pdf | 2022-03-30 |