Abstract: A Method for Detection and Classification of Objects of An Artwork Present On A Packaging
The present invention discloses a system (100) and a method (600) for detecting and classifying objects in packaging artworks. The system (100) includes a user device (102) monitored by a user (104) and an object detection and classification unit (106). The object detection and classification unit (106) is configured to receive an image of a packaging artwork from the user device (102) and to detect a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes. The object detection and classification unit (106) is further configured to generate a confidence score and a bounding box for each of the detected plurality of objects. The object detection and classification unit (106) is configured to classify the detected plurality of objects based on the confidence score and the bounding box generated for each of the detected plurality of objects. FIG. 1
A Method for Detection and Classification of Objects of An Artwork Present On A Packaging
FIELD OF INVENTION
The present invention relates to detection of objects in an artwork. More specifically, the present invention relates to a method and a system for detecting and classifying objects in packaging artworks.
BACKGROUND OF INVENTION
In dynamic manufacturing industries, packaging forms the first impression of a product and a brand. Therefore, investments in packaging are increasing steadily in various industries, especially pharmaceutical, food and beverages, etc. The sale of any packaged good launched in the market by any industry depends on the content about the packaged good given on the packaging. The content present on the packaging is referred to as an artwork related to the packaged good, wherein the artwork includes images, a brand name, text, composition, a nutritional value table, etc. related to the packaged good.
However, any discrepancy in product packaging, such as defective or incorrect labelling, that goes unnoticed may prove disastrous for the product. Further, defective or incorrect labelling may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged product. This is especially true in the case of medicines, as customers need to buy the right composition and quantity of medicine. Further, defective or incorrect labelling may lead to harsh financial penalties and even litigation risk for manufacturing industries when product specifications such as quantities, packaging dimensions, ingredients, and allergen or toxicity warnings present on the packaging artworks are incorrect. Hence, manufacturing industries need to provide accurate information about the product on the packaging artworks.
Further, manually detecting defective or incorrect labelling on packaging artworks is incredibly time-consuming and error-prone. Also, the problem of defective or incorrect labelling on packaging artworks tends to grow on a large scale, as a manufacturing industry may provide products in different countries and hence needs to produce labelling on packaging artworks in multiple languages. Therefore, there is a need for a method and a system for detecting objects in artworks automatically, thereby providing efficient and effective detection of defective or incorrect labelling in artworks. Further, there is a need for a method and a system for automatically detecting and classifying objects in packaging artworks, without any human intervention.
OBJECT OF INVENTION
The object of the present invention is to provide a method and a system for detecting objects in packaging artworks automatically, thereby providing efficient and effective detection of defective or incorrect labelling in artworks. More specifically, the object of the present invention is to provide a method and a system for automatically detecting and classifying objects in packaging artworks, without any human intervention.
SUMMARY
The present application discloses a system for detecting and classifying objects in packaging artworks. The system includes a user device monitored by a user, and an object detection and classification unit. The object detection and classification unit is configured to receive an image of a packaging artwork from the user device and to detect a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes. The object detection and classification unit is further configured to assign a prediction label to each of the detected plurality of objects. The object detection and classification unit is also configured to generate a confidence score and a bounding box for each of the detected plurality of objects.
Further, the object detection and classification unit is configured to determine whether to reject a prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of each of the predetermined object classes to obtain a plurality of not rejected prediction labels. The object detection and classification unit is configured to detect a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes.
Furthermore, the object detection and classification unit is configured to apply a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes and convert the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
Also, the object detection and classification unit is configured to store the classified objects along with their permanent labels as a JSON file. The object detection and classification unit is configured to output the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user on the user device, wherein the object detection and classification unit provides the output by highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device.
The present disclosure further discloses a method for detecting and classifying objects in packaging artworks. The method includes receiving, by an object detection and classification unit, an image of a packaging artwork from a user device. The method includes detecting, by the object detection and classification unit, a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes. The method further includes assigning, by the object detection and classification unit, a prediction label to each of the detected plurality of objects.
Further, the method includes generating, by the object detection and classification unit, a confidence score and a bounding box for each of the detected plurality of objects. The method includes determining, by the object detection and classification unit, whether to reject a prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of each of the predetermined object classes to obtain a plurality of not rejected prediction labels. The method further includes detecting, by the object detection and classification unit, a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes.
Furthermore, the method includes applying, by the object detection and classification unit, a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes. The method includes converting, by the object detection and classification unit, the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
Also, the method includes storing, by the object detection and classification unit, the classified objects along with their permanent labels as a JSON file. The method includes outputting, by the object detection and classification unit, the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user on the user device, wherein outputting comprises highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device.
BRIEF DESCRIPTION OF DRAWINGS
The novel features and characteristics of the disclosure are set forth in the description. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following description of an illustrative embodiment when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawings wherein like reference numerals represent like elements and in which:
FIG. 1 illustrates a system 100 for detecting and classifying objects in packaging artworks, in accordance with an embodiment of the present disclosure.
FIG. 2 illustrates an exemplary image 200 of a packaging artwork inputted by the user 104, in accordance with an embodiment of the present disclosure.
FIG. 3A illustrates an exemplary nutritional table present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3B illustrates an exemplary barcode present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3C illustrates an exemplary nutri-score present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3D illustrates an exemplary front of panel declaration (FOP) present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3E illustrates an exemplary plurality of lines present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3F illustrates an exemplary symbol present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3G illustrates an exemplary image present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3H illustrates an exemplary text present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 3I illustrates exemplary pantone colours present on a packaging artwork, in accordance with an embodiment of the present disclosure.
FIG. 4 illustrates an exemplary scenario where two bounding boxes are generated for a predicted label, in accordance with an embodiment of the present disclosure.
FIG. 5A and FIG. 5B illustrate exemplary display screens 500 and 502 of the user device 102 displaying the classified objects along with their permanent label, bounding box coordinates and the confidence score, in accordance with an embodiment of the present disclosure.
FIG. 6 illustrates a method 600 for detecting and classifying objects in packaging artworks, in accordance with an embodiment of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the assemblies, structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in the drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, and these are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other, sub-systems, elements, structures, components, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this invention belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying figures.
The present invention focuses on providing a method and a system for detecting and classifying objects in packaging artworks used for packaging goods or products produced by diverse industries, such as consumer packaged goods, pharmaceuticals, etc. In dynamic manufacturing industries, packaging forms the first impression of a product and a brand. Therefore, the sale of any packaged good launched in the market by any industry depends on the content about the packaged good given on the packaging. The content present on the packaging is referred to as an artwork related to the packaged good, wherein the artwork includes images, a brand name, text, composition, a nutritional value table, etc. related to the packaged good. However, any discrepancy in product packaging, such as defective or incorrect labelling, that goes unnoticed may prove disastrous for the product. Further, defective or incorrect labelling may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged product. Also, defective or incorrect labelling may lead to harsh financial penalties and even litigation risk for manufacturing industries when product specifications such as quantities, packaging dimensions, ingredients, and allergen or toxicity warnings present on the packaging artworks are incorrect.
Therefore, the present disclosure discloses a method and a system for detecting objects in packaging artworks automatically, thereby providing efficient and effective detection of defective or incorrect labelling in packaging artworks. Further, the present disclosure discloses a method and a system for automatically detecting and classifying objects in packaging artworks, without any human intervention.
FIG. 1 illustrates a system 100 for detecting and classifying objects in packaging artworks, in accordance with an embodiment of the present disclosure. The system 100 includes a user device 102 monitored by a user 104, and an object detection and classification unit 106. The user device 102 relates to a hardware component, such as a keyboard, mouse, etc., which accepts data from the user 104, and also to a hardware component, such as a display screen of a desktop, laptop, tablet, etc., which displays data to the user 104. The user device 102 is configured to allow the user 104 to input an image of a packaging artwork for which objects are to be identified and classified.
FIG. 2 illustrates an exemplary image 200 of a packaging artwork inputted by the user 104, in accordance with an embodiment of the present disclosure. The image inputted by the user 104 may include a plurality of objects, such as text describing the product inside the packaging, the name of a product, a brand name, composition, a barcode, ingredients, storage instructions, nutrition facts, allergy information, dosage and precaution instructions, manufacturing and expiry dates, etc. The user 104 may be, but is not limited to, any employee of an industry monitoring the labelling or generation of packaging artworks, a third party monitoring the labelling or generation of packaging artworks, etc.
The user device 102 is further configured to send the image of the packaging artwork to the object detection and classification unit 106. The object detection and classification unit 106 is a hardware component capable of processing any data or information it receives. In certain embodiments, the object detection and classification unit 106 may be part of any regularly used device, such as a laptop, desktop, tablet, mobile device, etc. The object detection and classification unit 106 is configured to detect and classify objects on a packaging artwork, and to create a label for each of the objects detected.
The object detection and classification unit 106 is configured to receive the image of the packaging artwork, and to detect a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes. In an embodiment of the present disclosure, the object detection and classification unit 106 uses the following predetermined plurality of object classes for detecting the plurality of objects on the received image of the packaging artwork:
• artworks – an artwork may include packaging information such as graphics, logos, barcodes and textual information.
• tables - tables may include nutritional tables (USA and Europe food markets) and supplementary tables. A nutritional table may include information about selected basic foods, weight in grams, calories, and the amount of protein, carbohydrates, dietary fibre, fat, and saturated fat for each food. FIG. 3A illustrates an exemplary nutritional table present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• barcodes – a barcode may represent data in a machine-readable form. FIG. 3B illustrates an exemplary barcode present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• nutri-scores – a nutri-score may indicate the nutritional quality of food and beverages and may use 5 different colours for categorizing the food products packaged inside the packaging artwork. FIG. 3C illustrates an exemplary nutri-score present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• Front of panel declarations (FOP) – an FOP is a key policy tool for regulating food products and informing consumers about excessive amounts of sugars, total fats, saturated fats, trans fats, and sodium in a food product. FIG. 3D illustrates an exemplary front of panel declaration (FOP) present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• lines – a line may include a boundary line which separates colour and monochromatic areas or differently coloured areas of printing. A packaging artwork may include different types of lines, where a line also helps to differentiate between a front panel and a back panel of a packaging artwork. FIG. 3E illustrates an exemplary plurality of lines present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• symbols – a symbol may be a common feature of every packaging artwork. Symbols may span from food allergy symbols to recycling information, hazard symbols, vegan symbols, safety standard symbols, etc. Symbols provide important information to customers and help comply with government regulations. FIG. 3F illustrates an exemplary symbol present on a packaging artwork, in accordance with an embodiment of the present disclosure. The symbol illustrated in FIG. 3F represents that the product packaged inside the packaging artwork is vegetarian.
• images – a packaging artwork may include multiple images to draw a customer's attention to the particular product packaged inside the packaging artwork. FIG. 3G illustrates an exemplary image present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• text - text may include any text present on the packaging artwork that provides customers with information about the products packaged inside the packaging artwork. FIG. 3H illustrates exemplary text present on a packaging artwork, in accordance with an embodiment of the present disclosure.
• others - others may include anything present on a packaging artwork that is not categorized in any of the above-mentioned classes, for example pantone colours. FIG. 3I illustrates exemplary pantone colours present on a packaging artwork, in accordance with an embodiment of the present disclosure.
In another embodiment of the present disclosure, the object detection and classification unit 106 may use any other known object classes for detecting the plurality of objects on the received image of the packaging artwork. For illustration, one possible code representation of the classes listed above is sketched below.
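The following Python sketch is illustrative only; the identifier names are assumptions, not part of the disclosure.

```python
# Illustrative sketch: one possible representation of the predetermined
# plurality of object classes described above. Names are assumptions.
OBJECT_CLASSES = (
    "artwork",
    "table",
    "barcode",
    "nutri-score",
    "front-of-panel-declaration",  # FOP
    "line",
    "symbol",
    "image",
    "text",
    "others",  # e.g. pantone colours and anything not in the classes above
)
```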
In an embodiment of the present disclosure, the object detection and classification unit 106 may use a transfer learning (TL) model for detecting the plurality of objects on the received image of the packaging artwork. Transfer learning is a methodology where a pre-trained model is used to transfer knowledge already gained to a new task, improving learning on that task. For example, the object detection and classification unit 106 uses a Faster R-CNN ResNet50 transfer learning model, because the learned features (edges, colour) of such a TL model are also features of packaging artwork objects. Also, a TL model gives accurate and reliable results on a small dataset. Thus, retraining the last layer of the TL model helps in achieving semi-automatic learning for accurate object detection in packaging artworks.
Further, the TL model is trained for the object classes mentioned above on a small dataset (only 1,700 samples), where each sample is stored as an XML (extensible markup language) file for better processing. However, if new samples of packaging artworks become available, which may or may not include new object classes, the new samples may be added to the TL model's training data. Once the new samples are added, an XML file for each added sample is generated for further processing and saved in the dataset.
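By way of illustration, a minimal sketch of such a transfer-learning setup is given below, assuming the torchvision implementation of Faster R-CNN with a ResNet-50 backbone; the disclosure names the model family but not a specific library, so the function name, the weights argument and the class count here are assumptions.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_artwork_detector(num_classes: int):
    # Start from a model pre-trained on a large generic dataset so that
    # learned low-level features (edges, colour) transfer to artwork objects.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace only the last layer (the box-predictor head) so that it can be
    # retrained for the predetermined packaging-artwork object classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# 10 artwork object classes plus the mandatory background class (assumption).
detector = build_artwork_detector(num_classes=11)
```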
After detecting the plurality of objects on the received image of the packaging artwork, the object detection and classification unit 106 is configured to assign a prediction label to each of the detected plurality of objects. The object detection and classification unit 106 is further configured to generate a confidence score for each of the detected plurality of objects. In an embodiment of the present disclosure, a confidence score may be a number that lies between 0 and 1 and represents the estimated correctness of the assigned prediction label. In another embodiment of the present disclosure, a confidence score may be expressed on any other numerical scale.
Once the confidence score has been generated, the object detection and classification unit 106 generates a bounding box for each of the detected plurality of objects. Thereafter, the object detection and classification unit 106 determines whether to reject the prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of the predetermined object classes to obtain a plurality of not rejected prediction labels. The object detection and classification unit 106 rejects the prediction label of an object if the confidence score of the object is less than the predefined threshold value of the object's class. The object detection and classification unit 106 keeps the prediction label of an object if the confidence score of the object is more than the predefined threshold value of the object's class and adds the kept prediction label to the plurality of not rejected prediction labels. A sketch of this filtering step is given after the threshold embodiments below.
In an embodiment of the present disclosure, a predefined threshold value may be 0.5 for each of the object classes. In another embodiment of the present disclosure, a predefined threshold value may be any similar value for each of the object classes. In yet another embodiment of the present disclosure, a predefined threshold value may have different values for different object classes.
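A minimal sketch of the label-rejection step follows, assuming detections are plain dictionaries and that every class uses the 0.5 threshold of the first embodiment; the field and variable names are illustrative assumptions.

```python
# Per-class predefined threshold values; 0.5 for every class in this sketch,
# though the disclosure also contemplates different values per class.
DEFAULT_THRESHOLD = 0.5
CLASS_THRESHOLDS = {"table": 0.5, "barcode": 0.5, "text": 0.5}  # extend per class

def filter_prediction_labels(detections):
    """Return the 'not rejected' prediction labels: detections whose
    confidence score is not below the predefined threshold of their class."""
    kept = []
    for det in detections:  # det = {"label": str, "score": float, "box": tuple}
        threshold = CLASS_THRESHOLDS.get(det["label"], DEFAULT_THRESHOLD)
        if det["score"] >= threshold:
            kept.append(det)  # label kept; otherwise the prediction is rejected
    return kept
```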
Further, the object detection and classification unit 106 is configured to detect a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes. If an intersection of bounding boxes is determined by the object detection and classification unit 106, the object detection and classification unit 106 applies a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes. In an embodiment of the present disclosure, the confidence threshold may be 70%. In another embodiment of the present disclosure, the confidence threshold may be any other value. Thereafter, the object detection and classification unit 106 converts the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
For example, let us consider a scenario where two bounding boxes are generated for a prediction label. FIG. 4 illustrates an exemplary scenario where two bounding boxes are generated for a prediction label, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 4, two bounding boxes A and B, with coordinates (x1, x2, y1, y2) for A and (X1, X2, Y1, Y2) for B respectively, are generated for a prediction label. The two bounding boxes A and B intersect each other. Therefore, the object detection and classification unit 106 applies the confidence threshold (70%) to the intersecting bounding boxes A and B and selects the bounding box which is larger than the other. The object detection and classification unit 106 then converts the prediction label of the selected bounding box into a permanent label.
Further, if the same prediction label has two bounding boxes where one bounding box is present inside the other bounding box, the object detection and classification unit 106 selects the outer bounding box and not the inner bounding box. Similarly, the object detection and classification unit 106 checks bounding boxes for text inside a table or text inside text. Also, if a table and a text block intersect, the object detection and classification unit 106 calculates the area of the text encompassed inside the table and compares it with the confidence threshold (70%). If that area fraction is greater than the confidence threshold (70%), the object detection and classification unit 106 ignores the text block. This overlap-resolution logic is sketched below.
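A minimal Python sketch of the overlap-resolution logic follows, assuming bounding boxes are (x1, y1, x2, y2) tuples; the helper names and the exact overlap-fraction rule are illustrative assumptions around the disclosed 70% confidence threshold, not the definitive implementation.

```python
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def intersection_area(a, b):
    # Area of the rectangle shared by boxes a and b (0 if they do not meet).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return box_area((x1, y1, x2, y2))

def resolve_intersecting_boxes(a, b, threshold=0.70):
    """For two intersecting boxes of the same prediction label, keep only the
    larger box when the overlap fraction exceeds the confidence threshold;
    this also covers the box-inside-box case, where the outer box is kept."""
    smaller, larger = sorted((a, b), key=box_area)
    area = box_area(smaller)
    if area > 0 and intersection_area(a, b) / area >= threshold:
        return [larger]
    return [a, b]

def text_mostly_inside_table(text_box, table_box, threshold=0.70):
    """Ignore a text block when the fraction of its area encompassed by an
    intersecting table exceeds the confidence threshold."""
    area = box_area(text_box)
    return area > 0 and intersection_area(text_box, table_box) / area >= threshold
```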
The object detection and classification unit 106 then stores the classified objects along with their permanent labels as a JSON file; a minimal sketch of this step is given after this paragraph. Also, the object detection and classification unit 106 is configured to output the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user 104 on the user device 102, by highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device 102. FIG. 5A and FIG. 5B illustrate exemplary display screens 500 and 502 of the user device 102 displaying the classified objects along with their permanent label, bounding box coordinates and the confidence score, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 5B, the system 100 also detects and classifies Braille imprints, if they are present on the packaging artworks.
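A minimal sketch of the storage step follows, assuming the same detection dictionaries as above; the file name and record layout are assumptions, not prescribed by the disclosure.

```python
import json

def save_classified_objects(objects, path="classified_objects.json"):
    # Each classified object is written with its permanent label, bounding
    # box coordinates and confidence score, mirroring what is shown to the
    # user on the user device.
    records = [
        {
            "permanent_label": obj["label"],
            "bounding_box": list(obj["box"]),  # (x1, y1, x2, y2)
            "confidence_score": obj["score"],
        }
        for obj in objects
    ]
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(records, fh, indent=2)
```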
FIG. 6 illustrates a method 600 for detecting and classifying objects in packaging artworks, in accordance with an embodiment of the present disclosure. At step 602, the method includes receiving, by an object detection and classification unit 106, an image of a packaging artwork from a user device 102.
At step 604, the method includes detecting, by the object detection and classification unit 106, a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes.
At step 606, the method includes assigning, by the object detection and classification unit 106, a prediction label to each of the detected plurality of objects.
At step 608, the method includes generating, by the object detection and classification unit 106, a confidence score and a bounding box for each of the detected plurality of objects.
At step 610, the method includes determining, by the object detection and classification unit 106, whether to reject a prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of each of the predetermined object classes to obtain a plurality of not rejected prediction labels.
At step 612, the method includes detecting, by the object detection and classification unit 106, a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes.
At step 614, the method includes applying, by the object detection and classification unit 106, a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes.
At step 616, the method includes converting, by the object detection and classification unit 106, the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
At step 618, the method includes storing, by the object detection and classification unit 106, the classified objects along with their permanent labels as a JSON file.
At step 620, the method includes outputting, by the object detection and classification unit 106, the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user 104 on the user device 102. Outputting the classified objects along with their permanent label, bounding box coordinates and the confidence score includes highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device 102.
The system and method for detecting and classifying objects in packaging artworks disclosed in the present disclosure have numerous advantages. The system and method disclosed in the present disclosure are used for detecting objects in artworks automatically, thereby providing efficient and effective detection of defective or incorrect labelling in packaging artworks. Further, the system and method disclosed in the present disclosure are used for automatically detecting and classifying objects in packaging artworks, without any human intervention.
Furthermore, the system and method disclosed display the detected and classified objects in packaging artworks to the user on the user device, thereby allowing the user to visually inspect the objects and correct any defective or incorrect labelling of the packaging artworks well before the packaging artworks are printed and labelled in bulk. This helps in saving money, time and resources. It also helps in protecting the manufacturing industries from harsh financial penalties and litigation risks. Also, it helps in the smooth correction of defective or incorrect labels across different countries and different languages.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments.
It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
Any discussion of documents, acts, materials, devices, articles and the like that has been included in this specification is solely for the purpose of providing a context for the disclosure.
It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
The numerical values mentioned for the various physical parameters, dimensions or quantities are only approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
While considerable emphasis has been placed herein on the particular features of this disclosure, it will be appreciated that various modifications can be made, and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other modifications in the nature of the disclosure or the preferred embodiments will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
CLAIMS:
I/We Claim:
1. A system (100) for detecting and classifying objects in packaging artworks, the system (100) comprising:
an object detection and classification unit (106) configured to:
receive an image of a packaging artwork from a user device (102);
detect a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes;
assign a prediction label to each of the detected plurality of objects;
generate a confidence score and a bounding box for each of the detected plurality of objects;
determine whether to reject a prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of each of the predetermined object classes to obtain a plurality of not rejected prediction labels;
detect a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes;
apply a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes; and
convert the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
2. The system (100) as claimed in claim 1, wherein the object detection and classification unit (106) is configured to store the classified objects along with their permanent labels as a JSON file.
3. The system (100) as claimed in claim 1, wherein the object detection and classification unit (106) is configured to output the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user (104) on the user device (102), wherein the object detection and classification unit (106) outputs the classified objects along with their permanent label, bounding box coordinates and the confidence score by highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device (102).
4. The system (100) as claimed in claim 1, wherein the predetermined plurality of object classes comprises: artworks, tables, barcodes, nutri-scores, front of panel declarations (FOP), lines, symbols, images, text or pantone colours.
5. The system (100) as claimed in claim 1, wherein the object detection and classification unit (106) uses a transfer learning (TL) model for detecting the plurality of objects on the received image of the packaging artwork.
6. The system (100) as claimed in claim 5, wherein the object detection and classification unit (106) uses a Faster R-CNN ResNet50 transfer learning model.
7. The system (100) as claimed in claim 1, wherein the confidence score is a number that lies between 0 and 1.
8. The system (100) as claimed in claim 1, wherein the predefined threshold value of each of the object classes is 0.5.
9. The system (100) as claimed in claim 1, wherein the confidence threshold is 70%.
10. A method (600) for detecting and classifying objects in packaging artworks, the method (600) comprising:
receiving, by an object detection and classification unit (106), an image of a packaging artwork from a user device (102);
detecting, by the object detection and classification unit (106), a plurality of objects on the received image of the packaging artwork based on a predetermined plurality of object classes;
assigning, by the object detection and classification unit (106), a prediction label to each of the detected plurality of objects;
generating, by the object detection and classification unit (106), a confidence score and a bounding box for each of the detected plurality of objects;
determining, by the object detection and classification unit (106), whether to reject a prediction label of each of the detected plurality of objects by comparing the generated confidence score of each of the detected plurality of objects with a predefined threshold value of each of the predetermined object classes to obtain a plurality of not rejected prediction labels;
detecting, by the object detection and classification unit (106), a size of the bounding box generated for each of the objects of the obtained plurality of not rejected prediction labels to determine intersections of bounding boxes;
applying, by the object detection and classification unit (106), a confidence threshold to the intersecting bounding boxes for each of the objects of the obtained plurality of not rejected prediction labels to select one bounding box out of the intersecting boxes; and
converting, by the object detection and classification unit (106), the prediction label of the selected bounding box into a permanent label for each of the objects of the plurality of not rejected prediction labels, wherein the objects of the permanent labels of the plurality of not rejected prediction labels represent the objects classified.
11. The method (600) as claimed in claim 10, wherein the method (600) comprises storing, by the object detection and classification unit (106), the classified objects along with their permanent labels as a JSON file.
12. The method (600) as claimed in claim 10, wherein the method (600) comprises outputting, by the object detection and classification unit (106), the classified objects along with their permanent label, bounding box coordinates and the confidence score to the user (104) on the user device (102), wherein outputting the classified objects along with their permanent label, bounding box coordinates and the confidence score comprises highlighting the classified objects along with their permanent label, bounding box coordinates and the confidence score at the user device (102).
13. The method (600) as claimed in claim 10, wherein detecting, by the object detection and classification unit (106), the plurality of objects on the received image of the packaging artwork comprises using a transfer learning (TL) model, wherein the TL model is a Faster R-CNN ResNet50 transfer learning model.