Abstract: Method And System For Inspecting Printing Defects. The present invention discloses a system (100) and a method (700) for inspecting printing defects in packaging artworks. The system (100) includes a user device (102) monitored by a user (104) and a printing defects inspection unit (106). The printing defects inspection unit (106) receives a source image and a target image and analyzes them to identify a plurality of printing errors in the target image based on the source image. The printing defects inspection unit (106) then aligns a wrapped image of the target image with the source image and analyzes the aligned wrapped target image to identify a plurality of image defects in the wrapped target image. Further, the printing defects inspection unit (106) identifies the printing defects in the aligned wrapped target image as a set of contour points or bounding box coordinates. FIG. 1
DESC:Method And System For Inspecting Printing Defects
FIELD OF INVENTION
The present invention relates to the field of image processing. More specifically, the present invention relates to a system and a method for inspecting printing defects in scanned copies of printed samples of packaging artworks.
BACKGROUND OF INVENTION
Investments in packaging are increasing day-by-day in various industries, especially pharmaceutical, food and beverages, etc. The sale of any packaged good launched in the market depends on the content about the good given on its packaging. The content present on the packaging is referred to as an artwork related to the packaged good, wherein the artwork includes images, the brand name, text, composition, a nutritional value table, etc. related to the packaged good.
However, during printing of such packaging artworks, many versions of a packaging artwork may be formed, where each version may have some or many defects or dissimilarities with respect to the actual version of the packaging artwork. Such defects or dissimilarities may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged good. This is especially true in the case of medicines, as customers need to buy the right composition and quantity of medicine. Hence, printing defects management is very crucial in packaging industries.
However, most known printing defects management systems detect defects in the printed samples of packaging artworks either manually or by scanning the packaging artworks. In such a scenario, the packaging artworks can only be compared once they have been printed in bulk and are ready for packaging the products to be shipped to the market. This wastes a lot of money and time because, if the printed packaging artworks have defects or dissimilarities, there is no way of correcting them. The industry officials either have to ship the packaged products as is or have to repeat the entire printing and packaging process, thereby leading to monetary losses. Also, manual inspection of the print quality of the packaging artworks does not capture the print defects impacting quality and hence may lead to expensive reprinting and an increase in wasted packaging artworks, thereby lowering profits and reducing customer satisfaction.
Further, quality and consistency in printing of packaging artworks is a basic requirement for any business. Many errors may be found during print inspections of packaging artworks, such as incorrect orientation, colour variances, defective text and other distortions like ink marks, etc. Hence, the demand for print inspection or defect management systems is increasing day-by-day across many industries. Therefore, there is a need for a system and a method for inspecting printing defects in printed samples of packaging artworks automatically, thereby providing efficient and effective inspection of defects. Further, there is a need for a system and a method for inspecting printing defects in printed copies of packaging artworks well before the packaging artworks are printed in bulk, and without any human intervention.
OBJECT OF INVENTION
The object of the present invention is to provide a system and a method for inspecting printing defects in printed samples of packaging artworks automatically, thereby providing efficient and effective inspection of defects. More specifically, the object of the present invention is to provide a system and a method for inspecting printing defects in printed copies of packaging artworks well before the packaging artworks are printed in bulk and that too without any human intervention.
SUMMARY
The present application discloses a system for inspecting printing defects in packaging artworks. The present application discloses that the system includes a user device monitored by a user, and a printing defects inspection unit. The printing defects inspection unit includes an image pre-processing unit, an image registration unit, an image processing unit, an image analysis unit, an image compare unit, and an output unit. The image pre-processing unit is configured to receive a source image and a target image from the user device, and to analyze the source image and the target image to identify a plurality of printing errors in the target image based on the source image. The source image and the target image are two images of a same packaging artwork, where the source image is a digitally approved master image of a packaging artwork, and the target image is a scanned copy of a printed sample of the digitally approved master image of the packaging artwork.
The image registration unit is configured to align the target image with the source image. The image registration unit identifies key points on the source image and the target image that are stable under a plurality of image transformations, and converts each of the identified key points into a binary descriptor. Further, the image registration unit identifies alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. Thereafter, the image registration unit applies homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities, and aligns the wrapped target image with the source image.
The image processing unit is configured to analyze the aligned wrapped target image to identify a plurality of image defects in the wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image. The image analysis unit is configured to determine if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit.
The image compare unit is configured to compare a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates.
The output unit is configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device to display dissimilarities between the source image and the aligned wrapped target image to the user, wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks.
The present disclosure further discloses a method for inspecting printing defects in packaging artworks. The method includes receiving, at an image pre-processing unit, a source image and a target image from a user device. The method further includes analysing, at the image pre-processing unit, the source image and the target image to identify a plurality of printing errors in the target image based on the source image.
Further, the method includes aligning, at an image registration unit, the target image with the source image. The aligning comprises identifying key points on the source image and the target image that are stable under a plurality of image transformations, and converting each of the identified key points into a binary descriptor. Further, aligning comprises identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. Also, it comprises applying homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities, and aligning the wrapped target image with the source image.
Furthermore, the method includes analyzing, at an image processing unit, the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image. The method includes determining, at an image analysis unit, if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit.
Also, the method includes comparing, at an image compare unit, a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates. The method includes outputting, at an output unit, the set of contour points or the bounding box coordinates in pixel coordinates at the user device to display dissimilarities between the source image and the aligned wrapped target image to the user, wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks.
BRIEF DESCRIPTION OF DRAWINGS
The novel features and characteristics of the disclosure are set forth in the description. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following description of an illustrative embodiment when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawings wherein like reference numerals represent like elements and in which:
FIG. 1 illustrates a system 100 for inspecting printing defects in packaging artworks, in accordance with an embodiment of the present disclosure.
FIG. 2 illustrates an exemplary source image 202 and target image 204 of a packaging artwork 200 inputted by the user 104, in accordance with an embodiment of the present disclosure.
FIGS. 3A-3D illustrate a pictorial representation of various printing errors that may be present in the target image, in accordance with an embodiment of the present disclosure.
FIG. 4A and FIG. 4B illustrate an exemplary image of a packaging artwork before and after bilateral filtering and adaptive thresholding, in accordance with an embodiment of the present application.
FIG. 5A and FIG. 5B illustrate an exemplary image of a packaging leaflet before and after Otsu thresholding, in accordance with an embodiment of the present disclosure.
FIG. 6 illustrates an exemplary display screen 600 of the user device 102 illustrating dissimilarities between a source image 602 and an aligned wrapped target image 604 highlighted by the output unit 118, in accordance with an embodiment of the present disclosure.
FIG. 7 illustrates a method 700 for inspecting printing defects in packaging artworks, in accordance with an embodiment of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the assemblies, structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other, sub-systems, elements, structures, components, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this invention belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying figures.
The present invention focuses on providing a system and a method for inspecting printing defects in scanned copies of printed samples of packaging artworks used for packaging goods or products produced by diverse industries, such as consumer packaged goods, pharmaceuticals, etc. The sale of any packaged good launched in the market depends on the content about the good given on its packaging. However, during printing of such packaging artworks, many versions of a packaging artwork may be formed, where each version may have some or many defects or dissimilarities with respect to the actual version of the packaging artwork. Such defects or dissimilarities may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged good. This is especially true in the case of medicines, as customers need to buy the right composition and quantity of medicine. Hence, printing defects management is very crucial in packaging industries.
However, most known printing defects management systems detect defects in the printed samples of packaging artworks either manually or by scanning the packaging artworks. Manual inspection of the print quality of the packaging artworks does not capture the print defects impacting quality and hence may lead to expensive reprinting and an increase in wasted packaging artworks, thereby lowering profits and reducing customer satisfaction. Therefore, the present disclosure discloses a system and a method for inspecting printing defects in printed samples of packaging artworks automatically, thereby providing efficient and effective inspection of defects. Further, the present disclosure discloses a system and a method for inspecting printing defects in printed copies of packaging artworks well before the packaging artworks are printed in bulk, and without any human intervention.
FIG. 1 illustrates a system 100 for inspecting printing defects in packaging artworks, in accordance with an embodiment of the present disclosure. The system 100 includes a user device 102 monitored by a user 104, and a printing defects inspection unit 106. The user device 102 relates to hardware components such as a keyboard, mouse, etc., which accept data from the user 104, and also to hardware components such as the display screen of a desktop, laptop, tablet, etc., which display data to the user 104. The user device 102 is configured to allow the user 104 to input a digitally approved master image of a packaging artwork as a source image and a scanned copy of a printed sample of the digitally approved master image of the packaging artwork as a target image. The source image and the target image are images of a same packaging artwork. The user 104 may be, but is not limited to, any employee of an industry monitoring printing of packaging artworks, a person at a printing unit who may have received a printing order of packaging artworks, etc.
In an embodiment of the present disclosure, the user 104 may use a high quality flat-bed scanner for scanning printed samples of digitally approved master image of the packaging artworks because the flat-bed scanner has a flat surface for scanning documents and can capture all elements on a document without requiring any movement of the document. In another embodiment of the present disclosure, the user 104 may select any scanner for scanning printed samples of digitally approved master image of the packaging artworks based on various criteria and a requirement of the user 104. The various criteria may include, but not limited to, the following:
Package artwork document size;
a resolution value of 600 dpi (dots per inch) for scanning. Lower dpi leads to loss in detail and quality, and lower accuracy;
Depth estimation. For depth estimation, a 3D (three-dimensional) scanner may be used as depth estimation is not applicable to 2D (two-dimensional) scanners. 3D scanners use multiple lighting to approximate depth to reproduce high quality images where depth may be discerned;
a scanner with higher colour accuracy representation, thereby producing higher accuracy during comparison; or
high quality scanners which alleviate noise which may arise due to folding of samples (foils).
FIG. 2 illustrates an exemplary source image 202 and target image 204 of a packaging artwork 200 inputted by the user 104, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 2, the source image 202 and the target image 204 are images of a same packaging artwork which may be used to package a medicine. The packaging artwork includes information such as the name of the medicine, the composition and quantity of the medicine, dosage and precaution instructions, storage guidelines, etc.
The user device 102 is further configured to send the source image and the target image to the printing defects inspection unit 106. The printing defects inspection unit 106 is a hardware component which is capable of processing any data or information it receives. In certain embodiments, the printing defects inspection unit 106 may be part of any regularly used device, such as a laptop, desktop, tablet, mobile device, etc. The printing defects inspection unit 106 is configured to inspect printing defects in the received target image based on the received source image. The printing defects inspection unit 106 includes an image pre-processing unit 108, an image registration unit 110, an image processing unit 112, an image analysis unit 114, an image compare unit 116, and an output unit 118.
The image pre-processing unit 108 is configured to receive the source image and the target image from the user device 102 and to analyze the source image and the target image to identify a plurality of printing errors in the target image based on the source image. Since the target image is a scanned copy of a printed sample of the source image, the target image may include various printing errors as compared to the source image. The printing errors may include, but are not limited to, colour proofing errors, bleeding errors, trapping errors, eye marks, lines or cuts, etc.
FIGS. 3A-3D illustrate a pictorial representation of various printing errors that may be present in the target image, in accordance with an embodiment of the present disclosure. FIG. 3A illustrates an exemplary bleeding error that may be present in the target image, in accordance with an embodiment of the present disclosure. Every printable document includes an extra margin (called a bleed) which allows for inaccuracies to be trimmed off along the margin line when the document is printed. This also happens when a printer is unable to handle high colour resolution. Further, trimming the bleeds is important for preparing print documents as well as folding process of documents. Therefore, if a bleed is present in the source image, and is not trimmed and appears in the target image, or vice-versa, the image pre-processing unit 108 identifies it as a printing error (bleeding error) while analysing the source image and the target image.
FIG. 3B illustrates an exemplary trapping error that may be present in the target image, in accordance with an embodiment of the present disclosure. Trapping errors are errors which may be caused due to misregistration, i.e., when two layers of ink are not perfectly aligned, thereby causing gaps or white-space on a printed document. The image pre-processing unit 108 identifies trapping error as a printing error while analysing the source image and the target image.
FIG. 3C and FIG. 3D illustrate exemplary stain mark errors that may be present in the target image, in accordance with an embodiment of the present disclosure. Stain mark errors may be caused by a worn-out drum unit or printer cartridge, which may form black lines and dots on a printed document. In an embodiment of the present disclosure, stain marks may include an ‘eye mark’ (also known as an ‘eye spot’), which is a small rectangular printed area located near the edge of the printed flexible packaging material. It indicates the start of the page for printing. At every 4th or 8th eye mark, there may be a stain. For example, as illustrated in FIG. 3C, the 2nd eye mark shows a stain represented by number 9. In another embodiment of the present disclosure, stain marks may be caused by ink being short and buttery. In yet another embodiment, stain marks may be caused by other printer defects and cylinder issues. The image pre-processing unit 108 identifies the stain marks as printing errors while analysing the source image and the target image.
The image registration unit 110 is also configured to receive the source image and the target image from the user device 102 and to align the target image with the source image. Any two images can be compared properly only if they are aligned with each other. Image alignment, or image registration, is a process of wrapping images so that there is a perfect line-up between two images. Therefore, the image registration unit 110 uses an image alignment technique to align the target image with the source image. In an image alignment technique, a sparse set of features is detected in one image and matched with the features in the other image. The image alignment technique also identifies interesting stable points, known as key points or feature points, in an image. The key points or feature points are similar to the points that a human notices when he/she sees an image for the first time.
In an embodiment of the present disclosure, the image registration unit 110 uses an Oriented FAST and Rotated BRIEF (ORB) algorithm to detect the key points or feature points in the source image and the target image. ORB is a combination of two algorithms: FAST (Features from Accelerated Segment Test) and BRIEF (Binary Robust Independent Elementary Features). FAST identifies key points on the source image and the target image that are stable under image transformations like translation (shift), scale (increase/decrease in size), and rotation, and gives the (x, y) coordinates of such points. BRIEF takes the identified key points and turns them into a binary descriptor or a binary feature vector. The key points found by the FAST algorithm and the binary descriptors created by the BRIEF algorithm together represent an object of an image. A threshold of at most 30,000 key points is defined for ORB to control the number of key points extracted. The advantages of ORB are that it is very fast, accurate, license-free, and gives a high recognition rate. In another embodiment of the present disclosure, the image registration unit 110 may use any known technique for detecting the key points or feature points in the source image and the target image.
After the key points have been identified in the source image and the target image and each detected key point has been converted into a binary descriptor, the image registration unit 110 identifies alignment similarities between the source image and the target image by matching the binary descriptors of the source image with the binary descriptors of the target image. In an embodiment of the present disclosure, the image registration unit 110 may use the Hamming distance as a measure of similarity between a binary descriptor of the source image and a binary descriptor of the target image. The image registration unit 110 may then sort the matches by goodness of match and retain the best matches using a 15% threshold.
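The Hamming-distance matching and 15% goodness-of-match selection described above can be sketched in pure Python with NumPy. Real implementations typically rely on an optimized matcher (e.g. OpenCV's brute-force matcher with Hamming norm); the function names and the byte-array descriptor layout below are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def hamming_distance(d1: np.ndarray, d2: np.ndarray) -> int:
    """Number of differing bits between two binary descriptors stored
    as uint8 byte arrays (the layout ORB/BRIEF descriptors use)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_descriptors(src: np.ndarray, tgt: np.ndarray,
                      keep_fraction: float = 0.15):
    """Brute-force match each source descriptor to its nearest target
    descriptor by Hamming distance, then keep only the best
    `keep_fraction` of matches (the 15% goodness-of-match threshold)."""
    matches = []
    for i, d1 in enumerate(src):
        dists = [hamming_distance(d1, d2) for d2 in tgt]
        j = int(np.argmin(dists))
        matches.append((i, j, dists[j]))
    matches.sort(key=lambda m: m[2])  # smallest distance = best match
    keep = max(1, int(len(matches) * keep_fraction))
    return matches[:keep]
```

For two descriptors differing in a single bit, `hamming_distance` returns 1; sorting by distance and truncating the list implements the goodness-of-match filter in a few lines.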
In another embodiment of the present disclosure, the image registration unit 110 may use homography with the Random Sample Consensus (RANSAC) method to find similarities between the binary descriptors of the source image and the binary descriptors of the target image. A homography may be computed when there are 4 or more corresponding key points in the source image and the target image. Basically, a homography is a 3×3 matrix. Let us assume that (x1, y1) are the coordinates of a key point in the source image and (x2, y2) are the coordinates of the same key point in the target image. Then, the homography (H) for the coordinates is represented by equation 1:
        | h00  h01  h02 |
    H = | h10  h11  h12 |                                    (1)
        | h20  h21  h22 |
Once an accurate homography is calculated, the homography transformation is applied to all pixels in one image to map it to the other image. Therefore, the Homography (H) is then applied to all the pixels of the target image to obtain a wrapped target image, as represented by equation 2:
    | x1 |       | x2 |
    | y1 | = H * | y2 |                                      (2)
    | 1  |       | 1  |
The image registration unit 110 then aligns the wrapped target image with the source image. RANSAC is a robust estimation technique. RANSAC has the advantage that it produces a correct result even in the presence of a large number of bad matches or dissimilarities between the source image and the target image, by removing outlier features of both images. In another embodiment of the present disclosure, the image registration unit 110 may use any known method to find similarities between the binary descriptors of the source image and the binary descriptors of the target image.
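Applying the homography of equation (2) to a single pixel can be illustrated with a short sketch: the pixel is lifted to homogeneous coordinates, multiplied by the 3×3 matrix H, and de-homogenised by the third component. The helper name `warp_point` is an assumption for illustration; in practice a library routine warps all pixels at once.

```python
import numpy as np

def warp_point(H: np.ndarray, x: float, y: float) -> tuple:
    """Map one pixel (x, y) through the 3x3 homography H, as in
    equation (2): multiply in homogeneous coordinates, then divide
    by the third component to return to pixel coordinates."""
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

With the identity matrix the point is unchanged, and with a pure translation homography (last column holding the shift) the point moves by exactly that shift, which is a quick sanity check on the convention.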
After the wrapped target image has been aligned with the source image by the image registration unit 110, the image processing unit 112 is configured to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image. Since the target image is a scanned copy of a print sample of the source image, the target image may include noise and colour variation effects. Therefore, the image processing unit 112 identifies these errors and corrects them.
In an embodiment of the present disclosure, the image processing unit 112 uses bilateral filtering to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image. The image processing unit 112 uses bilateral filtering when the aligned wrapped target image is an image of an artwork. Bilateral filtering uses a bilateral filter, which is a non-linear, edge-preserving, and noise-reducing smoothing filter for images. The bilateral filter replaces the intensity of each pixel with a weighted average of intensity values from nearby pixels. Further, the image processing unit 112 uses adaptive thresholding to identify and correct colour variations. Adaptive thresholding is a technique in which a threshold value is calculated for smaller regions of an image, thereby generating different threshold values for different regions. Adaptive thresholding is also used to separate desirable foreground image objects from the background based on the difference in pixel intensities in each region of the image. Therefore, the image processing unit 112 uses bilateral filtering with adaptive thresholding for smoothening images and reducing noise, while preserving edges. FIG. 4A and FIG. 4B illustrate an exemplary image of a packaging artwork before and after bilateral filtering and adaptive thresholding, in accordance with an embodiment of the present application.
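A minimal mean-based adaptive threshold of the kind described above might be sketched as follows. Production systems would use an optimized library routine, and the block size and constant below are illustrative defaults chosen for the sketch, not values from the disclosure.

```python
import numpy as np

def adaptive_threshold(img: np.ndarray, block: int = 11,
                       c: float = 2.0) -> np.ndarray:
    """Mean-based adaptive thresholding: each pixel is compared with
    the mean of its (block x block) neighbourhood minus a constant c,
    so a separate threshold is computed for each region of the image."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 255 if img[i, j] > local_mean - c else 0
    return out
```

Because the threshold is recomputed per neighbourhood, a bright pixel stands out against its own local background rather than against one global cut-off, which is what lets the technique separate foreground objects region by region.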
In another embodiment of the present disclosure, the image processing unit 112 uses Otsu thresholding to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image. The image processing unit 112 uses Otsu thresholding when the aligned wrapped target image is an image of a leaflet. Otsu's thresholding is an automatic thresholding technique for binarization in image processing. Otsu's thresholding finds an optimal threshold value for an input image by going through all possible threshold values (from 0 to 255). Otsu thresholding takes care of back-page impressions in scanned leaflets. FIG. 5A and FIG. 5B illustrate an exemplary image of a packaging leaflet before and after Otsu thresholding, in accordance with an embodiment of the present disclosure.
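Otsu's exhaustive sweep over candidate thresholds can be sketched directly from its definition: for each candidate value, split the histogram into two classes and keep the split that maximises the between-class variance. This is an illustrative pure-NumPy version, not the unit's actual implementation.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Otsu's method: sweep every candidate threshold 0..255 and keep
    the one maximising the between-class variance of the two pixel
    classes (equivalently, minimising the within-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]            # weight of the "background" class
        if w0 == 0:
            continue
        w1 = total - w0          # weight of the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a strongly bimodal leaflet scan (dark text on a light page) the chosen threshold falls between the two intensity peaks, which is why Otsu's method suppresses faint back-page impressions that sit close to the background peak.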
Once the image processing unit 112 has identified and corrected the plurality of image defects in the aligned wrapped target image, the image analysis unit 114 is configured to determine if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit 108. In an embodiment of the present disclosure, the image analysis unit 114 uses the Structural Similarity Index (SSIM) to find similarities and/or dissimilarities between the aligned wrapped target image and the source image. SSIM is a perceptual metric used to measure differences or dissimilarities between two similar images. SSIM scores image similarity on a scale of -1 to 1, where a score of 1 means that the images are very similar and a score of -1 means that the images are very different. Hence, SSIM fits well for artwork comparison. In another embodiment of the present disclosure, the image analysis unit 114 may use any known technique for finding similarities and/or dissimilarities between the aligned wrapped target image and the source image.
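The SSIM score underlying this comparison can be sketched as a single-window computation. The practical metric averages this score over small sliding windows (which is what the SSIM WINDOW SIZE parameter controls), but the per-window formula is the same; the function below follows the standard SSIM definition and is illustrative only.

```python
import numpy as np

def ssim(a: np.ndarray, b: np.ndarray, L: float = 255.0) -> float:
    """Global (single-window) SSIM between two grayscale images of the
    same shape, combining luminance, contrast and structure terms with
    the standard stabilising constants c1 and c2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2))
            / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

An image compared with itself scores exactly 1, and an image compared with its intensity inverse scores well below 1 because the covariance term turns negative, which matches the -1 to 1 interpretation given above.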
The image analysis unit 114 uses the following predefined thresholds to determine similarities and/or dissimilarities between the aligned wrapped target image and the source image:
SSIM WINDOW SIZE = 25 – This value is based on the character size observed across hundreds of samples.
SSIM VERIFICATION VALUE = [0.925, 0.975] – These values are used to verify each individual deviation or dissimilarity, for normal and high sensitivity, respectively.
MORPH KERNEL SIZE = 10 – This kernel size is used in a morphological closing operation to merge nearby deviations or dissimilarities.
GAUSSIAN BLUR = (3, 3) – This is the kernel size with which both the source image and the wrapped target image are blurred for smoothing.
RAW THRESHOLDS = (205, 253) – These threshold values are defined to choose between thresholds for normal and highly sensitive differences in the SSIM output, and to identify which pixels are potential deviations or dissimilarities.
THRESHOLD RANGE = (0.75, 3.0, 18.0, 50.0) – This threshold range is defined to choose threshold values based on the percentage of pixel differences or dissimilarities (extreme low range, low range, mid-range, and high range).
THRESHOLDS NON 0 = (127.5, 160, 180, 240, 250) – These threshold values are used for artworks where rotation, translation, or scaling is the major difference or dissimilarity. For non-zero rotation angles, interpolation artifacts may appear due to rotation; these threshold values help ignore such interpolation artifacts.
127.5 – High range threshold: this value relates to pack inserts and digital versus print-proof comparisons (a print proof has highly flattened images with high noise).
160 – Mid-range threshold: this value relates to a significant transformation between two versions, which may lead to a registration issue.
180 – Lower mid-range threshold: this value relates to a minor transformation leading to a registration issue.
240 – Low range threshold: this value relates to the same orientation and no transformation, with differences or dissimilarities.
250 – Extreme low range threshold: this value relates to no transformation with very minute differences or dissimilarities.
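The predefined thresholds above can be collected into a single configuration object. The constants are copied from the list above; the selector function is a hypothetical illustration of how a pixel-difference percentage could be mapped onto the THRESHOLDS NON 0 values, and its exact mapping is an assumption, not taken from the specification.

```python
# Constants copied from the predefined-thresholds list above.
INSPECTION_THRESHOLDS = {
    "SSIM_WINDOW_SIZE": 25,
    "SSIM_VERIFICATION_VALUE": (0.925, 0.975),   # normal, high sensitivity
    "MORPH_KERNEL_SIZE": 10,
    "GAUSSIAN_BLUR": (3, 3),
    "RAW_THRESHOLDS": (205, 253),
    "THRESHOLD_RANGE": (0.75, 3.0, 18.0, 50.0),  # % of pixel differences
    "THRESHOLDS_NON_0": (127.5, 160, 180, 240, 250),
}

def pick_non_zero_threshold(diff_percent, cfg=INSPECTION_THRESHOLDS):
    # Hypothetical mapping: a larger percentage of differing pixels selects
    # a lower (more tolerant) threshold, matching the high-to-extreme-low
    # ordering of the THRESHOLDS NON 0 values described above.
    extreme_low, low, mid, high = cfg["THRESHOLD_RANGE"]
    t_high, t_mid, t_low_mid, t_low, t_extreme_low = cfg["THRESHOLDS_NON_0"]
    if diff_percent >= high:
        return t_high
    if diff_percent >= mid:
        return t_mid
    if diff_percent >= low:
        return t_low_mid
    if diff_percent >= extreme_low:
        return t_low
    return t_extreme_low
```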
If, based on the above-mentioned predefined thresholds and the plurality of printing errors, the image analysis unit 114 determines that the source image and the aligned wrapped target image are not similar, the image compare unit 116 compares a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image. The image compare unit 116 determines the dissimilarities between the source image and the aligned wrapped image as a set of contour points or bounding box coordinates. Contour points provide more accurate results than bounding box coordinates, as a bounding box may include non-difference regions around an individual difference. The dynamically and automatically chosen contour points correspond to actual differences. A contour is simply a curve joining all the continuous points along a boundary that have the same colour or intensity. Each individual contour is an array of (x, y) coordinates of the boundary points of the object.
The image analysis unit 114 further allows the user 104 to find extremely minute differences by choosing a high-sensitivity option. The image compare unit 116 returns the translation value, theta value (rotation in degrees), scaling value, contours, and homography matrix values as a JSON file to the output unit 118.
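A JSON payload of this kind can be serialised with the standard library. The field names here are illustrative assumptions, not taken from the specification.

```python
import json

def registration_report(tx, ty, theta_deg, scale, homography, contours):
    # Hypothetical serialisation of the values the image compare unit
    # returns: translation, rotation, scale, homography matrix, contours.
    report = {
        "translation": {"x": tx, "y": ty},
        "theta": theta_deg,          # rotation in degrees
        "scale": scale,
        "homography": [list(row) for row in homography],
        "contours": [[list(pt) for pt in c] for c in contours],
    }
    return json.dumps(report)
```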
The output unit 118 is configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device 102 to display dissimilarities between the source image and the aligned wrapped target image to the user 104, wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks. The output unit 118 outputs the dissimilarities by highlighting the set of contour points or the bounding box coordinates in pixel coordinates at the user device 102. FIG. 6 illustrates an exemplary display screen 600 of the user device 102 showing dissimilarities between a source image 602 and an aligned wrapped target image 604 highlighted by the output unit 118, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 6, the dissimilarities between the source image 602 and the aligned wrapped target image 604 are displayed as highlighted contour points or bounding box coordinates and are marked by numbers 1-12 in the source image as well as in the aligned wrapped target image. The same number in the source image and in the aligned wrapped target image highlights dissimilarities in the same area of the image.
FIG. 7 illustrates a method 700 for inspecting printing defects in packaging artworks, in accordance with an embodiment of the present disclosure. At step 702, the method includes receiving, at an image pre-processing unit 108, a source image and a target image from a user device 102. At step 704, the method includes analysing, at the image pre-processing unit 108, the source image and the target image to identify a plurality of printing errors in the target image based on the source image.
At step 706, the method includes aligning, at an image registration unit 110, the target image with the source image. The aligning includes identifying key points on the source image and the target image that are stable under a plurality of image transformations and converting each of the identified key points into a binary descriptor. The aligning further includes identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. Also, aligning includes applying homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities; and aligning the wrapped target image with the source image.
At step 708, the method includes analyzing, at an image processing unit 112, the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image. In an embodiment of the present disclosure, the image processing unit 112 uses a bilateral filtering with adaptive thresholding to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image. In another embodiment of the present disclosure, the image processing unit 112 uses an Otsu thresholding to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image.
At step 710, the method includes determining, at an image analysis unit 114, if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit 108.
At step 712, the method includes comparing, at an image compare unit 116, a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit 116 determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates.
At step 714, the method includes outputting, at an output unit 118, the set of contour points or the bounding box coordinates in pixel coordinates at the user device 102 to display dissimilarities between the source image and the aligned wrapped target image to the user 104, wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks.
The system and method for inspecting printing defects in packaging artworks disclosed in the present disclosure have numerous advantages. The system and method disclosed in the present disclosure are used in the packaging and labelling industry to provide efficient and qualitative inspection of printing defects in packaging artworks. Further, the system and method disclosed in the present disclosure are used for inspecting printing defects in printed samples of packaging artworks automatically, without any human intervention. Furthermore, the system and method disclosed in the present disclosure are used for inspecting printing defects in printed copies of packaging artworks well before the packaging artworks are printed in bulk, thereby avoiding waste of packaging materials, time, money, and human effort. Also, the system and method disclosed in the present disclosure inspect printing defects by analyzing the target image twice: once for identifying printing errors and once for identifying image defects. This makes the system and method effective and efficient, as the likelihood of any defect or dissimilarity going unnoticed is very low.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments.
It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
Any discussion of documents, acts, materials, devices, articles and the like that has been included in this specification is solely for the purpose of providing a context for the disclosure.
It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
The numerical values mentioned for the various physical parameters, dimensions or quantities are only approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
While considerable emphasis has been placed herein on the particular features of this disclosure, it will be appreciated that various modifications can be made, and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other modifications in the nature of the disclosure or the preferred embodiments will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
CLAIMS: I/We Claim:
1. A system (100) for inspecting printing defects in packaging artworks, the system (100) comprising:
an image pre-processing unit (108) configured to receive a source image and a target image from a user device (102), and to analyze the source image and the target image to identify a plurality of printing errors in the target image based on the source image;
an image registration unit (110) configured to align the target image with the source image, wherein the image registration unit (110) is configured to:
identify key points on the source image and the target image that are stable under a plurality of image transformations;
convert each of the identified key points into a binary descriptor;
identify alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image;
apply homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities; and
align the wrapped target image with the source image;
an image processing unit (112) configured to analyze the aligned wrapped target image to identify a plurality of image defects in the wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image;
an image analysis unit (114) configured to determine if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit (108);
an image compare unit (116) configured to compare a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit (116) determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates; and
an output unit (118) configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102) to display dissimilarities between the source image and the aligned wrapped target image to the user (104), wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks.
2. The system (100) as claimed in claim 1, wherein the source image is a digitally approved master image of a packaging artwork, and the target image is a scanned copy of printed sample of the digitally approved master image of the packaging artwork.
3. The system (100) as claimed in claim 1, wherein the source image and the target image are two images of a same packaging artwork.
4. The system (100) as claimed in claim 1, wherein the plurality of printing errors comprises colour proofing, bleeding errors, tapping errors, eye marks, lines or cuts.
5. The system (100) as claimed in claim 1, wherein the image registration unit (110) uses Oriented FAST and Rotated BRIEF (ORB) technique to identify the key points and to convert the identified key points into binary descriptors.
6. The system (100) as claimed in claim 1, wherein the plurality of image transformations comprises translation (shift), scale (increase/decrease in size), rotation, reflection, and dilation.
7. The system (100) as claimed in claim 1, wherein the image registration unit (110) applies homography to the plurality of pixels of the target image using Random Sample Consensus (RANSAC), and wherein the plurality of pixels comprises all the pixels of the target image.
8. The system (100) as claimed in claim 1, wherein the image processing unit (112) uses a bilateral filtering with adaptive thresholding to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image.
9. The system (100) as claimed in claim 1, wherein the image processing unit (112) uses an Otsu thresholding to analyze the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image.
10. The system (100) as claimed in claim 1, wherein the image analysis unit (114) uses Structural Similarity Index (SSIM) to determine if the source image and the wrapped target image are similar.
11. The system (100) as claimed in claim 10, wherein the SSIM determines similarity between the source image and the wrapped target image in a scale of -1 to 1, wherein a score of 1 means the source image and the wrapped target image are similar and a score of -1 means the source image and the wrapped target image are different.
12. The system (100) as claimed in claim 1, wherein the plurality of predefined thresholds comprises:
SSIM WINDOW SIZE (25), SSIM VERIFICATION VALUE (0.925, 0.975), MORPH KERNEL SIZE (10), GAUSSIAN BLUR (3,3), RAW THRESHOLDS (205, 253), THRESHOLD RANGE (0.75, 3.0, 18.0, 50.0), and THRESHOLDS NON 0 (127.5, 160, 180, 240, 250).
13. The system (100) as claimed in claim 1, wherein the image compare unit (116) returns translation value, theta value (rotation in degrees), scaling value, contours, homography matrix values of the source image and the aligned wrapped target image as a json file.
14. The system (100) as claimed in claim 1, wherein the output unit (118) outputs the dissimilarities by highlighting the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102).
15. A method (700) for inspecting printing defects in packaging artworks, the method (700) comprising:
receiving, at an image pre-processing unit (108), a source image and a target image from a user device (102);
analysing, at the image pre-processing unit (108), the source image and the target image to identify a plurality of printing errors in the target image based on the source image;
aligning, at an image registration unit (110), the target image with the source image, wherein the aligning comprises:
identifying key points on the source image and the target image that are stable under a plurality of image transformations;
converting each of the identified key points into a binary descriptor;
identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image;
applying homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities; and
aligning the wrapped target image with the source image;
analyzing, at an image processing unit (112), the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image and to correct the identified plurality of image defects in the aligned wrapped target image;
determining, at an image analysis unit (114), if the source image and the aligned wrapped target image are similar based on a plurality of predefined thresholds and the plurality of printing errors identified by the image pre-processing unit (108);
comparing, at an image compare unit (116), a plurality of pixel values of the source image and a plurality of pixel values of the aligned wrapped target image to determine dissimilarities between the source image and the aligned wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit (116) determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates; and
outputting, at an output unit (118), the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102) to display dissimilarities between the source image and the aligned wrapped target image to the user (104), wherein the set of contour points or the bounding box coordinates displayed on the aligned wrapped target image represent the printing defects of the packaging artworks.
16. The method (700) as claimed in claim 15, wherein analyzing, by the image processing unit (112), the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image comprises using a bilateral filtering with adaptive thresholding.
17. The method (700) as claimed in claim 15, wherein analyzing, by the image processing unit (112), the aligned wrapped target image to identify a plurality of image defects in the aligned wrapped target image comprises an Otsu thresholding.
| # | Name | Date |
|---|---|---|
| 1 | 202241029638-STATEMENT OF UNDERTAKING (FORM 3) [23-05-2022(online)].pdf | 2022-05-23 |
| 2 | 202241029638-PROVISIONAL SPECIFICATION [23-05-2022(online)].pdf | 2022-05-23 |
| 3 | 202241029638-PROOF OF RIGHT [23-05-2022(online)].pdf | 2022-05-23 |
| 4 | 202241029638-POWER OF AUTHORITY [23-05-2022(online)].pdf | 2022-05-23 |
| 5 | 202241029638-FORM FOR SMALL ENTITY(FORM-28) [23-05-2022(online)].pdf | 2022-05-23 |
| 6 | 202241029638-FORM FOR SMALL ENTITY [23-05-2022(online)].pdf | 2022-05-23 |
| 7 | 202241029638-FORM 1 [23-05-2022(online)].pdf | 2022-05-23 |
| 8 | 202241029638-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [23-05-2022(online)].pdf | 2022-05-23 |
| 9 | 202241029638-DRAWINGS [23-05-2022(online)].pdf | 2022-05-23 |
| 10 | 202241029638-DECLARATION OF INVENTORSHIP (FORM 5) [23-05-2022(online)].pdf | 2022-05-23 |
| 11 | 202241029638-DRAWING [03-05-2023(online)].pdf | 2023-05-03 |
| 12 | 202241029638-CORRESPONDENCE-OTHERS [03-05-2023(online)].pdf | 2023-05-03 |
| 13 | 202241029638-COMPLETE SPECIFICATION [03-05-2023(online)].pdf | 2023-05-03 |
| 14 | 202241029638-PostDating-(12-06-2023)-(E-6-195-2023-CHE).pdf | 2023-06-12 |
| 15 | 202241029638-APPLICATIONFORPOSTDATING [12-06-2023(online)].pdf | 2023-06-12 |