Abstract: METHODS AND SYSTEM FOR IMAGE COMPARISON OF ARTWORKS. The present invention discloses a system (100) and a method (400) for image comparison of artworks. The system (100) includes a user device (102) monitored by a user (104) and an image comparison unit (106). The image comparison unit (106) is configured to find dissimilarities between images of an artwork. The image comparison unit (106) receives a source image and a target image and aligns a warped version of the target image with the source image. The image comparison unit (106) then analyzes the source image and the warped target image to determine if the source image and the warped target image are similar based on a plurality of predefined thresholds. Further, the image comparison unit (106) compares a plurality of pixel values of the source image and a plurality of pixel values of the warped target image to determine dissimilarities between the source image and the warped target image, and displays the dissimilarities between the source image and the warped target image to the user (104) on the user device (102). FIG. 1
DESC:METHODS AND SYSTEM FOR IMAGE COMPARISON OF ARTWORKS
FIELD OF INVENTION
The present invention relates to a system and a method for image comparison of artworks. More specifically, the present invention relates to a system and a method for image comparison of artworks to identify dissimilarities in images of an artwork.
BACKGROUND OF INVENTION
Investment in packaging is increasing day by day across various industries, especially pharmaceuticals, food, and beverages. The sale of any packaged good launched in the market by any industry depends on the content about the packaged good given on its packaging. The content present on the packaging is referred to as an artwork related to the packaged good, wherein the artwork includes images, the brand name, text, composition, a nutritional value table, etc. related to the packaged good. Hence, an artwork management system plays a crucial role in these industries.
Artwork management is a process wherein, based on a customer's brief and the customer's source language, the whole process of packaging and labelling is managed. However, during the development of these artworks, many versions of an artwork are formed, and the versions have some or many dissimilarities between them. Such dissimilarities may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged good. This is especially true in the case of medicines, as customers need to buy the right composition and quantity of medicine. Hence, for making a decision with respect to a specific artwork and for artwork management, an immaculate comparison of artworks is desired.
However, most known artwork management systems perform comparison of artworks either manually or by scanning the artworks. In such a scenario, the artworks can only be compared once the packaged products have been packed and are ready to be shipped. This wastes a lot of money and time, as if the artworks have dissimilarities there is no way of correcting them. Industry officials either have to ship the packaged products as is or repeat the entire packaging process, thereby incurring monetary losses. Therefore, there is a need for artwork management systems for the packaging and labelling industry which can provide an efficient and qualitative image comparison approach to artwork management. Further, there is a need for an artwork management system that compares artworks to find dissimilarities well before packaged goods are packaged, without any human intervention.
OBJECT OF INVENTION
The object of the present invention is to provide a system and a method that provide an efficient and qualitative image comparison of artworks. More specifically, the object of the present invention is to provide a system and a method that compare artworks to find dissimilarities between images of an artwork well before packaged goods are packaged, without any human intervention.
SUMMARY
The present application discloses a system for image comparison of artworks. The present application discloses that the system includes a user device monitored by a user, and an image comparison unit. The image comparison unit includes an image receiving unit, an image processing unit, an image registration unit, an image analysis unit, an image compare unit, and an output unit. The image receiving unit is configured to receive a source image and a target image from a user device. The source image and the target image are two images of a same artwork. The image processing unit is configured to process the received source image and the target image into a PNG (portable network graphics) image based on a predetermined dots per inch (DPI) value. Further, the image processing unit is configured to convert the PNG image into a gray scale image.
The image registration unit is configured to align the target image with the source image. The image registration unit identifies key points on the source image and the target image that are stable under a plurality of image transformations, and converts each of the identified key points into a binary descriptor. Further, the image registration unit identifies alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. Thereafter, the image registration unit applies homography to a plurality of pixels of the target image to generate a warped target image when the source image and the target image do not have alignment similarities, and aligns the warped target image with the source image.
The image analysis unit is configured to analyze the source image and the warped target image to determine if the source image and the warped target image are similar based on a plurality of predefined thresholds.
The image compare unit is configured to compare a plurality of pixel values of the source image and a plurality of pixel values of the warped target image to determine dissimilarities between the source image and the warped target image if the source image and the warped target image are not similar. The image compare unit determines the dissimilarities between the source image and the warped target image as a set of contour points or bounding box coordinates.
The output unit is configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device to display the dissimilarities between the source image and the warped target image to the user. The output unit outputs the dissimilarities by highlighting the set of contour points or the bounding box coordinates in pixel coordinates at the user device.
The present disclosure further discloses a method for image comparison of artworks. The method includes receiving, at an image receiving unit, a source image and a target image from a user device. The method further includes processing, at an image processing unit, the received source image and the target image into a PNG (portable network graphics) image based on a predetermined dots per inch (DPI) value.
The method further includes aligning, at an image registration unit, the target image with the source image. The aligning comprises identifying key points on the source image and the target image that are stable under a plurality of image transformations, and converting each of the identified key points into a binary descriptor. Further, the aligning comprises identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. The aligning also comprises applying homography to a plurality of pixels of the target image to generate a warped target image when the source image and the target image do not have alignment similarities, and aligning the warped target image with the source image.
Further, the method includes analyzing, at an image analysis unit, the source image and the warped target image to determine if the source image and the warped target image are similar based on a plurality of predefined thresholds. Also, the method includes comparing, at an image compare unit, a plurality of pixel values of the source image and a plurality of pixel values of the warped target image to determine dissimilarities between the source image and the warped target image if the source image and the warped target image are not similar, wherein the dissimilarities between the source image and the warped target image are determined as a set of contour points or bounding box coordinates. The method further includes outputting, at an output unit, the set of contour points or the bounding box coordinates in pixel coordinates at the user device to display the dissimilarities between the source image and the warped target image to the user.
BRIEF DESCRIPTION OF DRAWINGS
The novel features and characteristics of the disclosure are set forth in the description. The disclosure itself, however, as well as a preferred mode of use, further objectives, and advantages thereof, will best be understood by reference to the following description of an illustrative embodiment when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawings wherein like reference numerals represent like elements and in which:
FIG. 1 illustrates a system 100 for image comparison of artworks, in accordance with an embodiment of the present disclosure.
FIG. 2 illustrates an exemplary source image 202 and target image 204 of an artwork 200 inputted by the user 104, in accordance with an embodiment of the present disclosure.
FIG. 3 illustrates an exemplary display screen 300 of the user device 102 illustrating dissimilarities between a source image 302 and a warped target image 304 highlighted by the output unit 118, in accordance with an embodiment of the present disclosure.
FIG. 4 illustrates a method 400 for image comparison of artworks, in accordance with an embodiment of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the assemblies, structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
The terms "comprises", "comprising", or any other variations thereof are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other sub-systems, elements, structures, components, additional sub-systems, additional elements, additional structures, or additional components. Appearances of the phrases "in an embodiment", "in another embodiment", and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this invention belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying figures.
The present invention focuses on providing a system and a method for image comparison of artworks for goods or products produced by diverse industries, such as consumer packaged goods, pharmaceuticals, etc. The sale of any packaged good launched in the market by any industry depends on the content about the packaged good given on its packaging. The content present on the packaging is referred to as an artwork related to the packaged good, wherein the artwork includes images, the brand name, text, composition, a nutritional value table, etc. related to the packaged good. Therefore, an artwork plays a crucial role in the sale of a packaged good. However, during the development of these artworks, many versions of an artwork are formed, and the versions have some or many dissimilarities between them. Such dissimilarities may cause confusion and a lack of trust in the minds of customers as to whether they are buying the correct packaged good.
However, most known artwork management systems perform comparison of artworks either manually or by scanning the artworks. In such a scenario, the artworks can only be compared once the packaged products have been packed and are ready to be shipped. This wastes a lot of money and time, as if the artworks have dissimilarities there is no way of correcting them. Therefore, the present disclosure discloses a system which can provide an efficient and qualitative image comparison of artworks. Further, the present disclosure discloses a system that compares artworks to find dissimilarities well before packaged goods are packaged, without any human intervention.
FIG. 1 illustrates a system 100 for image comparison of artworks, in accordance with an embodiment of the present disclosure. The system 100 includes a user device 102 monitored by a user 104, and an image comparison unit 106. The user device 102 relates to hardware components such as a keyboard, a mouse, etc., which accept data from the user 104, and also to a hardware component such as a display screen of a desktop, laptop, tablet, etc., which displays data to the user 104. The user device 102 is configured to allow the user 104 to input a scanned source image and a target image of an artwork related to a packaged good. The source image and the target image are images of a same artwork. The user 104 may be, but is not limited to, any employee of an industry monitoring the printing of artworks, a person at a printing unit who may have received a printing order for artworks, etc.
FIG. 2 illustrates an exemplary source image 202 and target image 204 of an artwork 200 inputted by the user 104, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 2, the source image 202 and the target image 204 include information such as the name of the medicine, the composition and quantity of the medicine, dosage and precaution instructions, storage guidelines, distributor information, a bar code, etc. Also, the source image 202 and the target image 204 are images of a same artwork.
The user device 102 is further configured to send the source image and the target image to the image comparison unit 106. The image comparison unit 106 is a hardware component which is capable of processing any data or information it receives. In certain embodiments, the image comparison unit 106 may be part of any regularly used device, such as a laptop, desktop, tablet, mobile device, etc. The image comparison unit 106 includes an image receiving unit 108, an image processing unit 110, an image registration unit 112, an image analysis unit 114, an image compare unit 116, and an output unit 118.
The image receiving unit 108 is configured to receive the source image and the target image from the user device 102. In an embodiment, the image receiving unit 108 may receive the source image and the target image in the form of a PDF image. In another embodiment, the image receiving unit 108 may receive the source image and the target image in a standard image format such as .png or .jpeg. In yet another embodiment, the image receiving unit 108 may receive the source image and the target image in any known format. After receiving the source image and the target image, the image receiving unit 108 sends them to the image processing unit 110.
The image processing unit 110 is configured to receive the source image and the target image from the image receiving unit 108 and to process them to suppress any undesired distortions or noise effects and to enhance the image features of the source image and the target image. The image processing unit 110 is configured to process the received source image and target image into PNG (portable network graphics) images based on a predetermined dots per inch (DPI) value. A DPI value is a measure of the number of dots that may be placed in a line across one inch. A high DPI value indicates a sharper image, and a lower DPI value indicates a less sharp image.
In an embodiment, the DPI value may be computed dynamically based on the PDF size of the source image and the target image, for which metadata is given in the form of an XML file. A dynamic DPI value is advantageous because packages of different sizes are used for packaging different goods. For example, wine bottles may be packaged in a wine carton which may be a 50 x 30 inch carton, whereas a medicine tube may be packaged in a simple pharmaceutical carton of A4 size (8.27 x 11.69 inches). In such a scenario, if the same DPI value is used for the wine carton and the pharmaceutical carton, an image of the wine carton may turn out to be 18000 x 10800 pixels, which is very large. Therefore, the use of a dynamic DPI value helps to overcome such a scenario: a high DPI value is used for smaller packages (such as pharmaceutical cartons) and a low DPI value is used for bigger packages (such as wine cartons). The DPI value gradually decreases in steps of 30 from small to large cartons, thereby keeping the processed source image and target image in a manageable range of about 3000 x 4000 pixels. A dynamic DPI value also helps in keeping text characters of similar size (approximately 25 pixels high) across various sizes of packages, because on a larger carton the font size may be larger while on smaller cartons the font size may be smaller.
Further, the image processing unit 110 checks whether the DPI value of the source image and the DPI value of the target image are the same in order to calculate the dynamic DPI value. If the DPI values of the source image and the target image differ, the image processing unit 110 chooses whichever of the source image or the target image has the smaller area in square inches to calculate the dynamic DPI, thereby providing better accuracy. After processing the received source image and the target image into PNG (portable network graphics) images based on the predetermined dots per inch (DPI) value, the image processing unit 110 converts the source image and the target image into grayscale images before sending them to the image registration unit 112.
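By way of a non-limiting illustration, the dynamic DPI selection described above may be sketched as follows. The starting value of 300 DPI and the pixel budget of 3000 x 4000 pixels are assumptions made for the example; only the step size of 30 is taken from the description.

```python
# Illustrative sketch of dynamic DPI selection (assumed parameters:
# start at 300 DPI, step down in decrements of 30, and target a pixel
# budget of roughly 3000 x 4000 = 12 million pixels).
def dynamic_dpi(width_in, height_in, max_dpi=300, min_dpi=30, step=30,
                target_px=3000 * 4000):
    dpi = max_dpi
    # Step the DPI down until the rasterized page fits the pixel budget.
    while dpi - step >= min_dpi and (width_in * dpi) * (height_in * dpi) > target_px:
        dpi -= step
    return dpi

# A small A4 pharmaceutical carton keeps a high DPI, while a large
# 50 x 30 inch wine carton steps down to a low DPI.
print(dynamic_dpi(8.27, 11.69))  # 300
print(dynamic_dpi(50, 30))       # 60
```

With this scheme both cartons rasterize to a comparable number of pixels, which is what keeps text characters at a similar pixel height across package sizes.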
The image registration unit 112 is configured to receive the source image and the target image from the image processing unit 110 and to align the target image with the source image. Any two images can be compared perfectly only if they are aligned with each other. Image alignment, or image registration, is the process of warping images so that there is a perfect line-up between two images. Therefore, the image registration unit 112 uses an image alignment technique to align the target image with the source image. In an image alignment technique, a sparse set of features is detected in one image and matched with the features in the other image. The image alignment technique also identifies interesting stable points, known as key points or feature points, in an image. The key points or feature points are similar to the points that a human notices when he/she sees the image for the first time.
In an embodiment of the present disclosure, the image registration unit 112 uses an Oriented FAST and Rotated BRIEF (ORB) algorithm to detect the key points or feature points in the source image and the target image. ORB is a combination of two algorithms: FAST (Features from Accelerated Segments Test) and BRIEF (Binary Robust Independent Elementary Features). FAST identifies key points on the source image and the target image that are stable under image transformations such as translation (shift), scale (increase/decrease in size), and rotation, and gives the (x, y) coordinates of such points. BRIEF takes the identified key points and turns them into binary descriptors, or binary feature vectors. The key points found by the FAST algorithm and the binary descriptors created by the BRIEF algorithm together represent an object of an image. A threshold of a maximum of 30,000 key points is defined for ORB to control the number of key points extracted. The advantages of ORB are that it is very fast, accurate, license-free, and gives a high recognition rate. In another embodiment of the present disclosure, the image registration unit 112 may use any known technique for detecting the key points or feature points in the source image and the target image.
After the key points have been identified for the source image and the target image and each detected key point has been converted into a binary descriptor, the image registration unit 112 identifies alignment similarities between the source image and the target image by matching the binary descriptors of the source image and the binary descriptors of the target image. In an embodiment of the present disclosure, the image registration unit 112 may use the Hamming distance as a measure of similarity between a binary descriptor of the source image and a binary descriptor of the target image. The image registration unit 112 may then sort the matches by goodness of match and retain them according to a threshold (15%).
In another embodiment of the present disclosure, the image registration unit 112 may use homography with the Random Sample Consensus (RANSAC) method to find similarities between the binary descriptors of the source image and the binary descriptors of the target image. A homography may be computed when there are 4 or more corresponding key points in the source image and the target image. Basically, a homography is a 3 x 3 matrix. Let us assume that (x1, y1) are the coordinates of a key point in the source image and (x2, y2) are the coordinates of the same key point in the target image. Then, the homography (H) for the coordinates is represented by equation 1:
        | h00  h01  h02 |
    H = | h10  h11  h12 |        (1)
        | h20  h21  h22 |
Once an accurate homography is calculated, the homography transformation is applied to all pixels in one image to map it to the other image. Therefore, the homography (H) is applied to all the pixels of the target image to obtain a warped target image, as represented by equation 2:
    | x1 |       | x2 |
    | y1 | = H * | y2 |          (2)
    | 1  |       | 1  |
The image registration unit 112 then aligns the warped target image with the source image. RANSAC is a robust estimation technique. RANSAC has the advantage that it produces the right result even in the presence of a large number of bad matches or dissimilarities between the source image and the target image, by removing outlier features of both images. In another embodiment of the present disclosure, the image registration unit 112 may use any known method to find similarities between the binary descriptors of the source image and the binary descriptors of the target image.
After the warped target image has been aligned with the source image by the image registration unit 112, the image analysis unit 114 is configured to analyze the source image and the warped target image to determine if they are similar based on a plurality of predefined thresholds. In an embodiment of the present disclosure, the image analysis unit 114 uses the Structural Similarity Index (SSIM) to find similarities and/or dissimilarities between the warped target image and the source image. SSIM is a perceptual metric used to measure differences or dissimilarities between two similar images. SSIM scores the analysis of images on a scale of -1 to 1, where a score of 1 means that the images are very similar and a score of -1 means that the images are very different. Hence, SSIM fits well for artwork comparison. In another embodiment of the present disclosure, the image analysis unit 114 may use a mean square error (MSE) technique to find similarities and/or dissimilarities between the warped target image and the source image. In yet another embodiment of the present disclosure, the image analysis unit 114 may use any known technique for finding similarities and/or dissimilarities between the warped target image and the source image.
The image analysis unit 114 uses the following predefined thresholds to determine similarities and/or dissimilarities between the warped target image and the source image:
SSIM WINDOW SIZE = 25 – This value is based on the character size observed across hundreds of samples.
SSIM VERIFICATION VALUE = [0.925, 0.975] – These values are used for verifying each individual deviation or dissimilarity, for normal and high sensitivity, respectively.
MORPH KERNEL SIZE = 10 – This value is used for morphological closing, so that nearby deviations or dissimilarities combine into one region.
GAUSSIAN BLUR = (3, 3) – These are the values to which both the source image and the warped target image are blurred for smoothing.
RAW THRESHOLDS = (205, 253) – These threshold values are defined to choose between the thresholds for normal and highly sensitive differences in the SSIM output, and to identify which pixels are potential deviations or dissimilarities.
THRESHOLD RANGE = (0.75, 3.0, 18.0, 50.0) - This threshold range is defined to choose values of threshold based on a percentage of pixel differences or dissimilarities (extreme low range, low range, mid-range, high range).
THRESHOLDS NON 0 = (127.5, 160, 180, 240, 250) – These threshold values are used for artworks with rotation, translation, or scaling as the major difference or dissimilarity. For non-zero rotation angles, there may be interpolation artifacts due to rotation; these threshold values help to ignore such artifacts.
127.5 – High-range threshold – This value relates to pack inserts and digital vs. print proof comparisons (since a print proof has highly flattened images with high noise).
160 – Mid-range threshold – This value relates to significant transformation between two versions, which may lead to registration issues.
180 – Lower mid-range threshold – This value relates to minor transformation leading to registration issues.
240 – Low-range threshold – This value relates to same orientation and no transformation, with differences or dissimilarities.
250 – Extreme low-range threshold – This value relates to no transformation with very minute differences or dissimilarities.
If, based on the above-mentioned predefined thresholds, the image analysis unit 114 determines that the source image and the warped target image are not similar, the image compare unit 116 compares a plurality of pixel values of the source image with a plurality of pixel values of the warped target image to determine dissimilarities between them. The image compare unit 116 determines the dissimilarities between the source image and the warped target image as a set of contour points or bounding box coordinates. Contour points provide more accurate results compared to bounding box coordinates, as a bounding box may include non-difference regions around an individual difference. The dynamically and automatically chosen contour points are the actual differences. Contour points are simply a curve joining all the continuous points (along a boundary) having the same colour or intensity. Each individual contour is an array of (x, y) coordinates of the boundary points of the object.
The image analysis unit 114 further allows the user 104 to find extremely minute differences by choosing a high sensitivity option. The image compare unit 116 returns the translation value, theta value (rotation in degrees), scaling value, contours, and homography matrix values as a JSON file to the output unit 118.
The output unit 118 is configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device 102 to display the dissimilarities between the source image and the warped target image to the user 104. The output unit 118 outputs the dissimilarities by highlighting the set of contour points or the bounding box coordinates in pixel coordinates at the user device 102. FIG. 3 illustrates an exemplary display screen 300 of the user device 102 illustrating dissimilarities between a source image 302 and a warped target image 304 highlighted by the output unit 118, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 3, the dissimilarities between the source image 302 and the warped target image 304 are displayed as highlighted contour points or bounding box coordinates and are marked by the numbers 1-17 in the source image as well as the warped target image. The same number in the source image and the warped target image highlights dissimilarities in the same area of the image.
FIG. 4 illustrates a method 400 for image comparison of artworks, in accordance with an embodiment of the present disclosure. At step 402, the method includes receiving, at an image receiving unit 108, a source image and a target image from a user device. At step 404, the method includes processing, at an image processing unit 110, the received source image and the target image into a PNG (portable network graphics) image based on a predetermined dots per inch (DPI) value.
At step 406, the method includes aligning, at an image registration unit 112, the target image with the source image. In order to align the target image with the source image, the method includes identifying key points on the source image and the target image that are stable under a plurality of image transformations, and converting each of the identified key points into a binary descriptor. The method further includes identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image. Also, the method includes applying homography to a plurality of pixels of the target image to generate a warped target image when the source image and the target image do not have alignment similarities, and aligning the warped target image with the source image.
At step 408, the method includes analyzing, at an image analysis unit 114, the source image and the warped target image to determine if the source image and the warped target image are similar based on a plurality of predefined thresholds. At step 410, the method includes comparing, at an image compare unit 116, a plurality of pixel values of the source image and a plurality of pixel values of the warped target image to determine dissimilarities between the source image and the warped target image if the source image and the warped target image are not similar. The dissimilarities between the source image and the warped target image are determined as a set of contour points or bounding box coordinates. At step 412, the method includes outputting, at an output unit 118, the set of contour points or the bounding box coordinates in pixel coordinates at the user device to display the dissimilarities between the source image and the warped target image to the user 104.
The system and method for image comparison of artworks disclosed in the present disclosure have numerous advantages. The system and method disclosed in the present disclosure are used in the packaging and labelling industry to provide an efficient and qualitative image comparison approach to artwork management. Further, the system and method disclosed in the present disclosure compare artworks to find dissimilarities well before packaged goods are packaged, without any human intervention.
Further, the disclosed system and method compare digital artworks with high accuracy, including artworks rotated at any right angle (0, 90, 180, and 270 degrees). The disclosed system and method detect dissimilarities between images with close to 100 percent accuracy, and no thresholds or choice of sensitivity are required from the user. Also, the disclosed system dynamically ensures that only relevant dissimilarities are shown. Further, the use of the dynamic DPI value and the predefined thresholds works for every kind of image, thereby providing a niche solution with accurate results for artwork management in any kind of industry.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments.
It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
Throughout this specification, the word “comprise”, or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
Any discussion of documents, acts, materials, devices, articles and the like that has been included in this specification is solely for the purpose of providing a context for the disclosure.
It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
The numerical values mentioned for the various physical parameters, dimensions or quantities are only approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
While considerable emphasis has been placed herein on the particular features of this disclosure, it will be appreciated that various modifications can be made, and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other modifications in the nature of the disclosure or the preferred embodiments will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
CLAIMS
I/We Claim:
1. A system (100) for image comparison of artworks, the system (100) comprising:
an image receiving unit (108) configured to receive a source image and a target image from a user device (102);
an image registration unit (112) configured to align the target image with the source image, wherein the image registration unit (112) is configured to:
identify key points on the source image and the target image that are stable under a plurality of image transformations;
convert each of the identified key points into a binary descriptor;
identify alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image;
apply homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities; and
align the wrapped target image with the source image; and
an image analysis unit (114) configured to analyze the source image and the wrapped target image to determine if the source image and the wrapped target image are similar based on a plurality of predefined thresholds;
an image compare unit (116) configured to compare a plurality of pixel values of the source image and a plurality of pixel values of the wrapped target image to determine dissimilarities between the source image and the wrapped target image if the source image and the wrapped target image are not similar, wherein the image compare unit (116) determines dissimilarities between the source image and the wrapped target image as a set of contour points or bounding box coordinates;
an output unit (118) configured to output the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102) to display dissimilarities between the source image and the wrapped target image to the user (104).
2. The system (100) as claimed in claim 1, wherein the system (100) comprises an image processing unit (110) configured to process the received source image and the target image into a PNG (portable network graphics) image based on a predetermined dots per inch (DPI) value before the target image is aligned with the source image by the image registration unit (112).
3. The system (100) as claimed in claim 2, wherein the image processing unit (110) converts the PNG image into a grayscale image.
4. The system (100) as claimed in claim 1, wherein the source image and the target image are two images of a same artwork.
5. The system (100) as claimed in claim 1, wherein the source image and the target image are received in a PDF format.
6. The system (100) as claimed in claim 1, wherein the image registration unit (112) uses Oriented FAST and Rotated BRIEF (ORB) technique to identify the key points and to convert the identified key points into binary descriptors.
7. The system (100) as claimed in claim 1, wherein the plurality of image transformations comprises translation (shift), scale (increase/decrease in size), rotation, reflection, and dilation.
8. The system (100) as claimed in claim 1, wherein the image registration unit (112) applies homography to the plurality of pixels of the target image using Random Sample Consensus (RANSAC), and wherein the plurality of pixels comprises all the pixels of the target image.
9. The system (100) as claimed in claim 1, wherein the image analysis unit (114) uses Structural Similarity Index (SSIM) to determine if the source image and the wrapped target image are similar.
10. The system (100) as claimed in claim 9, wherein the SSIM determines similarity between the source image and the wrapped target image on a scale of -1 to 1, wherein a score of 1 means the source image and the wrapped target image are similar and a score of -1 means the source image and the wrapped target image are different.
11. The system (100) as claimed in claim 1, wherein the plurality of predefined thresholds comprises:
SSIM WINDOW SIZE (25), SSIM VERIFICATION VALUE (0.925, 0.975), MORPH KERNEL SIZE (10), GAUSSIAN BLUR (3,3), RAW THRESHOLDS (205, 253), THRESHOLD RANGE (0.75, 3.0, 18.0, 50.0), and THRESHOLDS NON 0 (127.5, 160, 180, 240, 250).
12. The system (100) as claimed in claim 1, wherein the image analysis unit (114) uses mean square error (MSE) to determine if the source image and the wrapped target image are similar.
13. The system (100) as claimed in claim 1, wherein the image compare unit (116) returns a translation value, a theta value (rotation in degrees), a scaling value, contours, and homography matrix values of the source image and the wrapped target image as a JSON file.
14. The system (100) as claimed in claim 1, wherein the output unit (118) outputs the dissimilarities by highlighting the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102).
15. A method (400) for image comparison of artworks, the method (400) comprising:
receiving, at an image receiving unit (108), a source image and a target image from a user device;
aligning, at an image registration unit (112), the target image with the source image, wherein the aligning comprises:
identifying key points on the source image and the target image that are stable under a plurality of image transformations;
converting each of the identified key points into a binary descriptor;
identifying alignment similarities between the source image and the target image by matching binary descriptors of the source image and binary descriptors of the target image;
applying homography to a plurality of pixels of the target image to generate a wrapped target image when the source image and the target image do not have alignment similarities; and
aligning the wrapped target image with the source image; and
analyzing, at an image analysis unit (114), the source image and the wrapped target image to determine if the source image and the wrapped target image are similar based on a plurality of predefined thresholds;
comparing, at an image compare unit (116), a plurality of pixel values of the source image and a plurality of pixel values of the wrapped target image to determine dissimilarities between the source image and the wrapped target image if the source image and the wrapped target image are not similar, wherein dissimilarities between the source image and the wrapped target image are determined as a set of contour points or bounding box coordinates;
outputting, at an output unit (118), the set of contour points or the bounding box coordinates in pixel coordinates at the user device (102) to display dissimilarities between the source image and the wrapped target image to the user (104).
16. The method as claimed in claim 15, wherein the method comprises processing, at an image processing unit (110), the received source image and the target image into a PNG (portable network graphics) image based on a predetermined dots per inch (DPI) value before aligning the target image with the source image.
17. The method as claimed in claim 16, wherein the method comprises converting the PNG image into a grayscale image.
| # | Name | Date |
|---|---|---|
| 1 | 202241021679-STATEMENT OF UNDERTAKING (FORM 3) [11-04-2022(online)].pdf | 2022-04-11 |
| 2 | 202241021679-PROVISIONAL SPECIFICATION [11-04-2022(online)].pdf | 2022-04-11 |
| 3 | 202241021679-PROOF OF RIGHT [11-04-2022(online)].pdf | 2022-04-11 |
| 4 | 202241021679-POWER OF AUTHORITY [11-04-2022(online)].pdf | 2022-04-11 |
| 5 | 202241021679-FORM FOR SMALL ENTITY(FORM-28) [11-04-2022(online)].pdf | 2022-04-11 |
| 6 | 202241021679-FORM FOR SMALL ENTITY [11-04-2022(online)].pdf | 2022-04-11 |
| 7 | 202241021679-FORM 1 [11-04-2022(online)].pdf | 2022-04-11 |
| 8 | 202241021679-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [11-04-2022(online)].pdf | 2022-04-11 |
| 9 | 202241021679-EVIDENCE FOR REGISTRATION UNDER SSI [11-04-2022(online)].pdf | 2022-04-11 |
| 10 | 202241021679-DRAWINGS [11-04-2022(online)].pdf | 2022-04-11 |
| 11 | 202241021679-DECLARATION OF INVENTORSHIP (FORM 5) [11-04-2022(online)].pdf | 2022-04-11 |
| 12 | 202241021679-DRAWING [10-03-2023(online)].pdf | 2023-03-10 |
| 13 | 202241021679-COMPLETE SPECIFICATION [10-03-2023(online)].pdf | 2023-03-10 |
| 14 | 202241021679-ENDORSEMENT BY INVENTORS [15-03-2023(online)].pdf | 2023-03-15 |
| 15 | 202241021679-PostDating-(12-06-2023)-(E-6-198-2023-CHE).pdf | 2023-06-12 |
| 16 | 202241021679-APPLICATIONFORPOSTDATING [12-06-2023(online)].pdf | 2023-06-12 |