Abstract: Disclosed is a system for detecting a defect in an object present in an image. Initially, a fragmentation module (214) fragments an input image into a plurality of regions. An elimination module (216) eliminates a first set of regions upon comparing a luminosity pertaining to each pixel with a predefined threshold luminosity. An assignment module (218) assigns a flag to a set of pixels. A data computation module (220) computes a set of differential values for the set of pixels based on the luminosity of a pixel with respect to the other pixels neighboring the pixel. An identification module (222) identifies one or more corner points amongst the set of pixels. A comparison module (224) compares vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates, in order to detect a defect.
[001] This patent application does not claim priority from any application.
TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to defect detection in an object present in an image. More particularly, the present subject matter relates to detecting the defect by using a fuzzy-logic-based corner detection technique.
BACKGROUND
[003] In the semiconductor industry, defect detection is a significant activity for ensuring the quality of any object. The defect detection is carried out by capturing an image of an object. In general, defect detection may be categorized into distinct detection techniques that may or may not require a reference image. It may be observed that the detection techniques that do not require the reference image perform defect detection by manually identifying specific repeating features pertaining to the object in the image. Furthermore, one or more regions in the image that vary from the specific repeating features may be detected as defects in the object. However, it becomes impossible to manually detect defects in an image with no specific repeating features.
[004] Conventional systems and methodologies may compare the image with the reference image by using an image processing technique. Further, it may be observed that each image is different from every other image based on one or more image parameters. Examples of the image parameters are brightness, sharpness, alignment, pixel density, and others. Because of the one or more image parameters, an accurate comparison of the object present in the image with the reference image becomes cumbersome. In addition to the above challenge, another challenge for the conventional systems is to detect the defects in the image by comparing with the reference image when the image and the reference image are aligned along different axes.
SUMMARY
[005] Before the present systems and methods are described, it is to be understood that this application is not limited to the particular systems and methodologies described, as
there can be multiple possible embodiments which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present application. This summary is provided to introduce concepts related to systems and methods for detecting a defect in an object present in an image and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[006] In one implementation, a method for detecting a defect in an object present in an image is disclosed. In order to detect a defect, initially, an input image pertaining to an object may be received. Further to receiving the image, the input image may be fragmented into a plurality of regions. In one aspect, a region may be distinct from another region, and it may be understood that the region and the other region are subsets of the plurality of regions. Subsequent to fragmentation, a first set of regions from the plurality of regions may be eliminated upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity. Upon eliminating, a flag may be assigned to a set of pixels present in each of a second set of regions of the plurality of regions. In one aspect, the flag may be assigned based on comparison of the luminosity associated to each pixel and a pre-defined threshold value. In another aspect, the second set of regions may indicate presence of a part of the object in each region of the second set of regions. Furthermore, a set of differential values for the set of pixels may be computed based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. After computing the set of differential values, one or more corner points amongst the set of pixels, present in each of the second set of regions, may be identified based on the set of differential values computed for each pixel of the set of pixels. Subsequent to identifying the one or more corner points, vector coordinates corresponding to the one or more corner points may be compared with reference vector coordinates, present in a reference image, corresponding to the vector coordinates. In one aspect, the vector coordinates and the reference vector coordinates may be compared to detect a defect in the second set of the regions associated to the input image. In another aspect, the aforementioned method for detecting a defect in an object present in an image may be performed by a processor using programmed instructions stored in a memory.
[007] In another implementation, a system for detecting a defect in an object present in an image is disclosed. The system may comprise a processor and a memory coupled to the
processor. The processor may execute a plurality of modules present in the memory. The plurality of modules may comprise a data receiving module, a fragmentation module, an elimination module, an assignment module, a data computation module, an identification module, and a comparison module. The data receiving module may receive an input image pertaining to the object. Further to receiving the image, the fragmentation module may fragment the input image into a plurality of regions. In one aspect, a region may be distinct from another region, and it may be understood that the region and the other region are subsets of the plurality of regions. Subsequent to fragmentation, the elimination module may eliminate a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity. Upon eliminating, the assignment module may assign a flag to a set of pixels present in each of a second set of regions of the plurality of regions. In one aspect, the flag may be assigned based on comparison of the luminosity associated to each pixel and a pre-defined threshold value. In another aspect, the second set of regions may indicate presence of a part of the object in each region of the second set of regions. Furthermore, the data computation module may compute a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. After computing the set of differential values, the identification module may identify one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels. Subsequent to identifying the one or more corner points, the comparison module may compare vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates. In one aspect, the vector coordinates and the reference vector coordinates may be compared to detect a defect in the second set of the regions associated to the input image.
[008] In yet another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for detecting a defect in an object present in an image is disclosed. The program may comprise a program code for receiving an input image pertaining to an object. The program may further comprise a program code for fragmenting the input image into a plurality of regions. In one aspect, a region is distinct from another region, and the region and the other region are subsets of the plurality of regions. The program may further comprise a program code for eliminating a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each
region with a predefined threshold luminosity. The program may further comprise a program code for assigning a flag to a set of pixels present in each of a second set of regions of the plurality of regions. In one aspect, the flag is assigned based on comparison of the luminosity associated to each pixel and a pre-defined threshold value. In another aspect, the second set of regions indicates presence of a part of the object in each region of the second set of regions. The program may further comprise a program code for computing a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. The program may further comprise a program code for identifying one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels. The program may further comprise a program code for comparing vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates. In one aspect, the vector coordinates and the reference vector coordinates are compared to detect a defect in the second set of the regions associated to the input image.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The foregoing detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the disclosure, example constructions of the disclosure are shown in the present document; however, the disclosure is not limited to the specific methods and apparatus disclosed in the document and the drawings.
[0010] The detailed description is given with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0011] Figure 1 illustrates a network implementation of a system for detecting a defect in an object present in an image, in accordance with an embodiment of the present subject matter.
[0012] Figure 2 illustrates the system, in accordance with an embodiment of the present subject matter.
[0013] Figures 3, 4, 5, and 6 illustrate an example, in accordance with an embodiment of the present subject matter.
[0014] Figure 7 illustrates a method for detecting a defect in an object present in an image, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0015] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "receiving," "fragmenting," "eliminating," "assigning," "computing," "identifying," and "comparing," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the exemplary systems and methods are now described. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
[0016] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments illustrated, but is to be accorded the widest scope consistent with the principles and features described herein.
[0017] The present invention discloses detecting a defect in an object present in an image. To do so, the present invention proposes a corner detection approach using fuzzy logic. It may be understood that the defect may be a foreign material defect, a topological defect, a missing pattern, a patterning defect, a bridge defect, a mouse bite defect, a large area defect, a line break, and others. In order to detect the defect in the object, an input image of the object may be received. Typically, examples of the object may include a wafer substrate, a Thin Film Transistor-Liquid Crystal Display (TFT-LCD), a Printed Circuit Board (PCB), and others. It may be noted that every input image is different from every other input image based upon one or more image parameters. Examples of the one or more image
parameters include, but are not limited to, luminosity, pixel density, color, and alignment. Further, because of the one or more image parameters, there is a likelihood that the input image may comprise noisy data. The noisy data indicates errors in the one or more image parameters. It may be understood that the errors in the input image are a result of an improper capturing of the input image. Further, the noisy data reduces an overall luminosity pertaining to the input image. In order to remove the noisy data, a pre-processing technique may be applied to the input image. In one aspect, the pre-processing technique may reduce a computational load pertaining to a system.
[0018] In order to perform faster computation, the input image may be fragmented into a plurality of regions. Further, a luminosity pertaining to each pixel present in the plurality of regions may be analyzed. It may be understood that the luminosity pertaining to a pixel corresponding to the object may be higher than that of another pixel pertaining to the background of the image. Subsequently, a first set of regions, of the plurality of regions, having no presence of the object may be eliminated from further processing. After eliminating, a set of pixels, present in a second set of regions of the plurality of regions, corresponding to the object may be identified. Furthermore, the set of pixels may be normalized by employing fuzzy logic. Subsequently, one or more corner points, of the set of pixels, may be identified. In one example, the pixel having maximum luminosity may be identified as a corner point. Subsequently, one or more defects may be detected in the image by comparing the one or more corner points with reference corner points corresponding to a reference image pertaining to the object.
[0019] In one embodiment, the one or more defects may be highlighted using a bounded box. Furthermore, an image moment pertaining to the bounded box may be computed. Examples of the image moment may include, but are not limited to, an area, a perimeter, a convex hull, a center of mass, and others. While aspects of the described system and method for detecting the defect in the object present in the image may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
[0020] Referring now to Figure 1, a network implementation 100 of a system 102 for detecting a defect in an object present in an image is disclosed. In order to detect a defect, initially, the system 102 may receive an input image pertaining to an object. Further to receiving the image, the system 102 may fragment the input image into a plurality of regions. In one aspect, a region may be distinct from another region, and it may be understood that the region and the other region are subsets of the plurality of regions. Subsequent to
fragmentation, the system 102 may eliminate a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity. Upon eliminating, the system 102 may assign a flag to a set of pixels present in each of a second set of regions of the plurality of regions. In one aspect, the flag may be assigned based on comparison of the luminosity associated to each pixel and a pre-defined threshold value. In another aspect, the second set of regions may indicate presence of a part of the object in each region of the second set of regions. Furthermore, the system 102 may compute a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. After computing, the system 102 may identify one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels. Subsequently, the system 102 may compare vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates. In one aspect, the vector coordinates and the reference vector coordinates may be compared to detect a defect in the second set of the regions associated to the input image.
[0021] Although the present disclosure is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, a cloud-based computing environment. It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2…104-N, collectively referred to as user 104 or stakeholders, hereinafter, or applications residing on the user devices 104. In one implementation, the system 102 may comprise the cloud-based computing environment in which a user may operate individual computing systems configured to execute remotely located applications. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.
[0022] In one implementation, the network 106 may be a wireless network, a wired network or a combination thereof. The network 106 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 106 may either be a dedicated network or a shared network. The shared network represents an association of the different types of
networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0023] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present subject matter. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0024] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through the client devices 104. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0025] The memory 206 may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[0026] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. In one implementation, the modules 208 may include a data receiving module 212, a fragmentation module 214, an elimination module 216, an assignment module 218, a data
computation module 220, an identification module 222, a comparison module 224, and other modules 226. The other modules 226 may include programs or coded instructions that supplement applications and functions of the system 102. The modules 208 described herein may be implemented as software modules that may be executed in the cloud-based computing environment of the system 102.
[0027] The data 210, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. The data 210 may also include a system database 228 and other data 230. The other data 230 may include data generated as a result of the execution of one or more modules in the other modules 226.
[0028] As there are various challenges observed in the existing art, the challenges necessitate building the system 102 for detecting a defect in an object present in an image. In order to detect a defect in an object present in an image, at first, a user may use the client device 104 to access the system 102 via the I/O interface 204. The user may register themselves using the I/O interface 204 in order to use the system 102. In one aspect, the user may access the I/O interface 204 of the system 102. The system 102 may employ the data receiving module 212, the fragmentation module 214, the elimination module 216, the assignment module 218, the data computation module 220, the identification module 222, and the comparison module 224. The detailed functioning of the modules is described below with the help of Figures 2, 3, 4, 5, and 6.
[0029] The present system 102 detects a defect in an object present in an image. To do so, initially, the data receiving module 212 may receive an input image and a reference image pertaining to an object. The reference image indicates a template for detecting the defect in the object pertaining to the input image. The input image, on the other hand, indicates an actual image of the object upon which defect detection is to be performed. It may be noted that every input image is different from every other input image based upon one or more image parameters. Examples of the one or more image parameters include, but are not limited to, luminosity, pixel density, colors, and alignment. It may be understood that the input image may comprise a plurality of pixels. It may also be understood that a luminosity pertaining to each pixel of the plurality of pixels may be different from the luminosity pertaining to another pixel of the plurality of pixels.
[0030] In an example, the data receiving module 212 may receive the input image ‘I’ of size ‘M * N’ comprising ‘L’ gray levels as an array of fuzzy singletons. In one aspect, each fuzzy singleton of the array of fuzzy singletons may comprise a degree of membership based on its luminosity level ‘x_mn’. Furthermore, the input image ‘I’ may be expressed as per below equation (1).
I = \bigcup_{m=1}^{M} \bigcup_{n=1}^{N} \mu_{mn} / x_{mn} ………… (1)
[0031] As per the equation (1), ‘μ_mn’ denotes a grade of membership indicating a brightness, a smoothness, and the like, of the pixel having the luminosity level ‘x_mn’.
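As a non-limiting illustration only, the fuzzy-singleton representation of equation (1) may be sketched in Python as below, assuming an 8-bit image (L = 256) and NumPy; the helper name to_fuzzy_singletons is illustrative and not part of the specification.

import numpy as np

def to_fuzzy_singletons(gray_image):
    # Map an M x N image with L = 256 gray levels to membership grades in [0, 1].
    # Each pixel becomes a fuzzy singleton whose grade of membership is its
    # luminosity normalized by the maximum gray level (L - 1).
    return gray_image.astype(np.float64) / 255.0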
[0032] Upon receipt of the input image, the fragmentation module 214 fragments the input image into a plurality of regions. In one aspect, a region indicates a non-overlapping image segment pertaining to the input image. In one aspect, the region and another region are subsets of the plurality of regions. In one embodiment, the fragmentation of the image enables the system 102 to perform parallel processing on the input image.
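A minimal sketch of the fragmentation step, assuming NumPy and the illustrative function name fragment, may read as below; each returned region is a non-overlapping tile that can be processed in parallel.

import numpy as np

def fragment(image, rows, cols):
    # Split the input image into rows x cols non-overlapping regions.
    regions = []
    for band in np.array_split(image, rows, axis=0):
        for region in np.array_split(band, cols, axis=1):
            regions.append(region)
    return regions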
[0033] Subsequent to the fragmentation, the elimination module 216 eliminates a first set of regions from the plurality of regions upon comparing the luminosity pertaining to each pixel with a predefined threshold luminosity. The first set of regions may be eliminated based on variance in the luminosity of each pixel. It may be understood that the variance may indicate the difference between the luminosity of one pixel present in a region and that of the other pixels neighboring the pixel. In one aspect, the first set of regions may be eliminated when the luminosity pertaining to each pixel present in each region is less than the predefined threshold luminosity. Each region of the first set of regions, as eliminated, is a region that does not contain at least a part of the object.
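The elimination step may be sketched as below, under the assumption that a region is background when none of its pixels reaches the predefined threshold luminosity; the function name eliminate_background is illustrative.

def eliminate_background(regions, threshold_luminosity):
    # Retain a region only if at least one of its pixels meets the predefined
    # threshold luminosity; regions whose pixels all fall below it contain no
    # part of the (brighter) object and are eliminated from further processing.
    return [region for region in regions if (region >= threshold_luminosity).any()]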
[0034] In order to illustrate the working of the aforementioned modules, consider an example pertaining to detection of defects in a wafer fabrication, in accordance with an embodiment of the present subject matter. Now referring to Figure 3, an input image 302 pertaining to the wafer fabrication is shown. In order to elucidate further, the data receiving module 212 receives the input image 302 to detect the defect present in the wafer fabrication. Further, the fragmentation module 214 fragments the input image 302 into the plurality of regions def1_05_02, def1_05_03, …def1_08_02. Subsequently, the elimination module 216 eliminates the first set of regions def1_05_05, def1_06_05, def1_07_01, def1_07_02, and def1_08_02.
[0035] Upon eliminating the first set of regions, now referring to Figure 2, the assignment module 218 assigns a flag to a set of pixels present in each of a second set of
regions of the plurality of regions. The second set of regions indicates a presence of at least a part of the object in each region of the second set of regions. In order to assign the flag, the assignment module 218 compares the luminosity associated to each pixel of the second set of regions with a pre-defined threshold value. When the luminosity associated to each pixel is greater than the pre-defined threshold value, the assignment module 218 then assigns the flag to the set of pixels.
[0036] In one example, the flag associated to a pixel having the luminosity greater than the pre-defined threshold value is further assigned a truth value of ‘1’. On the contrary, a pixel having the luminosity less than the pre-defined threshold value may be assigned the truth value of ‘0’. The truth value, i.e. ‘1’ or ‘0’, assigned to each pixel indicates normalization of the input image upon implementation of fuzzy logic. In one aspect, the pre-defined threshold value may be computed as per below pseudo code.
{
    μij = (1 / (n × n)) Σi Σj F(i,j);                      // mean
    σij = sqrt((1 / (n × n)) Σi Σj (F(i,j) − μij)²);       // standard deviation
    L = α × σmax;                                          // pre-defined threshold value
}
[0037] As per the aforementioned pseudo code, it may be noted that ‘L’ indicates the pre-defined threshold value corresponding to the luminosity associated with a pixel pertaining to the object. It may be understood that a mean ‘μij’ is computed by using the below equation (2).
\mu_{ij} = \frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} F(i,j) …………………... (2)
[0038] As per the equation (2), the mean ‘μij’ indicates an overall mean associated with the luminosity of the plurality of pixels. The mean ‘μij’ is computed from ‘F(i,j)’, indicating the normalized luminosity associated with each pixel of the second set of regions, and ‘n×n’, indicating the dimensions of the input image. Further to computation of the mean ‘μij’, a standard deviation ‘σij’ may be computed by using below equation (3).
\sigma_{ij} = \sqrt{\frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( F(i,j) - \mu_{ij} \right)^{2}} ………………… (3)
[0039] As per the equation (3), the standard deviation σij is a function of the F(i,j), indicating the normalized luminosity associated to each pixel, and the mean μij, indicating the overall mean of the luminosity of the pixels present in the second set of regions. In one aspect, the standard deviation σij may also be termed as the variance pertaining to each pixel present in the second set of regions. Furthermore, a maximum variance σmax pertaining to a pixel may be identified amongst the set of pixels. In one aspect, the pre-defined threshold value may be termed as a fuzzy membership function ‘T’ and computed as per below equation (4).
T = \alpha \times \sigma_{max} ……………… (4)
[0040] As per the equation (4), the pre-defined threshold value ‘T’ is computed based on an adaptive parameter ‘α’ and the maximum variance ‘σmax’. It may be understood that the adaptive parameter ‘α’ may vary as per the image parameters. In one example, the adaptive parameter ‘α’ may be assigned a constant value of ‘0.3’. In another example, when the input image has less luminosity variation, the value of the adaptive parameter ‘α’ may be reduced to identify each pixel of the set of pixels.
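A minimal Python sketch of the threshold computation and flag assignment, assuming T = α · σmax over n × n windows of the normalized luminosity F and NumPy; the names pre_defined_threshold and assign_flags are illustrative.

import numpy as np

def pre_defined_threshold(F, n=3, alpha=0.3):
    # T = alpha * sigma_max: sigma_max is the largest standard deviation of
    # the normalized luminosity F over n x n windows, and alpha is the
    # adaptive parameter (0.3 in the example above).
    h, w = F.shape
    sigma_max = 0.0
    for i in range(h - n + 1):
        for j in range(w - n + 1):
            sigma_max = max(sigma_max, F[i:i + n, j:j + n].std())
    return alpha * sigma_max

def assign_flags(F, T):
    # Truth value '1' where the luminosity exceeds the threshold, else '0'.
    return (F > T).astype(np.uint8)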
[0041] Subsequent to the assignment of the flag, the data computation module 220 computes a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. In one example, the set of differential values for each pixel may indicate the differential value of the pixel with respect to each of the other pixels present in the 8 directions around the pixel.
[0042] In one embodiment, the data computation module 220 may compute the set of differential values as per below pseudo code.
{
    F(i,j) = Fij(I(i,j));                                  // normalized luminosity upon applying the fuzzy logic
    for each direction k in {N, NW, NE, W, E, S, SW, SE}:
        D_k(i,j) = | F(i + di_k, j + dj_k) − F(i,j) |;     // differential towards the neighbour in direction k
}
[0043] As per the aforementioned pseudo code, it may be noted that ‘D’ indicates the set of differential values for the set of pixels present in the input image ‘I’. Further, the ‘Fij’ function normalizes each of the set of pixels by applying the fuzzy logic onto the image ‘I’. In one aspect, ‘i’ may indicate the x-coordinate and ‘j’ may indicate the y-coordinate pertaining to each pixel of the set of pixels. In order to normalize, each pixel, having a luminosity in a range of ‘1’ to ‘255’, is normalized to a value between ‘0’ and ‘1’. As per the above pseudo code, the ‘8’ directions pertaining to the pixel may be indicated as ‘N’, ‘NW’, ‘NE’, ‘W’, ‘E’, ‘S’, ‘SW’, and ‘SE’. It may be understood that ‘N’, ‘NW’, ‘NE’, ‘W’, ‘E’, ‘S’, ‘SW’, and ‘SE’ indicate North, North-West, North-East, West, East, South, South-West, and South-East, respectively.
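As a sketch only, and assuming the differential value is the absolute difference of normalized luminosities between a pixel and each of its eight neighbours, the computation may be written in Python as below; DIRECTIONS and differentials are illustrative names.

# Offsets of the eight neighbours: N, NW, NE, W, E, S, SW, SE.
DIRECTIONS = {
    "N": (-1, 0), "NW": (-1, -1), "NE": (-1, 1), "W": (0, -1),
    "E": (0, 1),  "S": (1, 0),   "SW": (1, -1), "SE": (1, 1),
}

def differentials(F, i, j):
    # Differential value of pixel (i, j) against each of its 8 neighbours,
    # computed on the normalized luminosity F.
    return {name: abs(F[i + di, j + dj] - F[i, j])
            for name, (di, dj) in DIRECTIONS.items()}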
[0044] Upon computing the set of differential values, the identification module 222 identifies one or more corner points amongst the set of pixels. The one or more corner points may be identified based on the set of differential values computed for each pixel of the set of pixels. The identification module 222 may then aggregate the set of differential values to compute an aggregated differential value corresponding to each pixel. The set of differential values may be aggregated to identify a corner point in a region of the second set of regions. In one aspect, the corner point may be identified when the aggregated differential value is greater than the pre-defined threshold value. In other words, the identification module 222 identifies the one or more corner points as per below mentioned pseudo code.
{
    cor_truth_k(i,j) = 1 if D_k(i,j) > T, else 0;          // Compute the truth value for all directions
    sum_truth(i,j) = Σk cor_truth_k(i,j);                  // defuzzification
    if sum_truth(i,j) >= 7: append (i,j) to f_coords;      // defuzzification
}
[0045] As per the above pseudo code, a function ‘f_coords’ indicates vector coordinates of the one or more corner points. Further, ‘cor_truth’ indicates the truth value assigned to each pixel having a differential value ‘D(i,j)’ greater than the pre-defined threshold value ‘T’. Further, a sum of truth values ‘sum_truth’ indicates a summation of the truth values pertaining to each pixel. In one aspect, the identification module 222 may identify the one or more corner points when the ‘sum_truth’ pertaining to a pixel is equal to or greater than ‘7’. On the contrary, when the ‘sum_truth’ is less than ‘7’, the identification module 222 may identify the pixel as a pixel corresponding to an edge or a smooth surface of the object. Upon identifying the one or more corner points, the identification module 222 appends the vector coordinates of the pixels pertaining to the corner points.
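Reusing the differentials helper sketched above, the identification step may be illustrated as below: a pixel is taken as a corner point when at least 7 of its 8 directional truth values hold, per the ‘>= 7’ rule of paragraph [0045].

def find_corners(F, T):
    # Defuzzification: a pixel is a corner point when at least 7 of its 8
    # directional differentials exceed the pre-defined threshold value T.
    f_coords = []
    h, w = F.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            sum_truth = sum(1 for d in differentials(F, i, j).values() if d > T)
            if sum_truth >= 7:
                f_coords.append((i, j))
    return f_coords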
[0046] In order to further elucidate the functioning of the aforementioned module, referring now to Figure 4, the identification module 222 identifies the one or more corner points 406 present in the input image 302. In one example, the identification module 222 identifies reference corner points 404 present in the reference image 402.
[0047] Subsequent to identifying the one or more corner points, referring again to Figure 2, the comparison module 224 compares vector coordinates corresponding to each corner point with reference vector coordinates, present in the reference image, corresponding to the vector coordinates. The vector coordinates and the reference vector coordinates may be
compared to detect the defect in the second set of the regions. In an example, the comparison module 224 may compare the vector coordinates with the reference vector coordinates as per below pseudo code.
{
    for each (x, y) in f_coords2:
        find the nearest (x′, y′) in f_coords1;                     // Euclidean distance
        cor = NCC(patch_input(x, y), patch_reference(x′, y′));      // n×n patches centered on the corner points
        if cor > 0.8 and dist((x, y), (x′, y′)) < th:
            append (x, y) to m_coords;
    d_coords = f_coords2 − m_coords;                                // set difference
}
[0048] As per the above pseudo code, ‘d_coords’ indicates a set of defect coordinates, ‘f_coords2’ indicates the vector coordinates corresponding to the one or more corner points, and ‘f_coords1’ indicates the reference vector coordinates present in the reference image. Furthermore, one or more patches of ‘n×n’ dimension, comprising the one or more corner points, corresponding to the input image and the reference image may be compared at a spatial coordinate ‘(x, y)’ to find a match. In addition, the correlation between the one or more patches centered on each corner point is computed using a Normalized Cross Correlation (NCC) and is indicated as ‘cor’. Subsequently, a distance between the one or more corner points present in the input image and the reference corner points present in the reference image may be computed using the Euclidean distance, and a maximum threshold distance may be indicated as ‘th’.
[0049] In one aspect, the correlation (‘cor’) between the vector coordinates and the reference vector coordinates may be computed as per below equation (5).
cor = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} \left( f(x,y) - \mu_f \right) \left( d(x,y) - \mu_d \right)}{\sqrt{\sum_{x=1}^{m} \sum_{y=1}^{n} \left( f(x,y) - \mu_f \right)^{2} \sum_{x=1}^{m} \sum_{y=1}^{n} \left( d(x,y) - \mu_d \right)^{2}}} …………… (5)
[0050] As per the equation (5), the correlation (‘cor’) is computed using one or more of ‘f(x,y)’, ‘d(x,y)’, ‘μf’, ‘μd’, ‘m’, and ‘n’. In one aspect, ‘f(x,y)’ indicates the reference vector coordinates, ‘d(x,y)’ indicates the vector coordinates corresponding to the input image, ‘μf’ indicates the overall mean associated with the reference image, and ‘μd’ indicates the overall mean associated with the input image. In another aspect, ‘m’ and ‘n’ indicate dimensions pertaining to each of the second set of regions.
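Equation (5) may be sketched in Python as below, as a direct transcription of the NCC formula; the function name ncc is illustrative.

import numpy as np

def ncc(f_patch, d_patch):
    # Normalized Cross Correlation of two equally sized patches: subtract the
    # patch means, then divide the cross term by the product of the root sums
    # of squares, yielding a value in [-1, 1].
    f0 = f_patch - f_patch.mean()
    d0 = d_patch - d_patch.mean()
    denom = np.sqrt((f0 ** 2).sum() * (d0 ** 2).sum())
    return float((f0 * d0).sum() / denom) if denom > 0 else 0.0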
[0051] In one embodiment, when the NCC is more than 0.8 (i.e., cor > 0.8) and the distance between the vector coordinate and the reference vector coordinate is less than the maximum threshold distance of 15 (i.e., distance < th, where th = 15), the comparison module 224 identifies a matched vector coordinate, indicated as ‘m_coords(x,y)’. Further, a set of the matched vector coordinates is stored in the system database 228. Subsequently, the comparison module 224 detects at least one defect vector coordinate by excluding the set of the matched vector coordinates from the vector coordinates corresponding to the one or more corner points present in the input image. In one aspect, the at least one defect vector coordinate indicates the defect in the object.
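Building on the ncc helper above, the matching and defect-detection logic of paragraphs [0048] to [0051] may be sketched as below; the patch size n = 11 is an assumption (the specification fixes only cor > 0.8 and th = 15), and border handling is kept to a simple shape check for brevity.

import numpy as np

def detect_defects(input_img, ref_img, f_coords2, f_coords1, n=11, th=15, min_cor=0.8):
    # A corner of the input image is 'matched' when some reference corner lies
    # within Euclidean distance th and the NCC of the n x n patches centered
    # on the two corners exceeds min_cor; unmatched corners are defects.
    half = n // 2
    m_coords = set()
    for (x, y) in f_coords2:
        for (xr, yr) in f_coords1:
            if np.hypot(x - xr, y - yr) >= th:
                continue
            p = input_img[x - half:x + half + 1, y - half:y + half + 1]
            q = ref_img[xr - half:xr + half + 1, yr - half:yr + half + 1]
            if p.shape == (n, n) and q.shape == (n, n) and ncc(p, q) > min_cor:
                m_coords.add((x, y))
                break
    return [c for c in f_coords2 if c not in m_coords]   # d_coords, the set difference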
[0052] Referring now to Figure 5, the comparison module 224 compares vector coordinates of the reference corner points 404 present in the reference image 402 with the vector coordinates of the one or more corner points 406 present in the input image 302. The comparison may be performed by using the NCC and the Euclidean distance. It may be noted that the comparison of the corner points is independent of variation in alignment of the input image and the reference image. Subsequently, the vector coordinates may be matched, and a plurality of lines 500 may be drawn over the set of the matched vector coordinates present in the reference image 402 and the input image 302. Further, a set difference between the set of the matched vector coordinates and the vector coordinates pertaining to the one or more corner points 406 is computed to detect the defect present in the object corresponding to the input image 302. It may be understood that a result of the set difference indicates the defect present in the input image 302.
[0053] Referring again to Figure 2, in one embodiment, one or more defects detected in the second set of the regions are highlighted using a bounded box. Furthermore, the data computation module 220 may compute an image moment pertaining to the bounded box based on dimensions corresponding to the bounded box. The dimensions corresponding to the bounded box may be computed based on the vector coordinates of the one or more defects. Examples of the image moment may include, but are not limited to, an area, a perimeter, a convex hull, and a center of mass.
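For illustration, the bounded box and the simple image moments of paragraph [0053] may be sketched as below, assuming an axis-aligned box over the defect coordinates; bounding_box_moments is an illustrative name.

def bounding_box_moments(d_coords):
    # Axis-aligned bounding box over the defect coordinates, together with
    # simple image moments: its area, perimeter, and center of mass.
    xs = [x for x, _ in d_coords]
    ys = [y for _, y in d_coords]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return {
        "area": width * height,
        "perimeter": 2 * (width + height),
        "center_of_mass": (sum(xs) / len(xs), sum(ys) / len(ys)),
    }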
[0054] Referring now to Figure 6, the at least one defect vector coordinate 602 present in the object pertaining to the input image 302 is shown in accordance with the embodiment of the present subject matter. In one example, the at least one defect vector coordinate 602 is highlighted by the bounded box 604. Further, the image moment corresponding to the bounded box 604 is computed based upon the dimensions of the bounded box 604. In the aforementioned example, the area and the perimeter corresponding to the bounded box 604 are 4231 and 824.23, respectively.
[0055] Referring now to Figure 7, a method 700 for detecting a defect in an object present in an image is shown, in accordance with an embodiment of the present subject matter. The method 700 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 700 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0056] The order in which the method 700 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 700 or alternate methods. Additionally, individual blocks may be deleted from the method 700 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the
embodiments described below, the method 700 may be considered to be implemented as described in the system 102.
[0057] At block 702, an input image pertaining to an object may be received. In one implementation, the input image pertaining to an object may be received by a data receiving module 212.
[0058] At block 704, the input image may be fragmented into a plurality of regions. In one aspect, a region may be distinct from another region, and the region and the other region may be subsets of the plurality of regions. In one implementation, the input image may be fragmented into the plurality of regions by a fragmentation module 214.
[0059] At block 706, a first set of regions may be eliminated from the plurality of regions. In one aspect, the first set of regions may be eliminated from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity. In one implementation, the first set of regions may be eliminated from the plurality of regions by the elimination module 216.
[0060] At block 708, a flag may be assigned to a set of pixels present in each of a second set of regions of the plurality of regions. In one aspect, the flag may be assigned based on comparison of luminosity associated to each pixel and a pre-defined threshold value. In another aspect, the second set of regions indicates presence of a part of the object in each region of the second set of regions. In one implementation, the flag may be assigned by an assignment module 218.
[0061] At block 710, a set of differential values for the set of pixels may be computed based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel. In one implementation, the set of differential values for the set of pixels may be computed by a data computation module 220.
[0062] At block 712, one or more corner points may be identified amongst the set of pixels. In one aspect, the one or more corner points may be identified amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels. In one implementation, the one or more corner points may be identified by an identification module 222.
[0063] At block 714, vector coordinates may be compared to detect a defect in the second set of the regions. In one aspect, the vector coordinates corresponding to the one or more corner points may be compared with reference vector coordinates, present in a reference image, corresponding to the vector coordinates. Further, the vector coordinates and the
reference vector coordinates may be compared to detect a defect in the second set of the regions associated to the input image. In one implementation, the vector coordinates may be compared with the reference vector coordinates by the comparison module 224.
[0064] Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
[0065] Some embodiments enable a system and a method to facilitate parallel processing.
[0066] Some embodiments enable a system and a method to detect one or more corner points, present in an image, independent of image parameters.
[0067] Some embodiments enable a system and a method to accurately detect defects in the image by using fuzzy logic.
[0068] Some embodiments enable a system and a method to reduce computational load by eliminating homogeneous fragments.
[0069] Some embodiments enable a system and a method to match corner points independent of variation in alignment of the input image.
[0070] Although implementations for methods and systems for detecting a defect in an object present in an image have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for detecting a defect in an object present in an image.
WE CLAIM:
1. A method for detecting a defect in an object present in an image, the method comprising:
receiving, by a processor (202), an input image pertaining to an object;
fragmenting, by the processor (202), the input image into a plurality of regions, wherein a region is distinct from another region, and wherein the region and the other region are subsets of the plurality of regions;
eliminating, by the processor (202), a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity;
assigning, by the processor (202), a flag to a set of pixels present in each of a second set of regions of the plurality of regions, wherein the flag is assigned based on comparison of luminosity associated to each pixel and a pre-defined threshold value, and wherein the second set of regions indicates presence of a part of the object in each region of the second set of regions;
computing, by the processor (202), a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel;
identifying, by the processor (202), one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels; and
comparing, by the processor (202), vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates, wherein the vector coordinates and the reference vector coordinates are compared to detect a defect in the second set of the regions associated to the input image.
2. The method of claim 1, wherein the first set of regions are eliminated when the luminosity pertaining to each pixel present in each region is less than the predefined threshold luminosity.
3. The method of claim 1, wherein the flag is assigned when the luminosity associated to each pixel is greater than the pre-defined threshold value.
4. The method of claim 1, wherein the corner point is identified when a sum of differential values pertaining to the set of pixels neighboring the pixel is greater than the pre-defined threshold value.
5. The method of claim 1, wherein the vector coordinates corresponding to the one or more corner points are compared with the reference vector coordinates based on Normalized Cross Correlation (NCC) and Euclidean distance.
6. The method of claim 1, wherein one or more defects detected in the second set of the regions are highlighted using a bounded box.
7. The method of claim 6 further comprises computing an image moment of the bounded box, wherein the image moment includes at least an area, a perimeter, a convex hull and a center of mass of the bounded box.
8. The method of claim 1 further comprises computing a variance pertaining to each pixel present in each region, wherein the variance indicates a difference between the luminosity of each pixel with respect to the other pixels neighboring the pixel.
9. The method of claim 1, wherein the pre-defined threshold value is defined based on a maximum variance pertaining to a pixel present in the second set of regions and a constant term associated with image parameters associated with the input image.
10. A system (102) for detecting a defect in an object present in an image, the system (102) comprising:
a processor (202); and
a memory (206) coupled to the processor, wherein the processor (202) is capable of executing a plurality of modules (208) stored in the memory (206), and wherein the plurality of modules (208) comprises:
a data receiving module (212) for receiving an input image pertaining to an object;
a fragmentation module (214) for fragmenting the input image into a plurality of regions, wherein a region is distinct from another region, and wherein the region and the other region are subsets of the plurality of regions;
an elimination module (216) for eliminating a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity;
an assignment module (218) for assigning a flag to a set of pixels present in each of a second set of regions of the plurality of regions, wherein the flag is assigned based on comparison of luminosity associated to each pixel and a pre-defined threshold value, and wherein the second set of regions indicates presence of a part of the object in each region of the second set of regions;
a data computation module (220) for computing a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel;
an identification module (222) for identifying one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels; and
a comparison module (224) for comparing vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image, corresponding to the vector coordinates, wherein the vector coordinates and the reference vector coordinates are compared to detect a defect in the second set of the regions associated to the input image.
11. The system (102) of claim 10, wherein the first set of regions are eliminated when the luminosity pertaining to each pixel present in each region is less than the predefined threshold luminosity.
12. The system (102) of claim 10, wherein the flag is assigned when the luminosity associated to each pixel is greater than the pre-defined threshold value.
13. The system (102) of claim 10, wherein the corner point is identified when a sum of differential values pertaining to the set of pixels neighboring the pixel is greater than the pre-defined threshold value.
14. The system (102) of claim 10, wherein the vector coordinates corresponding to the one or more corner points are compared with the reference vector coordinates based on Normalized Cross Correlation (NCC) and Euclidean distance.
15. The system (102) of claim 10, wherein one or more defects detected in the second set of the regions are highlighted using a bounded box.
16. The system (102) of claim 15 is further configured to compute an image moment of the bounded box, wherein the image moment includes at least an area, a perimeter, a convex hull and a center of mass of the bounded box.
17. The system (102) of claim 10 is further configured to compute a variance pertaining to each pixel present in each region, wherein the variance indicates a difference between the luminosity of each pixel with respect to the other pixels neighboring the pixel.
18. The system (102) of claim 10, wherein the pre-defined threshold value is defined based on a maximum variance pertaining to a pixel present in the second set of regions and a constant term associated with image parameters associated with the input image.
19. A non-transitory computer readable medium embodying a program executable in a computing device for detecting a defect in an object present in an image, the program comprising:
a program code for receiving an input image pertaining to an object;
a program code for fragmenting the input image into a plurality of regions, wherein a region is distinct from another region, and wherein the region and the other region are subsets of the plurality of regions;
a program code for eliminating a first set of regions from the plurality of regions upon comparing a luminosity pertaining to each pixel present in each region with a predefined threshold luminosity;
a program code for assigning a flag to a set of pixels present in each of a second set of regions of the plurality of regions, wherein the flag is assigned based on comparison of luminosity associated to each pixel and a pre-defined threshold value, and wherein the second
set of regions indicates presence of a part of the object in each region of the second set of regions;
a program code for computing a set of differential values for the set of pixels based on the luminosity of a pixel, of the set of pixels, with respect to the other pixels neighboring the pixel;
a program code for identifying one or more corner points amongst the set of pixels, present in each of the second set of regions, based on the set of differential values computed for each pixel of the set of pixels; and
a program code for comparing vector coordinates corresponding to the one or more corner points with reference vector coordinates, present in a reference image corresponding to the vector coordinates, wherein the vector coordinates and the reference vector coordinates are compared to detect a defect in the second set of the regions associated to the input image.