Abstract:
Since a conventional inspection device needs to perform, for each pixel, calculation and determination of the presence/absence of an abnormality, and needs to position all the pixels with high accuracy, such a device involves increased introduction cost and increased calculation time on a computer. The present invention is provided with: an analysis unit (12a) that calculates a parameter representing a property of data of an object including no abnormality by performing, on the data of the object including no abnormality, dimensional compression for reducing dimensions of the data, and that performs dimensional compression on data of an object to be inspected by using the parameter; a recovery unit (14a) that generates recovered data obtained by recovering the data of the object to be inspected having undergone the dimensional compression by the analysis unit (12a); a determination unit (14a) that outputs a determination result indicating whether or not the object to be inspected has an abnormality on the basis of the magnitude of difference between the data of the object to be inspected and the recovered data; and an output unit (15) that outputs the determination result outputted from the determination unit (14a).
Notices, Deadlines & Correspondence
7-3, Marunouchi 2-chome, Chiyoda-ku, Tokyo
100-8310
Inventors
1. MIYAZAWA, Kazuyuki
c/o Mitsubishi Electric Corporation, 7-3, Marunouchi 2-chome, Chiyoda-ku, Tokyo
100-8310
Specification
Description
Title of the Invention
INSPECTION DEVICE AND INSPECTION METHOD
Technical Field
[0001]
The present invention relates to an inspection device and an inspection method for inspecting, from obtained data of an object, whether the object has a defect (or an abnormality), such as a lack, an erroneous arrangement, or a fault.
Background Art
[0002]
Capturing an object with a camera and a machine automatically inspecting, from the obtained image data, whether the object is defective is a technique important for automation or labor-saving of visual inspection or appearance inspection performed in a manufacturing process of an industrial product, for example.
[0003]
Conventionally, for inspection as to whether an object has a defect, such as a lack, an erroneous arrangement, or a fault, for example, an inspection device of Patent Literature 1 stores multiple image data sets obtained by capturing an object, calculates, from the multiple image data sets, for each pixel having the same coordinates, a range of the brightness value in which it is determined that the object has no defect, and sets it as a criterion for inspection as to whether the object has a defect. The inspection device determines, for each pixel having the same coordinates, whether the brightness value of an image data set obtained by capturing an object to be inspected is within the set range of the brightness value in which it is determined that the object has no defect, thereby inspecting whether the object has a defect, such as a lack, an
erroneous arrangement, or a fault.
Citation List
Patent Literature
[0004]
Patent Literature 1: Japanese Patent Application Publication No. 2013-32995
Summary of Invention
Technical Problem
[0005]
However, the above conventional inspection device calculates, for each pixel, the range of the brightness value in which it is determined that the object has no defect, and determines and inspects, for each pixel, whether the object has a defect, such as a lack, an erroneous arrangement, or a fault. Thus, it is required that the positional relationship between the object and the camera in capturing be always constant, and highly accurate alignment be performed for all the pixels. With the introduction of the inspection device, it is required to introduce a jig for fixing the camera and object, a positioning device, and the like to highly accurately perform alignment for all the pixels. This leads to increase in introduction cost and increase in computer calculation time.
[0006]
The present invention has been made to solve the problems as described above, and is intended to provide an inspection device and an inspection method for inspecting whether an object has a defect, such as a lack, an erroneous arrangement, or a fault, while easing the requirements that the object and a camera be securely fixed and highly accurate alignment be performed for each pixel of image data obtained by capturing the object, compared to the conventional inspection device.
Solution to Problem
[0007]
An inspection device according to the present invention includes: an analyzing unit to calculate a parameter representing a feature of data of an object having no defect by performing dimensionality reduction for reducing a dimensionality of data on the data of the object having no defect, and perform dimensionality reduction on data of an object to be inspected by using the parameter; a restoring unit to generate restored data obtained by restoring the data of the object to be inspected subjected to the dimensionality reduction by the analyzing unit; a determination unit to output a determination result indicating whether the object to be inspected is defective, on a basis of a magnitude of a difference between the data of the object to be inspected and the restored data; and an output unit to output the determination result output by the determination unit.
Advantageous Effects of Invention
[0008]
With the present invention, by calculating a parameter representing a feature of data of an object having no defect by performing dimensionality reduction for reducing a dimensionality of data on the data of the object having no defect, performing dimensionality reduction on data of an object to be inspected by using the parameter, generating restored data obtained by restoring the data of the object to be inspected subjected to the dimensionality reduction, and outputting a determination result indicating whether the object to be inspected is defective, on a basis of a magnitude of a difference between the data of the object to be inspected and the restored data, it is possible to inspect whether an object has a defect, such as a lack, an erroneous arrangement, or a fault while easing the requirements that the object and a camera be securely fixed
and highly accurate alignment be performed for each pixel of image data obtained by capturing the object, compared to the conventional inspection device.
Brief Description of Drawings
[0009]
FIG. 1 is a functional block diagram of an inspection system including an inspection device according to a first embodiment of the present invention.
FIG. 2 is a hardware configuration diagram of the inspection device according to the first embodiment of the present invention.
FIG. 3 is a flowchart illustrating an operation in a learning mode of the inspection device according to the first embodiment of the present invention.
FIG. 4 is a conceptual diagram of a principal component analysis.
FIG. 5 is a part of a flowchart illustrating an operation in an inspection mode of the inspection device according to the first embodiment of the present invention.
FIG. 6 is another part of the flowchart illustrating the operation in the inspection mode of the inspection device according to the first embodiment of the present invention.
FIG. 7 illustrates an example where a printed board is taken as an object and it is inspected whether there is a defect on the board.
FIG. 8 illustrates an example where part of the board, which is an object to be inspected, is lacking.
FIG. 9 illustrates a result of threshold processing.
FIG. 10 illustrates an example of a two-dimensional mask for defining a region to be inspected.
FIG. 11 illustrates an example of contents that an input/output unit commands an input/output device to display when the input/output device includes a display as its component.
FIG. 12 illustrates another example of contents that the input/output unit commands the input/output device to display when the input/output device includes a display as its component.
FIG. 13 illustrates still another example of contents that the input/output unit commands the input/output device to display when the input/output device includes a display as its component.
FIG. 14 is a functional block diagram of an inspection system including an inspection device according to a second embodiment of the present invention.
FIG. 15 is a flowchart illustrating an operation in a learning mode of the inspection device according to the second embodiment of the present invention.
FIG. 16 is a diagram in which a neuron is modeled as a multi-input single-output node.
FIG. 17 illustrates an example of a sandglass-type neural network.
FIG. 18 is an example illustrating a process in which the number of hidden layers of an autoencoder is changed.
FIG. 19 is a part of a flowchart illustrating an operation in an inspection mode of the inspection device according to the second embodiment of the present invention.
FIG. 20 is another part of the flowchart illustrating the operation in the inspection mode of the inspection device according to the second embodiment of the present invention.
FIG. 21 is a functional block diagram of an inspection system including an inspection device according to a third embodiment of the present invention.
FIG. 22 is a part of a flowchart illustrating an operation in an inspection mode of the inspection device according to the third embodiment of the present invention.
FIG. 23 is another part of the flowchart illustrating the operation in the inspection mode of the inspection device according to the third embodiment of the present invention.
Description of Embodiments
[0010]
First embodiment
FIG. 1 is a functional block diagram of an inspection system including an inspection device 1 according to a first embodiment of the present invention.
[0011]
The inspection system includes the inspection device 1 that inspects an object 3, a camera 2 that captures the object 3, and an input/output device 4 that receives inspection content and outputs an inspection result. The inspection device 1 receives, as input data, image data of the object 3 captured by the camera 2, analyzes it, and sends a result thereof to the input/output device 4.
[0012]
The inspection device 1 includes a control unit (or controller) 10 that controls units, an input unit (or receiver) 11 into which image data is input, an analyzing unit (or analyzer) 12a that analyzes the image data input through the input unit 11, a storage unit (or memory) 13a that records a result of the analysis, a determination unit (or determiner) 14a that outputs, from a result of the analysis and the obtained image data, a determination result indicating whether the object 3 is defective (or abnormal), and an input/output unit 15 that outputs the determination result output by the determination unit 14a.
[0013]
The control unit 10 sends and receives commands to and from the input unit 11, analyzing unit 12a, storage unit 13a, determination unit 14a, and input/output unit 15, thereby
controlling the units.
[0014]
The input unit 11 receives image data of the object 3 from the camera 2. The image data is an example of data of the object 3, and it is not limited to image data and may be data representing a waveform, a solid, or the like. The first embodiment assumes that the input image data is digital data, but it may be analog data.
[0015]
The analyzing unit 12a selectively executes two different operation modes in accordance with a command sent from the control unit 10. Here, the two operation modes are a learning mode and an inspection mode. In the learning mode, the analyzing unit 12a uses one or more image data sets of a normal object 3 having no defect (or abnormality): it performs, on these image data sets, dimensionality reduction for reducing the dimensionality (or number of dimensions) of the image data, and thereby calculates a parameter representing a feature of the data sets of the normal object 3 having no defect, thereby learning what the normal state is. Thus, in the learning mode, the inspection device 1 does not perform inspection for defects of an object 3. The inspection for defects is performed in the inspection mode, which is executed after completion of the learning mode. Here, even when the number of the image data sets of the normal object 3 having no defect is two or more, a single parameter representing a feature of the data sets of the normal object 3 is calculated. In the learning mode, it is required that the object 3 be in a normal state with no defect. If objects 3 are objects of the same type, it is possible to obtain the image data sets from multiple different objects. Hereinafter, image data sets obtained by capturing an object 3 having no defect will be referred to as normal image data sets.
[0016]
In the inspection mode, the analyzing unit 12a performs, on an image data set of an object 3 to be inspected, the same dimensionality reduction as that performed in the calculation of the parameter representing the feature of the data sets of the normal object 3 learned in the learning mode.
[0017]
The storage unit 13a stores a learning result, reads the learning result, and sends the learning result to the analyzing unit 12a in accordance with commands from the control unit 10. The learning result read here is a learning result corresponding to the dimensionality reduction method used in the learning mode.
[0018]
The determination unit 14a restores the image data set of the object 3 to be inspected subjected to the dimensionality reduction, by the same method as that used for the dimensionality reduction, and outputs a determination result indicating whether the object 3 to be inspected is defective, to the input/output unit 15, on the basis of a magnitude of a difference between the restored data set, which is the restored image data set, and the image data set of the object 3 to be inspected. The determination unit 14a is an example of a unit serving as both a restoring unit (or restorer) and a determination unit (or determiner).
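As an illustrative sketch only (not part of the disclosure), the restore-and-determine step described above could look as follows in Python with numpy; the names U, mean, and threshold, and the use of the maximum per-pixel difference, are assumptions for illustration:

```python
import numpy as np

# Hypothetical sketch of the restoring/determination step: reduce the data
# set, restore it from the learned subspace, and judge "defective" from the
# magnitude of the difference. U (principal components), mean, and threshold
# are illustrative assumptions, not identifiers from the patent.
def determine(x, U, mean, threshold):
    """Return True if the data set x is judged defective."""
    y = U.T @ (x - mean)             # dimensionality reduction to d dimensions
    x_restored = U @ y + mean        # restored data set
    diff = np.abs(x - x_restored)    # per-pixel magnitude of the difference
    return bool(np.max(diff) > threshold)

# Toy usage: a 2-dimensional subspace of 4-dimensional "image" vectors.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
mean = np.zeros(4)
normal = np.array([0.5, 0.2, 0.0, 0.0])     # lies in the learned subspace
defective = np.array([0.5, 0.2, 0.9, 0.0])  # deviates from it
print(determine(normal, U, mean, 0.1))      # False
print(determine(defective, U, mean, 0.1))   # True
```

A data set that lies in the learned subspace is restored almost exactly, so its difference stays small; a deviation from the subspace survives as a large difference and is flagged.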
[0019]
The input/output unit 15 outputs information indicating the progress of the learning or the like to the outside through the input/output device 4 in accordance with commands from the control unit 10. Here, it is assumed that an operator sees the input/output device 4 outside; however, this is not mandatory, and it is also possible to output a signal to an external control device or the like without
intervention of an operator. The input/output unit 15 also outputs the determination result received from the determination unit 14a to the outside through the input/output device 4 in accordance with a command from the control unit 10. Here, it is assumed that an operator sees the input/output device 4 outside; however, this is not mandatory, and it is also possible to output a signal to an external control device or the like without intervention of an operator. The input/output unit 15 is an example of an output unit, and in the first embodiment, it includes an input unit in addition to an output unit.
[0020]
The camera 2 obtains image data of an object 3 by capturing the object 3 and recording it as image data. The camera 2 sends the image data of the object 3 to the inspection device 1. The camera 2 is an example, and any device capable of obtaining data of an object 3 may be used instead.
[0021]
The input/output device 4 receives inspection content of the inspection device 1, and outputs an inspection result output by the inspection device 1. The input/output device 4 may be constituted by, for example, a display, a speaker, a keyboard, a mouse, and the like. The display is an example of a display unit.
[0022]
FIG. 2 is a hardware configuration diagram of the inspection device 1 according to the first embodiment of the present invention. A configuration of the inspection device 1 according to the first embodiment of the present invention will be described with reference to FIG. 2.
[0023]
In the first embodiment, the inspection device 1 is formed by a computer. The computer forming the inspection device 1 includes hardware including a bus 104, an
input/output interface 100 that sends and receives data, a memory 102, a storage medium 103 that stores programs, learning data, and the like, and a processor 101 that reads and executes programs in the storage medium 103 loaded in the memory 102. Hereinafter, the input/output interface 100 will be referred to as the input/output IF 100.
[0024]
The bus 104 is a signal path that electrically connects the devices and through which data is communicated.
[0025]
The input/output IF 100 sends and receives data. For example, when receiving a start signal and a setting signal to the inspection device 1 from the input/output device 4, the input/output IF 100 sends them to the control unit 10. Also, for example, when receiving a command signal from the control unit 10 to the analyzing unit 12a, the input/output IF 100 sends the command signal to the analyzing unit 12a. The input unit 11 and input/output unit 15 are implemented by the input/output IF 100.
[0026]
The memory 102 functions as a work area into which programs stored in the storage medium 103 are loaded. The memory 102 is, for example, a random access memory (RAM).
[0027]
The storage medium 103 stores programs implementing the functions of the learning mode and the inspection mode. The storage medium 103 also stores learning data and the like. The storage medium 103 is, for example, a read only memory (ROM), a flash memory, or a hard disk drive (HDD). The storage medium 103 also stores an operating system (OS). The storage unit 13a is implemented by the storage medium 103.
[0028]
The processor 101 is connected to the other devices
through the bus 104, and controls the other devices and the units. The processor 101 reads and executes programs in the storage medium 103 loaded in the memory 102. The processor 101 loads at least part of the OS stored in the storage medium 103 into the memory 102, and executes the programs while executing the OS. The processor 101 is an integrated circuit (IC) that performs processing. The processor 101 is, for example, a central processing unit (CPU). The control unit 10, analyzing unit 12a, and determination unit 14a are implemented by the processor 101 reading and executing programs in the storage medium 103 loaded in the memory 102.
[0029]
Information indicating results of the respective devices, data, signal values, the values of variables, and the like are stored in the memory 102, storage medium 103, or a register or a cache memory in the processor 101.
[0030]
The memory 102 and storage medium 103 may be a single device instead of separate devices.
[0031]
The programs may be stored in a portable recording medium, such as a magnetic disc, a flexible disc, an optical disc, a compact disc, or a digital versatile disc (DVD).
[0032]
Next, operations of the inspection device 1 according to the first embodiment of the present invention will be described.
[0033]
FIG. 3 is a flowchart illustrating an operation in the learning mode of the inspection device 1 according to the first embodiment of the present invention. The operation in the learning mode of the inspection device 1 will be described below with reference to FIG. 3.
[0034]
In step S10, the control unit 10 receives a start signal and a setting signal from the input/output device 4 through the input/output unit 15. It then gives a command to the input unit 11 in accordance with the setting signal. The input unit 11 receives a normal image data set of an object 3 from the camera 2. Here, the timing to receive a normal image data set may be predetermined to be, for example, 30 times a second, or may be determined in accordance with a command from the control unit 10. The control unit 10 gives a command to start processing of the learning mode to the analyzing unit 12a. The analyzing unit 12a switches to the learning mode by reading, from the memory 102, the program corresponding to the learning mode in the storage medium 103 loaded in the memory 102 and executing it in the processor 101. The analyzing unit 12a receives, from the input unit 11, the normal image data set of the object 3 captured by the camera 2.
[0035]
In step S11, the analyzing unit 12a determines whether to further receive a normal image data set or to end receiving normal image data sets. Here, whether to end receiving normal image data sets may be determined by the analyzing unit 12a, or in accordance with a command from the control unit 10. In a case where it is determined by the analyzing unit 12a, receiving normal image data sets may be ended when the number of the received normal image data sets reaches a predesignated number, for example. The predesignated number is, for example, 100, 1000, or the like. In a case where it is determined in accordance with a command from the control unit 10, the control unit 10 may receive a command to end receiving normal image data sets from the input/output device 4 through the input/output unit 15, and send it to the analyzing unit 12a, for example.
[0036]
In step S12, the analyzing unit 12a returns to the reception of a normal image data set, or proceeds to the next step, in accordance with the determination result in step S11. When it is determined, in the determination as to whether to end receiving a normal image data set, that it is required to further receive a normal image data set, step S12 results in No, and the process returns to step S10. When it is determined that receiving a normal image data set is to be ended, step S12 results in Yes, and the process proceeds to the next step.
[0037]
In step S13, the analyzing unit 12a performs dimensionality reduction by using the one or more received normal image data sets. Here, dimensionality reduction refers to converting high dimensional data, such as image data or three-dimensional solid data, into low dimensional data. The analyzing unit 12a performs learning by using the normal image data sets to obtain a data conversion method optimal for the normal image data sets, in the learning mode.
[0038]
Known dimensionality reduction methods include principal component analysis, linear discriminant analysis, canonical correlation analysis, discrete cosine transform, random projection, an autoencoder with a neural network, and the like. Among these, principal component analysis is one of the most commonly used linear dimensionality reduction methods. The following describes a case where a principal component analysis is used.
[0039]
The principal component analysis is a method for obtaining, from a large number of normal image data sets for learning distributed in a multidimensional space, a lower dimensional space representing a feature of the distribution.
The lower dimensional space is referred to as a subspace. When the pixel values of the large number of normal image data sets received in step S10 are simply plotted in a space, they are often distributed in a cluster in a significantly lower dimensional subspace.
[0040]
FIG. 4 is a conceptual diagram of the principal component analysis. As illustrated in FIG. 4, for example, normal image data sets distributed in a three-dimensional space are shown as a group contained in a plane. In the principal component analysis, the two-dimensional plane as in FIG. 4 representing a feature of the group is obtained. Here, a dimension has no physical meaning, and refers to an element contained in data. Dimensionality (or number of dimensions) refers to the number of elements contained in data. Here, one pixel is equal to one dimension, and for example, an image data set having 10 pixels vertically and 10 pixels horizontally has 10 x 10 = 100 pixels, and thus is a "100-dimensional data set." Thus, it is represented as a point in a 100-dimensional space. FIG. 4 is a schematic diagram illustrating the principal component analysis so that the principal component analysis is visualized. Each ellipse in FIG. 4 corresponds to one of the image data sets. In FIG. 4, since the image data sets are three-dimensionally represented, they are three-dimensional image data sets. This example shows a situation where the image data sets actually distributed in the three-dimensional space are represented in the two-dimensional subspace, i.e., reduced in dimensionality.
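The pixel-to-dimension correspondence above can be made concrete with a short sketch (numpy assumed; the pixel values are placeholders): flattening a 10 × 10 pixel image yields the "100-dimensional data set" treated as a single point in a 100-dimensional space.

```python
import numpy as np

# A 10 x 10 pixel image is one point in a 100-dimensional space:
# flattening the pixel grid gives the "100-dimensional data set".
img = np.arange(100).reshape(10, 10)  # placeholder pixel values
x = img.reshape(-1)                   # one pixel = one dimension
print(x.shape)                        # (100,)
```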
[0041]
The total number of normal image data sets received in step S10 will be denoted by N, and the total number of pixels of each normal image data set will be denoted by K. For example, the value of N is 100, 1000, or the like, and
the value of K is 1024 when the size of the normal image data sets is 32 × 32 pixels, 409600 when the size is 640 × 640 pixels, or the like. When a number of a normal image data set is denoted by n, the normal image data set xn can be represented as a vector by equation (1), where T indicates a transposed vector.
[0042]
xn = (xn1, xn2, ..., xnK)^T   (1)
[0043]
Then, a mean vector M and a variance-covariance matrix S thereof are obtained according to equations (2) and (3).
[0044]
M = (1/N) Σ(n=1 to N) xn   (2)
[0045]
S = (1/N) Σ(n=1 to N) (xn − M)(xn − M)^T   (3)
[0046]
In the principal component analysis, for the distribution of the normal image data sets in the space, a first principal component that is a straight line passing through the point that is the mean and extending in a direction with the largest variability is obtained. Next, a straight line of a second principal component being orthogonal to the first principal component, passing through the mean, and extending in a direction with the second
largest variability is obtained. In this manner, principal components are obtained in sequence. Obtaining the directions with large variability passing through the mean is equivalent to the problem of obtaining the eigenvectors of the variance-covariance matrix S.
[0047]
Specifically, eigenvalues λj and eigenvectors uj satisfying equation (4) are obtained using the calculated variance-covariance matrix S, where j is a dimension number.
[0048]
Suj = λjuj   (4)
[0049]
Selecting the d eigenvectors uj corresponding to the first to d-th largest eigenvalues λj yields d principal components (u1, u2, ..., ud). A larger eigenvalue λj indicates a more important principal component in the principal component analysis. By extracting principal components, parameters in the principal component analysis are sorted in order of importance. Here d ≤ K, and in general, d is considerably smaller than K. uj is referred to as the j-th principal component. The principal components are mutually orthogonal, and may be referred to as bases. The values of the d principal components (u1, u2, ..., ud) are an example of a parameter representing a feature of the data sets of the object 3.
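Equations (2) to (4) and the selection of the d principal components can be sketched as follows (an illustrative numpy sketch; the random matrix X stands in for the flattened normal image data sets):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 100, 16, 4
X = rng.random((N, K))          # x_1, ..., x_N as rows, stand-in data

M = X.mean(axis=0)              # mean vector, equation (2)
D = X - M
S = D.T @ D / N                 # variance-covariance matrix, equation (3)

# Eigenvalues and eigenvectors of S, equation (4); eigh applies because S is
# symmetric. Sorting into descending eigenvalue order puts the most important
# principal components first; the first d columns are (u_1, ..., u_d).
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]
Ud = U[:, :d]
print(Ud.shape)                 # (16, 4)
```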
[0050]
The original normal image data sets before the dimensionality reduction can be represented as a linear combination of the principal components. By taking the first to d-th dimensions and discarding the others, i.e., the (d+1)th and subsequent dimensions, the normal image data sets originally having K dimensions can be reduced to ones having d dimensions.
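A minimal sketch of this reduction from K dimensions to d dimensions and of the linear-combination representation (numpy assumed; Ud and M are obtained here from stand-in data in the same way a principal component analysis would produce them):

```python
import numpy as np

# Reduce a K-dimensional data set to d dimensions and represent it again as a
# linear combination of the d principal components; stand-in data throughout.
rng = np.random.default_rng(1)
N, K, d = 200, 32, 4
X = rng.random((N, K))
M = X.mean(axis=0)
S = (X - M).T @ (X - M) / N
lam, U = np.linalg.eigh(S)
Ud = U[:, ::-1][:, :d]             # first d principal components

x = X[0]
y = Ud.T @ (x - M)                 # K = 32 dimensions reduced to d = 4
x_restored = Ud @ y + M            # linear combination of the principal components
print(y.shape, x_restored.shape)   # (4,) (32,)
```

Because the restoration is an orthogonal projection onto the subspace spanned by the d principal components, the restored data set can never be farther from x than the mean itself.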
[0051]
Here, the value of d is an important parameter affecting the performance of the inspection device 1 according to the first embodiment. By appropriately setting the value of d, it is possible to extract only important components commonly appearing in the normal image data sets received in step S10. Meanwhile, it is possible to eliminate unwanted components, such as variation between the objects 3 of the same type, variation between the image data sets due to the difference between their capture times, or noise of the camera.
[0052]
However, when the value of d is too small, important components are eliminated, and when the value of d is too large, unwanted components are left.
[0053]
For example, the value of d may be determined in accordance with a command from the control unit 10. In this case, the control unit 10 receives the value of d from the input/output device 4 through the input/output unit 15 and sends it to the analyzing unit 12a.
[0054]
The analyzing unit 12a may also read and use the value of d stored in the memory 102 or storage medium 103. In this case, the previously stored value of d may be, for example, about one tenth or one fifth of the total number K of pixels of each image data set, or the like.
[0055]
The analyzing unit 12a may also adaptively determine the value of d on the basis of characteristics of the normal image data sets for learning. In this case, it is effective to use a cumulative contribution ratio P calculated by equation (5).
[0056]
P = 100 × (λ1 + λ2 + ... + λd) / (λ1 + λ2 + ... + λK)   (5)
[0057]
The cumulative contribution ratio P is an indicator of how well the characteristics of the information contained in the original normal image data sets before dimensionality reduction can be represented by using the first to d-th principal components. The analyzing unit 12a can perform dimensionality reduction appropriate for the characteristics of the normal image data sets by obtaining the smallest value of d such that the value of P exceeds a threshold value.
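The adaptive choice of d can be sketched as follows (illustrative only; the eigenvalues and the threshold of 80 are assumed example values):

```python
import numpy as np

def smallest_d(eigvals, threshold=80.0):
    """Smallest d whose cumulative contribution ratio P exceeds the threshold."""
    lam = np.sort(np.asarray(eigvals))[::-1]   # lambda_1 >= ... >= lambda_K
    P = 100.0 * np.cumsum(lam) / lam.sum()     # equation (5) for d = 1, ..., K
    return int(np.argmax(P > threshold)) + 1

print(smallest_d([5.0, 3.0, 1.0, 0.5, 0.5]))   # 3 (P = 50, 80, 90, ...; first > 80 at d = 3)
```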
[0058]
The threshold value for the cumulative contribution ratio P may be, for example, determined in accordance with a command from the control unit 10. In this case, the control unit 10 receives the threshold value from the input/output device 4 through the input/output unit 15, and sends it to the analyzing unit 12a.
[0059]
The analyzing unit 12a may also read and use the threshold value stored in the memory 102 or storage medium 103. In this case, the previously stored threshold value may be, for example, 80, 100, or the like.
[0060]
Returning to FIG. 3, in step S14, after performing the dimensionality reduction, the analyzing unit 12a sends, to the storage unit 13a, the d principal components (u1, u2, ..., ud) as a parameter representing a feature of the data of the object 3 resulting from learning the normal image data sets. The storage unit 13a stores the parameter, which is a learning result output from the analyzing unit 12a,
representing the feature of the data of the object 3 into the storage medium 103 in accordance with a command from the control unit 10. Although it has been described that the storage unit 13a stores the parameter, which is a learning result output from the analyzing unit 12a, representing the feature of the data of the object 3 into the storage medium 103 in accordance with a command from the control unit 10, it may be stored in the memory 102.
[0061]
In the learning mode, after completion of the processing by the analyzing unit 12a, the control unit 10 gives a command to start processing to the input/output unit 15. In accordance with the command from the control unit 10, the input/output unit 15 outputs information indicating the progress of the learning or the like to the outside through the input/output device 4. Here, it is assumed that an operator sees the input/output device 4 outside; however, this is not mandatory, and it is also possible to output a signal to an external control device without intervention of an operator.
[0062]
FIG. 5 is a part of a flowchart illustrating an operation in the inspection mode of the inspection device 1 according to the first embodiment of the present invention. The operation in the inspection mode of the inspection device 1 will be described below with reference to FIG. 5.
[0063]
In step S20, the control unit 10 receives a start signal and a setting signal from the input/output device 4 through the input/output unit 15. It then gives a command to the input unit 11 in accordance with the setting signal. The input unit 11 receives an image data set to be inspected of an object 3 from the camera 2. Here, the timing to receive an image data set may be predetermined to be, for example,
30 times a second, or may be determined in accordance with a command from the control unit 10. The control unit 10 gives a command to start processing of the inspection mode to the analyzing unit 12a. The analyzing unit 12a switches to the inspection mode by reading, from the memory 102, the program corresponding to the inspection mode in the storage medium 103 loaded in the memory 102 and executing it in the processor 101. The analyzing unit 12a receives, from the input unit 11, the image data set of the object 3 captured by the camera 2. The image data set to be inspected of the object 3 is an example of data of an object to be inspected.
[0064]
In step S21, the analyzing unit 12a determines whether to further receive an image data set or end receiving an image data set. Here, the determination as to whether to end receiving an image data set may be determined by the analyzing unit 12a, or may be determined in accordance with a command from the control unit 10. In a case where it is determined by the analyzing unit 12a, receiving an image data set may be ended when the number of received image data sets reaches a predesignated number, for example. The predesignated number is, for example, 1, 10, or the like. In a case where it is determined in accordance with a command from the control unit 10, the control unit 10 may receive a command to end receiving an image data set, from the input/output device 4 through the input/output unit 15, and send it to the analyzing unit 12a, for example.
[0065]
In step S22, the analyzing unit 12a returns to the reception of an image data set, or proceeds to the next step, in accordance with the determination result in step S21. When it is determined, in the determination as to whether to end receiving an image data set, that it is required to further receive an image data set, step S22 results in No,
and it returns to step S20. When it is determined that acquiring an image data set is to be ended, step S22 results in Yes, and it proceeds to the next step.
[0066]
In step S23, to read a result of the learning in the learning mode, the analyzing unit 12a sends a read request to the control unit 10. The storage unit 13a reads the required learning result from the storage medium 103 in accordance with a command from the control unit 10, and inputs it into the analyzing unit 12a. The learning result read here is a learning result corresponding to the dimensionality reduction method used in step S13 of the learning mode. Specifically, in the first embodiment, since principal component analysis is used as an example, the values of the d principal components (u1, u2, . . . , ud), which are vectors representing principal components, are read as the learning result.
[0067]
In step S24, the analyzing unit 12a performs dimensionality reduction on the at least one received image data set to be inspected on the basis of the read learning result. The method for the dimensionality reduction is the dimensionality reduction method corresponding to that used in step S13 of the learning mode. Since the values of the d principal components (u1, u2, . . . , ud), which are vectors representing principal components, have been obtained in the learning mode, the dimensionality reduction is performed by projecting the image data set to be inspected onto the d vectors. The analyzing unit 12a sends the image data set to be inspected and the vector resulting from the dimensionality reduction of the image data set to be inspected, to the determination unit 14a. Here, reference character A indicates the subsequent process, which will be
described in detail later.
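As a concrete illustration of the dimensionality reduction in step S24, the following sketch projects a flattened image onto the d learned principal component vectors. Python with NumPy, the function name, and the toy numbers are illustrative assumptions, not part of the specification; mean-centering, which is standard in principal component analysis, is assumed to be part of the learning result.

```python
import numpy as np

def reduce_dimensions(x, U, mean):
    """Project a flattened K-pixel image onto the d learned principal
    components, as in step S24 (illustrative sketch).

    x    : (K,) flattened image data set to be inspected
    U    : (K, d) matrix whose columns are the principal component
           vectors u1, ..., ud read as the learning result
    mean : (K,) mean of the normal image data sets from the learning mode
    """
    return U.T @ (x - mean)  # (d,) low-dimensional representation

# Toy example: K = 4 pixels reduced to d = 2 dimensions.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
mean = np.zeros(4)
x = np.array([3.0, 4.0, 0.0, 0.0])
print(reduce_dimensions(x, U, mean))  # → [3. 4.]
```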
[0068]
FIG. 6 is another part of the flowchart illustrating the operation in the inspection mode of the inspection device 1 according to the first embodiment of the present invention. A continuation of the operation in the inspection mode of the inspection device 1 will be described below with reference to FIG. 6.
[0069]
Following the process A subsequent to step S24, after completion of the processing by the analyzing unit 12a, in step S30, the control unit 10 gives a command to start processing to the determination unit 14a. In accordance with the command from the control unit 10, the determination unit 14a reads, from the memory 102, a program in the storage medium 103 loaded in the memory 102, and executes it in the processor 101. The determination unit 14a first restores the vector resulting from the dimensionality reduction of the image data set to be inspected received from the analyzing unit 12a, as an image data set. Here, the method for restoring the image data set is the same as that used for the dimensionality reduction in step S24. Thus, in the first embodiment, the restoration is performed using the principal component analysis.
[0070]
When the principal component analysis is used for the dimensionality reduction, since the received vector resulting from the dimensionality reduction of the image data set to be inspected is represented by the lower dimensional subspace as illustrated in FIG. 4, the restoration of the image data set is performed by projecting the vector onto a space having the same dimensions as the original image data set.
[0071]
In step S31, the determination unit 14a calculates a difference between the restored image data set and the image data set to be inspected. At this time, as the difference, a difference is calculated for each pixel. The difference may be an absolute difference. Hereinafter, the restored image data set will be referred to as the restored data set.
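The restoration of step S30 and the per-pixel difference of step S31 can be sketched together. As before, Python with NumPy and all names are illustrative assumptions, not the specification's implementation; the toy example shows how a component of the inspected image lying outside the learned subspace survives as a large per-pixel difference.

```python
import numpy as np

def restore(y, U, mean):
    """Restore a K-dimensional image from its d-dimensional
    representation by projecting back with the same principal
    components used for the dimensionality reduction (step S30)."""
    return U @ y + mean  # (K,) restored data set

def pixel_difference(x, x_restored):
    """Per-pixel absolute difference of step S31."""
    return np.abs(x - x_restored)

# Toy example: the third pixel value is not representable in the
# learned subspace, so it appears as a large difference.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
mean = np.zeros(4)
x = np.array([3.0, 4.0, 5.0, 0.0])
y = U.T @ (x - mean)                 # dimensionality reduction
d = pixel_difference(x, restore(y, U, mean))
print(d)  # → [0. 0. 5. 0.]
```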
[0072]
FIG. 7 illustrates an example where a printed board is taken as the object 3 and it is inspected whether there is a defect on the board. In FIG. 7, the left figure illustrates an image data set to be inspected, the center figure illustrates a restored data set, and the right figure illustrates a data set representing the difference between the image data set to be inspected and the restored data set. In the data set of the right figure representing the difference between the image data set to be inspected and the restored data set, a darker shade indicates a smaller difference, and a lighter shade indicates a larger difference. In a case where the board, which is the object 3 to be inspected, is normal, after the dimensionality reduction is performed, it is possible to restore an image data set that is nearly the same as the image data set to be inspected. This is because, in the learning mode, a method for efficiently representing a feature of the normal image data sets is learned, and in the inspection mode, when the object 3 to be inspected is normal, the image data set to be inspected is closely similar to the normal image data sets used for the learning.
[0073]
Thus, as illustrated in FIG. 7, when the difference between the restored data set and the image data set to be inspected is calculated, the difference is nearly zero over the entire image data set.
[0074]
On the other hand, in a case where the object 3 to be inspected is defective, after the dimensionality reduction is performed using the result of the learning with the normal image data sets, part significantly different from that of the normal image data sets cannot be properly restored.
[0075]
FIG. 8 illustrates an example where part of the board, which is an object 3 to be inspected, is lacking. In FIG. 8, the left figure illustrates an image data set to be inspected, the center figure illustrates a restored data set, and the right figure illustrates a data set representing the difference between the image data set to be inspected and the restored data set. In the data set of the right figure representing the difference between the image data set to be inspected and the restored data set, a darker shade indicates a smaller difference, and a lighter shade indicates a larger difference. In this case, in the restored data set, the normal part is properly restored, but the lacking part cannot be properly restored since it is restored on the basis of the normal image data sets used in the learning mode.
[0076]
Thus, as illustrated in FIG. 8, when a difference between the restored data set and the image data set to be inspected is calculated, the difference is large only in the part, which is the defective part, different from that of the normal image data sets.
[0077]
Returning to FIG. 6, in step S32, on the basis of the difference between the restored data set and the image data set to be inspected, the determination unit 14a outputs a determination result indicating whether the image data set to be inspected is defective, to the input/output unit 15.
The determination unit 14a performs threshold processing on the difference between the restored data set and the image data set to be inspected, and sets values for pixels where the difference is less than a threshold value to 0 and values for pixels where the difference is not less than the threshold value to 1. Here, 0 and 1 may be interchanged, and other values may be used.
[0078]
The threshold value may be, for example, determined in accordance with a command from the control unit 10. In this case, the control unit 10 receives the threshold value from the input/output device 4 through the input/output unit 15, and sends it to the determination unit 14a.
[0079]
The determination unit 14a may also read and use the threshold value stored in the memory 102 or storage medium 103. In this case, the previously stored threshold value is, for example, 100, 200, or the like.
[0080]
The determination unit 14a may also adaptively determine the threshold value depending on the distribution of the difference. When a certain threshold value is determined and a group of the pixels having a pixel value not less than the threshold value is referred to as class 1 and a group of the other pixels is referred to as class 2, an inter-class variance and an intra-class variance are obtained from the pixel values of classes 1 and 2, and the threshold value is determined to maximize a degree of separation calculated from these values.
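The adaptive threshold selection described above, maximizing a degree of separation obtained from the inter-class and intra-class variances, is essentially Otsu's method; since the total variance is fixed, maximizing the inter-class variance is equivalent to maximizing the degree of separation. A minimal sketch in Python with NumPy follows; the function name and the assumption of difference values in the range 0 to 255 are illustrative, not from the specification.

```python
import numpy as np

def adaptive_threshold(diff, levels=256):
    """Choose the threshold maximizing the inter-class variance
    between class 1 (pixel value >= threshold) and class 2 (the
    rest), which maximizes the degree of separation of the text."""
    hist, _ = np.histogram(diff, bins=levels, range=(0, levels))
    total = diff.size
    vals = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w2 = hist[:t].sum()   # class 2: values below the threshold
        w1 = total - w2       # class 1: values at or above it
        if w1 == 0 or w2 == 0:
            continue          # all pixels in one class: undefined split
        m2 = (vals[:t] * hist[:t]).sum() / w2   # class 2 mean
        m1 = (vals[t:] * hist[t:]).sum() / w1   # class 1 mean
        var_between = w1 * w2 * (m1 - m2) ** 2  # inter-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal difference image: 50 pixels near zero, 50 defective.
diff = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
print(adaptive_threshold(diff))  # → 11
```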
[0081]
FIG. 9 illustrates an example of a result of the threshold processing. In FIG. 9, the region indicated by black is the region where the difference is less than the threshold value, and the region indicated by white is the region where the difference is not less than the threshold value.
[0082]
The determination unit 14a determines a rectangle circumscribing a white region as indicated by the dashed line in FIG. 9, and sends information that there is a defect at the position, to the input/output unit 15. Hereinafter, a rectangle circumscribing a white region will be referred to as a bounding box. The sent information includes the upper left coordinates, vertical width, horizontal width, or the like of the bounding box. The determination unit 14a may send all the positions of the defective pixels to the input/output unit 15 instead of sending the bounding box to the input/output unit 15.
[0083]
The determination unit 14a may also send the calculated difference image data set to the input/output unit 15.
[0084]
It is also possible to provide a condition for the position or size of the bounding box, and neglect bounding boxes that do not satisfy the condition. This makes it possible to prevent false detection outside a target region in the image data set or prevent false detection due to noise.
[0085]
The condition for the bounding box may be, for example, determined in accordance with a command from the control unit 10. In this case, the control unit 10 receives the condition from the input/output device 4 through the input/output unit 15, and sends it to the determination unit 14a.
[0086]
The determination unit 14a may also read and use the condition stored in the memory 102 or storage medium 103. In this case, the previously stored condition is, for example,
that the vertical width of the bounding box is not less than 3 pixels, that the horizontal width of the bounding box is not less than 3 pixels, that it exists within a two-dimensional mask for defining a region to be inspected, or the like.
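The bounding-box extraction and the size and mask conditions described above can be sketched as follows. This is an illustrative Python implementation using 4-connected components; the function name, the default conditions (widths of at least 3 pixels), and the return format (upper left coordinates, vertical width, horizontal width) are assumptions based on the text, not the specification's actual implementation.

```python
import numpy as np
from collections import deque

def bounding_boxes(binary, min_h=3, min_w=3, mask=None):
    """Find rectangles circumscribing connected white (1) regions of a
    binary image and discard those not satisfying the conditions:
    vertical and horizontal widths of at least 3 pixels, and, if a
    two-dimensional mask is given, lying inside the region to be
    inspected (mask == 1)."""
    H, W = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    boxes = []
    for i in range(H):
        for j in range(W):
            if binary[i, j] and not seen[i, j]:
                # Breadth-first search over the 4-connected white region
                q = deque([(i, j)])
                seen[i, j] = True
                top, left, bottom, right = i, j, i, j
                while q:
                    r, c = q.popleft()
                    top, bottom = min(top, r), max(bottom, r)
                    left, right = min(left, c), max(right, c)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < H and 0 <= nc < W
                                and binary[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            q.append((nr, nc))
                h, w = bottom - top + 1, right - left + 1
                if h < min_h or w < min_w:
                    continue  # too small: likely noise
                if mask is not None and not mask[top:bottom + 1,
                                                 left:right + 1].all():
                    continue  # outside the region to be inspected
                boxes.append((top, left, h, w))
    return boxes

binary = np.zeros((10, 10), dtype=int)
binary[1:5, 1:5] = 1   # a 4 x 4 defective region
binary[7, 7] = 1       # single-pixel noise, filtered out by the condition
print(bounding_boxes(binary))  # → [(1, 1, 4, 4)]
```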
[0087]
FIG. 10 is an example of the two-dimensional mask for defining the region to be inspected. In FIG. 10, the left figure illustrates the image data set to be inspected, and the right figure illustrates the two-dimensional mask. The determination unit 14a applies the two-dimensional mask of the right figure to the image data set to be inspected of the left figure. In the two-dimensional mask of the right figure of FIG. 10, the region indicated by white is the region to be inspected, and the region indicated by black is a bounding box. Thus, when the two-dimensional mask of the right figure is applied to the image data set to be inspected of the left figure, defects are neglected in the region of the left figure corresponding to the bounding box indicated by black of the two-dimensional mask of the right figure.
[0088]
Returning to FIG. 6, in the inspection mode, after completion of the processing by the determination unit 14a, the control unit 10 gives a command to start processing to the input/output unit 15. In accordance with the command from the control unit 10, the input/output unit 15 outputs the determination result received from the determination unit 14a to the outside through the input/output device 4. Here, it is assumed that an operator sees the input/output device 4 outside; however, this is not mandatory, and it is also possible to output a signal to an external control device or the like without intervention of an operator.
[0089]
FIG. 11 illustrates an example of contents that the input/output unit 15 commands the input/output device 4 to display when the input/output device 4 includes a display as its component.
[0090]
FIG. 11 illustrates a case where no defect has been detected by the determination unit 14a. In this case, the image data set to be inspected is simply displayed, and a message for informing that there is no defect is displayed. The message for informing that there is no defect is, for example, the "OK" symbol illustrated at the upper left of FIG. 11. Instead of the "OK" symbol, other symbols, such as a "NO DEFECTS" symbol, a "NORMAL" symbol, or a white circle mark, may be displayed.
[0091]
FIG. 12 illustrates another example of contents that the input/output unit 15 commands the input/output device 4 to display when the input/output device 4 includes a display as its component.
[0092]
FIG. 12 illustrates a case where two defective portions have been detected by the determination unit 14a. In this case, the detected defective portions are indicated by dotted lines superimposed on the image data set to be inspected, and a message for informing that a defect has been detected is displayed. The message for informing that a defect has been detected is, for example, the "NG" (no good) symbol illustrated at the upper left of FIG. 12. Instead of the "NG" symbol, other symbols, such as a "DEFECTIVE" symbol, a "NOT NORMAL" symbol, or a cross mark, may be displayed. It is also possible to attach "CHECK !" symbols as messages to the defective portions. The defective portions are determined on the basis of bounding boxes received from the determination unit 14a. The determination of the defective
portions based on the bounding boxes may be performed by the determination unit 14a or input/output unit 15. The bounding boxes may or may not be displayed.
[0093]
FIG. 13 illustrates still another example of contents that the input/output unit 15 commands the input/output device 4 to display when the input/output device 4 includes a display as its component.
[0094]
FIG. 13 illustrates a case where two defective portions have been detected by the determination unit 14a, similarly to FIG. 12. However, instead of directly indicating the detected defective portions, the image data set to be inspected is displayed on the left side, and a difference composite image data set obtained by combining the difference image data set calculated by the determination unit 14a with the image data set to be inspected is displayed on the right side. In the difference composite image data set, a darker shade indicates a smaller difference, and a lighter shade indicates a larger difference. Thus, the white outstanding portions in the difference composite image data set on the right side of FIG. 13 appear prominently as portions of the image data set to be inspected where the difference from the normal state is large, which allows an operator who inspects for defects to easily perceive portions to which attention should be paid. As in FIG. 12, the determination of the defective portions based on the bounding boxes may be performed by the determination unit 14a or input/output unit 15.
[0095]
The manners of the display by the input/output device 4 illustrated in FIGs. 11, 12, and 13 are just examples, and in practice, combinations of them or display manners different from them may be used. Also, the input/output device 4 may be constituted by a speaker instead of the display, and in this case, information may be output to the outside by voice, music, or the like.
[0096]
By repeating the processing of the learning mode and the processing of the inspection mode as described above until a trigger to end processing, such as a power-off or a termination operation, occurs, it is possible to inspect whether the object 3 has a defect, such as a lack, an erroneous arrangement, or a fault while easing the requirements that the object 3 and the camera 2 be securely fixed and highly accurate alignment be performed for each pixel of image data obtained by capturing the object 3. Although it has been described that the processing of the learning mode and the processing of the inspection mode are repeated, the processing of the learning mode may be performed only once instead of being repeated. Likewise, the processing of the inspection mode may be performed only once instead of being repeated.
[0097]
As described above, the inspection device 1 included in the inspection system of the first embodiment calculates a parameter representing a feature of data of an object 3 having no defect by performing dimensionality reduction on the data of the object 3 having no defect, performs dimensionality reduction on data of an object 3 to be inspected by using the parameter, generates restored data obtained by restoring the data of the object 3 to be inspected subjected to the dimensionality reduction, and outputs, to the input/output unit 15, a determination result indicating whether the object 3 to be inspected is defective, on the basis of a magnitude of a difference between the data of the object 3 to be inspected and the restored data. Thus, it can inspect whether the object 3 has a defect, such as a
lack, an erroneous arrangement, or a fault while easing the requirements that the object 3 and the camera 2 be securely fixed and highly accurate alignment be performed for each pixel of image data obtained by capturing the object 3, compared to the conventional inspection device.
[0098]
Further, with dimensionality reduction using principal component analysis, the inspection device 1 included in the inspection system of the first embodiment can obtain an efficient representation by extracting only the parameters representing the features of the data of the object 3 that commonly appear in the normal image data sets. In this process, unwanted information, such as variation between the objects 3 of the same type, variation between the image data sets due to the difference between their capture times, or noise of the camera, is discarded, which reduces the data size and allows the storage capacity required of the storage medium 103 to be reduced.
[0099]
Further, since the inspection device 1 included in the inspection system of the first embodiment learns image data of the object 3 in the normal state from the normal image data sets and performs the inspection, no defective state need be defined by a user, a developer, or the like. Thus, it does not cause a situation where a defect is missed due to incomplete definition of the defective states, and can be applied generally to any defects.
[0100]
In the above inspection system of the first embodiment, the inspection device 1, camera 2, and input/output device 4 are separated from each other. However, either or both of the camera 2 and input/output device 4 may be included in the inspection device 1. Inspection systems configured in this manner can also provide the above advantages of the
first embodiment.
[0101]
In the above inspection system of the first embodiment, the determination unit 14a restores the image data of the object 3 to be inspected subjected to the dimensionality reduction, by the same method as that used for the dimensionality reduction, and outputs, to the input/output unit 15, a determination result indicating whether the object 3 to be inspected is defective, on the basis of a magnitude of a difference between the restored data, which is restored image data, and the image data of the object 3 to be inspected. The determination unit 14a is an example of a unit serving as both a restoring unit and a determination unit. However, a restoring unit that restores the image data of the object 3 to be inspected subjected to the dimensionality reduction by the same method as that used for the dimensionality reduction may be separately provided in the inspection device 1, or the analyzing unit 12a may have the function of the restoring unit. Inspection systems configured in this manner can also provide the above advantages of the first embodiment.
[0102]
In the above inspection system of the first embodiment, the analyzing unit 12a uses principal component analysis in the dimensionality reduction of the object 3, the storage unit 13a stores a result of the principal component analysis, and the determination unit 14a performs the restoration by using principal component analysis. However, when there are different types of objects 3, the dimensionality reduction method may be changed depending on the type of the object 3. For example, it is possible that for an object 3 of a first type, the analyzing unit 12a uses principal component analysis in the dimensionality reduction of the object 3 of the first type, the storage unit 13a stores a result of the
principal component analysis, and the determination unit 14a performs the restoration by using principal component analysis; and for an object 3 of a second type, the analyzing unit 12a uses linear discriminant analysis in the dimensionality reduction of the object 3 of the second type, the storage unit 13a stores a result of the linear discriminant analysis, and the determination unit 14a performs the restoration by using linear discriminant analysis. The number of types of objects 3 is not limited, and the combination of types of objects 3 and dimensionality reduction methods may be arbitrarily determined. However, for objects 3 of the same type, the same dimensionality reduction method is used. Inspection systems configured in this manner can also provide the above advantages of the first embodiment.
[0103]
Second embodiment
In the first embodiment, the analyzing unit 12a uses principal component analysis in the dimensionality reduction of the object 3, the storage unit 13a stores a result of the principal component analysis, and the determination unit 14a performs the restoration using principal component analysis. Principal component analysis is a typical example of linear dimensionality reduction methods. An analyzing unit 12b, a storage unit 13b, and a determination unit 14b according to a second embodiment use an autoencoder using a neural network for dimensionality reduction, as illustrated in FIGs. 14 to 20. An autoencoder using a neural network is known to be capable of non-linear dimensionality reduction, and can therefore obtain the feature more efficiently than principal component analysis, which is a linear dimensionality reduction method. Otherwise, the second embodiment is the same as the first embodiment.
[0104]
FIG. 14 is a functional block diagram of an inspection system including an inspection device 200 according to the second embodiment of the present invention. In the following description, already-described elements and operations are given the same reference characters, and the same descriptions thereof are omitted.
[0105]
In the second embodiment, the analyzing unit 12b, storage unit 13b, and determination unit 14b are added as elements of the functional block diagram, instead of the analyzing unit 12a, storage unit 13a, and determination unit 14a of FIG. 1 of the first embodiment.
[0106]
The analyzing unit 12b performs dimensionality reduction by using the autoencoder using the neural network, instead of principal component analysis used for dimensionality reduction in the analyzing unit 12a of the first embodiment. Otherwise, it is the same as the analyzing unit 12a.
[0107]
The storage unit 13b stores a result of learning by the autoencoder using the neural network, reads the result of learning by the autoencoder using the neural network, and inputs it into the analyzing unit 12b, instead of the storage unit 13a of the first embodiment storing a result of learning by principal component analysis, reading the result of learning by principal component analysis, and inputting it into the analyzing unit 12a. Otherwise, it is the same as the storage unit 13a.
[0108]
The determination unit 14b performs restoration by using the autoencoder using the neural network, instead of principal component analysis used for restoration in the determination unit 14a of the first embodiment. Otherwise, it is the same as the determination unit 14a.
[0109]
A hardware configuration diagram of the inspection device 200 according to the second embodiment of the present invention is the same as in FIG. 2 of the first embodiment. A hardware configuration of the analyzing unit 12b is the same as that of the analyzing unit 12a, a hardware configuration of the storage unit 13b is the same as that of the storage unit 13a, and a hardware configuration of the determination unit 14b is the same as that of the determination unit 14a.
[0110]
Next, an operation in the learning mode of the inspection device 200 according to the second embodiment of the present invention will be described.
[0111]
FIG. 15 is a flowchart illustrating the operation in the learning mode of the inspection device 200 according to the second embodiment of the present invention. The operation in the learning mode of the inspection device 200 will be described below with reference to FIG. 15.
[0112]
Steps S40, S41, and S42 are the same as steps S10, S11, and S12 of the first embodiment. However, the processes are performed by the analyzing unit 12b instead of the analyzing unit 12a.
[0113]
In step S43, the analyzing unit 12b performs dimensionality reduction by using the one or more received normal image data sets. Here, dimensionality reduction refers to converting high dimensional data, such as image data or three-dimensional solid data, into low dimensional data, as in the first embodiment. The analyzing unit 12b
performs learning by using the normal image data sets to obtain a data conversion method optimal for the normal image data sets, in the learning mode. A case where an autoencoder with a neural network is used for the dimensionality reduction will be described below.
[0114]
Neural networks are computational models of a human brain mechanism in which neurons connected in a network through synapses perform learning and pattern recognition on the basis of the strength of current flowing therethrough, and the simplest model is called a perceptron.
[0115]
FIG. 16 is a diagram in which a neuron is modeled as a multi-input single-output node.
[0116]
A perceptron is a model in which a neuron is represented as a multi-input single-output node as illustrated in FIG. 16. As in the first embodiment, the total number of pixels of each normal image data set received in step S40 is denoted by K, and the index of a dimension is denoted by j. For example, the value of K is 1024 when the size of the normal image data sets is 32 x 32 pixels, 409600 when the size is 640 x 640 pixels, or the like. The perceptron is composed of a linear weighted sum obtained by weighting the pixels x_j of a normal image data set as the input data set by weights w_j and subtracting a bias b, and a threshold logic function z(u) that outputs 1 or 0 depending on whether the linear weighted sum is positive or negative. The group of the pixels x_j of the normal image data set as the input data set makes a vector x of the normal image data set, and the group of the weights w_j makes a weight vector w. When the output data set is denoted by y, the perceptron is represented by equation (6). FIG. 16 illustrates a calculation for a single image data set, and when the number of the image data sets is 1000,
the same calculation is performed 1000 times.
[0117]
y = z(w^T x − b)   ... (6)
where z(u) = 1 (u ≥ 0) and z(u) = 0 (u < 0).
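A direct transcription of equation (6) into code may make the perceptron concrete. Python with NumPy and the toy values are illustrative assumptions, not from the specification.

```python
import numpy as np

def perceptron(x, w, b):
    """Perceptron of equation (6): a threshold logic function applied
    to the linear weighted sum of the inputs minus a bias.
    y = z(w^T x - b), where z(u) = 1 if u >= 0 and 0 otherwise."""
    u = w @ x - b               # linear weighted sum minus bias
    return 1 if u >= 0 else 0   # threshold logic function z(u)

# Toy example with K = 3 "pixels".
x = np.array([0.2, 0.8, 0.5])
w = np.array([1.0, 1.0, 1.0])
print(perceptron(x, w, b=1.0))  # → 1  (0.2 + 0.8 + 0.5 - 1.0 = 0.5 >= 0)
print(perceptron(x, w, b=2.0))  # → 0  (1.5 - 2.0 = -0.5 < 0)
```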