
Method And System For Surface Defect Inspection

Abstract: A method and system for identifying one or more defects on surfaces of a moving object are provided. The method comprises receiving a video stream comprising a plurality of images of the moving object. Further, the method comprises comparing each of the plurality of images in the video stream with reference image frames that comprise images of defect-free surfaces of the moving object. Further, the method comprises filtering a set of images including an image of a defective surface of the moving object, from the plurality of images, and determining defect-type identifying parameters from each of the set of images to identify one or more types of defects that exist on surfaces of the moving object, based on values of the defect-type identifying parameters. Finally, the method comprises performing an automated action based on one or more types of defects that are identified on surfaces of the moving object.


Patent Information

Application #:
Filing Date: 29 March 2024
Publication Number: 14/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

TATA ELXSI LIMITED
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore – 560048, India

Inventors

1. ANUP SRIMANGALAM SOMASEKHARAN NAIR
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore – 560048, India
2. LIPIKA SREEDHARAN
TATA ELXSI LIMITED, ITPB Road, Whitefield, Bangalore – 560048, India

Specification

Description

RELATED ART

[0001] The present disclosure generally relates to the field of inspection and quality control, and in particular, relates to a method and system for identifying one or more surface defects using machine vision.
[0002] Surface defect inspection is an essential component of quality control in several sectors including manufacturing, automotive and aerospace. Conventionally, surface defect inspection entails a human inspector visually identifying any irregularity or imperfection on the surface of a product. However, subjective human inspection is unable to satisfy the continuously increasing quality standards of industrial manufacturing processes. Accordingly, automated visual inspection systems using a multitude of expensive sensors or machine vision have been used in recent times to perform quality inspection and remove defective products.
[0003] Further, with the advent of deep learning technology, a large number of visual analysis techniques, for example using convolutional neural networks, are being applied to detect surface defects in sheets. These methods provide good accuracy and robustness when trained on a large number of labelled training image samples, along with a large number of image samples of defective objects. However, collecting a large number of image samples and subsequent accurate manual labelling of the samples in an actual industrial production scenario with accurate types of defects while facing uncertain product quality is tedious, time consuming, and resource intensive.
[0004] Further, when performing surface defect detection, conventional supervised learning models need to analyse a large number of image samples, irrespective of whether an image sample belongs to a defective object or a non-defective object. This increases the processing time required for inspecting the objects, and thereby renders conventional learning models unsuitable for a fast-paced industrial set-up. Furthermore, due to the need for extensive training images, adapting conventional learning models to even minor changes in lighting or camera location in a production environment may necessitate extensive and time-consuming retraining, resulting in prohibitive maintenance efforts. Consequently, despite the availability of various automation techniques, some areas of an industrial set-up, such as product quality inspection or visual examination of products, are still handled manually and remain tedious, time-consuming, and labour-intensive.
[0005] Thus, there exists a need to provide an improved inspection method and system for identifying one or more surface defects while mitigating the aforementioned limitations of existing learning-based automated visual inspection systems.

BRIEF DESCRIPTION

[0006] It is an objective of the present disclosure to provide a method for identifying one or more defects on one or more surfaces of an object. The method includes receiving a video stream including a plurality of images of the object from one or more image capturing devices. Further, the method includes comparing each of the plurality of images in the video stream with one or more reference image frames, including images indicating one or more defect-free surfaces of the object, to determine a similarity score for each of the plurality of images. Furthermore, the method includes filtering, from the plurality of images, a set of images whose associated similarity score is less than a predefined threshold. Each image in the set of images indicates one or more defective surfaces of the object. In addition, the method includes determining one or more defect-type identifying parameters from each of the filtered set of images.
[0007] The defect-type identifying parameters include one or more of a number of discrete contours in a corresponding image, a total area occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image. Moreover, the method includes identifying one or more types of defects on one or more of the surfaces of the object based on corresponding values of the one or more defect-type identifying parameters determined from each of the set of images. The method also includes performing an automated action based on the one or more types of defects. The automated action includes one or more of automatically grading a quality of the object and automatically removing one or more portions of the object including one or more of the identified defects.
[0008] Comparing each of the plurality of images with the reference image frames includes extracting a region of interest from each of the plurality of images, and performing one or more image enhancement actions on the region of interest of each of the plurality of images. Further, the method includes analyzing the region of interest of each of the plurality of images using a pre-trained autoencoder model for determining the similarity score. The pre-trained autoencoder model is trained on the reference image frames including images of defect-free surfaces of the object. Identifying the one or more types of defects includes computing Principal Component Analysis values of each of the defect-type identifying parameters, and comparing the Principal Component Analysis values with pre-computed Principal Component Analysis values of a plurality of clusters included in a pre-constructed Principal Component Analysis panel. Each cluster of the plurality of clusters represents a type of defect. Further, the method includes associating each of the defect-type identifying parameters with one or more clusters in the plurality of clusters based on the comparison. Furthermore, the method includes identifying the one or more types of defects in the set of images based on an association between each of the defect-type identifying parameters and the one or more clusters in the plurality of clusters.
[0009] In addition, the method includes pre-constructing the Principal Component Analysis panel by analyzing one or more defect-type identifying parameters extracted from an initial set of images of the object using a Principal Component Analysis algorithm. Further, the method includes computing a set of Principal Component Analysis values corresponding to each of the defect-type identifying parameters of the initial set of images based on the analysis, and creating the plurality of clusters, each cluster representing a type of defect, based on similarity between the set of Principal Component Analysis values. Identifying the one or more types of defects further includes calculating corresponding Euclidean distances between a selected defect-type identifying parameter and the centroid of each of the plurality of clusters, when the selected defect-type identifying parameter is associated with two or more clusters in the plurality of clusters. Further, the method includes ranking the plurality of clusters based on the corresponding Euclidean distances, and identifying a combination of the one or more types of defects in the set of images based on the association of the defect-type identifying parameter with two or more clusters selected from the ranked plurality of clusters. Identification of one or more types of defects on one or more of the surfaces of the object includes identification of one or more of dents, cracks, scratches, rust, corrosion, fold marks, patches, holes, ink smudges, and printing errors in newsprints. The object includes one or more of a metal, wood, paper, and a textile. Grading the quality of the object further includes classifying the object as one of a low-quality object, a medium-quality object, and a high-quality object.
[0010] It is an objective of the present disclosure to provide a system for identifying defects on surfaces of an object. The system includes one or more image capturing devices configured to capture a video stream including a plurality of images of the object. Further, the system includes a reference database configured to store reference image frames including images of defect-free surfaces of a plurality of objects. Furthermore, the system includes a defect identification system communicatively coupled with the image capturing devices. The defect identification system is configured to receive the plurality of images of the object from the image capturing devices. The defect identification system compares each of the plurality of images with the reference image frames to determine a similarity score for each of the plurality of images. Further, the defect identification system filters, from the plurality of images, a set of images whose associated similarity score is less than a predefined threshold. Each image in the set depicts a defective surface of the moving object. Furthermore, the defect identification system determines defect-type identifying parameters from each of the set of images.
[0011] The defect-type identifying parameters include one or more of a number of discrete contours in a corresponding image, a total area occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image. In addition, the defect identification system identifies one or more types of defects that exist on the surfaces of the moving object based on values of the defect-type identifying parameters determined from each of the set of images. An automation unit in the system is configured to perform a predefined automated action based on the one or more types of defects. The predefined automated action includes one or more of automatically grading a quality of the object and automatically removing one or more portions of the object including the defects.
[0012] The similarity detection unit is further configured to extract a region of interest from each of the plurality of images. Further, the similarity detection unit performs one or more image enhancement actions on the region of interest of each of the plurality of images, and analyzes the region of interest of each of the plurality of images using a pre-trained autoencoder model for determining the similarity score. The pre-trained autoencoder model is trained on the reference image frames including images of defect-free surfaces of the moving object. The defect-type identification unit is further configured to compute Principal Component Analysis values of each of the defect-type identifying parameters by analyzing each of the set of images using a predefined Principal Component Analysis model. The defect-type identification unit compares the Principal Component Analysis values with pre-computed Principal Component Analysis values of a plurality of clusters included in a pre-constructed Principal Component Analysis panel. Each cluster of the plurality of clusters represents a type of defect. Further, the defect-type identification unit associates each of the defect-type identifying parameters with one or more clusters in the plurality of clusters based on the comparison, and identifies the one or more types of defects in the set of images based on an association between each of the defect-type identifying parameters and the one or more clusters in the plurality of clusters.
[0013] Furthermore, the defect-type identification unit analyzes one or more defect-type identifying parameters extracted from an initial set of images of the object using a Principal Component Analysis algorithm, and computes a set of Principal Component Analysis values corresponding to each of the defect-type identifying parameters of the initial set of images based on the analysis. Moreover, the defect-type identification unit creates the plurality of clusters, each cluster representing a type of defect, based on similarity between the set of Principal Component Analysis values. In addition, the defect-type identification unit calculates corresponding Euclidean distances between a selected defect-type identifying parameter and the centroid of each of the plurality of clusters, when the selected defect-type identifying parameter is associated with two or more clusters in the plurality of clusters. Further, the defect-type identification unit ranks the plurality of clusters based on the corresponding Euclidean distances, and identifies a combination of the one or more types of defects in the set of images based on the association of the defect-type identifying parameter with two or more clusters selected from the ranked plurality of clusters.
[0014] The automation unit includes a transport subsystem. The transport subsystem includes one or more of a robotic arm and a mechanical diversion configured to divert a path of the object including one or more of the identified defects to one of a cutting, labelling, and collection station in a manufacturing facility. The image processing unit is communicatively coupled to a display unit configured to display quality control information related to the object. The quality control information includes one or more of an identifier corresponding to the object, the identified types of defects along with a grade that the object is classified into, a live or a recorded feed of the video stream received from the one or more image capturing devices, images with one or more defects, and an alert provided when one or more defects are identified on one or more surfaces of the object.

BRIEF DESCRIPTION OF DRAWINGS

[0015] The embodiments of the disclosure itself, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment, when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawings in which:
[0016] FIG. 1A shows a block diagram depicting certain exemplary components of a system for identifying one or more defects on surfaces of an object, in accordance with an embodiment of the present disclosure;
[0017] FIG. 1B depicts an exemplary moving object disposed on a conveyor belt in a manufacturing facility, in accordance with an embodiment of the present disclosure;
[0018] FIG. 1C shows another block diagram depicting certain exemplary components of a defect identification system configured for identifying one or more defects on surfaces of a moving object, in accordance with an embodiment of the present disclosure;
[0019] FIGS. 2A-2C show a flowchart depicting a method for identifying one or more defects on surfaces of a moving object using the system depicted in FIG. 1C, in accordance with an embodiment of the present disclosure;
[0020] FIG. 3 shows a flowchart depicting a method of training an autoencoder model, in accordance with an embodiment of the present disclosure;
[0021] FIG. 4 shows a flowchart depicting a method of building a set of Principal Component Analysis (PCA) clusters, in accordance with an embodiment of the present disclosure; and
[0022] FIGS. 5A and 5B show graphical representations of exemplary PCA clusters, in accordance with an embodiment of the present disclosure.
[0023] The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.

DETAILED DESCRIPTION

[0024] The following description presents a surface defect detection system and method for identifying defects on surfaces of objects or products to ensure quality control. Particularly, embodiments described herein disclose an exemplary defect detection system and method that can be used in an industry or a manufacturing plant to identify surface defects in moving objects. Examples of moving objects include rolling sheets of metal, wood, textile and paper, among others, that can be manufactured in manufacturing plants such as, without limiting to, a steel manufacturing plant, a textile manufacturing plant, a wooden laminate manufacturing plant, a paper manufacturing plant, a printing press, or the like. Manufacturing plants such as the ones mentioned above generally involve pieces of metal, wood, textile, or paper being shaped into thinner and longer pieces by passing them through hot rolling equipment, after which they are cooled and rolled into coils. During rolling and other steps involved in manufacturing, there may be multiple setbacks to the machine or the manufacturing processes, such as machine failure, excessive stretching, and weaving problems, that result in different types of defects on a surface of the moving object.
[0025] In an embodiment, defects that generally occur on surfaces of moving objects such as rolling sheets may include edge damage, dent marks, fold marks, dirt marks, metal spots, flow lines, and holes, among others. These defects may be present at multiple locations on the surface of the moving object. There may be a single defect present at some locations or regions of the moving object, while other locations may include more than one defect, or a combination of two or more different defects concentrated at one region. While conventional automated defect identification techniques may typically be able to identify single defects present at multiple locations, they often fail to identify a combination of two or more different defects concentrated at one region.
[0026] When the combination of two or more different defects concentrated at one region of the moving object, such as a metal sheet, is not accurately identified, the subsequent grading of the moving object may also be inaccurate. With wrong gradings, sheets that should have ended up in junkyards or low-risk applications may end up being used in high-risk applications. An example of a low-risk application is the manufacture of steel containers or appliance bodies, while an example of a high-risk application is the manufacture of automotive bodies and airplane fuselages and wings. Due to wrong grading, steel sheets that should have been used in the manufacture of steel containers or appliance bodies may be used in an automotive body manufacturing plant or an airplane fuselage and wing manufacturing plant. As a result, items produced using defective sheets may fail quality checks, leading to a significant loss in production cost.
[0027] The defect identification system uses image capturing devices, such as a digital camera, for capturing a continuous video stream of the moving object. Subsequently, each image frame in the video stream is processed for detection of defects on either surface of the moving object. The image capturing devices are positioned such that one or more surfaces of the moving object can be captured under varying lighting conditions and at the varying speeds at which the object moves.
[0028] In an embodiment, a region of interest (ROI) from each of the plurality of images may be extracted and one or more image enhancement actions on the ROI of each of the plurality of images may be performed. As it may be difficult to identify whether a defect exists or not based on images that are captured under poor lighting conditions, image enhancement may be required for such images. Once the images that may indicate presence of defects are detected, only these images are then passed to a further level of analysis, where the defects are identified by their names or types. This further level of analysis identifies single defects present in one particular region on the surface of the moving object as opposed to a combination of multiple defects present at one particular region on the surface of the moving object.
[0029] In an embodiment, a set of images indicating defects on a surface of the moving object is filtered from the plurality of images using a pre-trained autoencoder model in the first level of analysis. The autoencoder model is trained using different reference image frames comprising various types of moving objects, such as rolling sheets and conveyor belts, from various industrial environments under varying lighting conditions and different speeds of the moving object. In the second level of analysis, only the filtered set of images, which indicate defects, is further analysed to determine the defect types based on one or more defect-type identifying parameters using a pre-constructed principal component analysis (PCA) module. The defect-type identifying parameters may comprise one or more of a number of discrete contours detected in an image, a total area that is occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image.
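By way of a non-limiting illustration, the two-level flow described above may be sketched in Python as follows. This is an illustrative outline only, not the claimed implementation: the similarity_fn and defect_typer callables are hypothetical stand-ins for the autoencoder-based filtering stage and the PCA-panel-based typing stage elaborated in later paragraphs, and the threshold value of 0.9 is assumed.

    # Illustrative sketch of the two-level analysis pipeline.
    from typing import Callable, Dict, List
    import numpy as np

    def inspect_stream(
        frames: List[np.ndarray],
        similarity_fn: Callable[[np.ndarray], float],    # level 1: autoencoder-based score
        defect_typer: Callable[[np.ndarray], List[str]], # level 2/3: PCA-panel lookup
        th_auto: float = 0.9,                            # assumed threshold value
    ) -> Dict[int, List[str]]:
        """Return a mapping from frame index to identified defect types."""
        results: Dict[int, List[str]] = {}
        for i, frame in enumerate(frames):
            # Level 1: frames similar to defect-free references are discarded
            # early, so only suspect frames reach the costlier typing stage.
            if similarity_fn(frame) < th_auto:
                results[i] = defect_typer(frame)
        return results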
[0030] In another embodiment, as already described, the second level of analysis may be capable of identifying only a single defect present in one particular region on the surface of the moving object, as opposed to a combination of multiple defects on the surface of the moving object. If a particular region of the surface of the moving object includes a combination of multiple defects, a third level of analysis may be performed on the images that indicate the combination of multiple defects.
[0031] In one or more scenarios, when the moving object encounters one or more of the setbacks associated with the manufacturing process, more than one type of defect may be concentrated in one particular region of the surface of the moving object, as opposed to a single type of defect. When the image frames corresponding to such defects are passed to the defect type identification unit, the comparison of these images with the PCA panel may not be able to output a defect type accurately. In such scenarios, the defect type identification unit may be configured to apply a custom logic to identify the combination of defects in the image by calculating a Euclidean distance between centroids of a plurality of clusters of defects and the defect-type identifying parameters associated with the images indicating the combination of defects.
[0032] In certain embodiments, the defect identification system may include a display unit for continuously displaying a number and frequency of identified defects along with any other information related to the identified defects. The display unit may also be capable of displaying the probable grade of the moving object. Subsequently, a suitable automated corrective action may be performed intelligently on the moving object based on the one or more types of defects that are identified on its surfaces. For example, the automated action may include appropriately grading the moving object. In some embodiments, the automated action may include transporting the moving object along an alternative path and positioning the moving object at a cutting station for cutting parts of the moving object where the defects may exist, among others.
[0033] In this manner, the method and system proposed in the present disclosure ensure faster and more efficient identification of defects on the surface of moving objects without the need for a multitude of expensive sensors. Further, by preventing the defect-free images from being passed to the second level of analysis, the method and system of the present disclosure also help reduce computational time and ensure quality control at a better operating efficiency. Additionally, the present disclosure provides an end-to-end automated solution that identifies the type of defects and also performs one or more automated tasks on the objects having defects, without any human intervention.
[0034] FIG. 1A shows a block diagram depicting an exemplary surface defect inspection system 100 for identifying one or more defects on surfaces of a moving object 113 (shown in FIG. 1B), in accordance with an embodiment of the present disclosure. Although the present embodiment is described with reference to identifying surface defects of a moving object, it may be noted that the surface defect inspection system 100 may similarly be used to identify defects in stationary objects as well.
[0035] In one implementation, the system 100 comprises a defect identification system 101, one or more image capturing devices 103, and a reference database 105. In some embodiments, the exemplary system may comprise a plurality of lighting devices 106 for providing adequate diffused lighting for capturing a clear view of the moving object 113 without causing excessive glare or reflections. The defect identification system 101 may be configured to process a video stream 111 comprising a plurality of images 135 of a moving object 113 for identifying one or more defects, the types of defects, and grading the quality of the moving object 113. To that end, the defect identification system 101 may include one or more graphics processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, integrated circuits, systems on chips, and/or other suitable computing devices. Accordingly, certain operations of the defect identification system 101 may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer.
[0036] In some embodiments, the defect identification system 101 may be located within a manufacturing plant such as, but not limited to, a steel manufacturing plant, for example, to inspect the quality of steel rolls being manufactured. In another embodiment, the defect identification system 101 may be located at a remote location external to the manufacturing plant and may operate as a remote system. In this scenario, the defect identification system 101 may be communicatively coupled to the other components of the surface defect inspection system 100 (such as the image capturing devices 103 and the reference database 105) via a wired or wireless communication network 108.
[0037] In an embodiment, the defect identification system 101 may be coupled to the image capturing devices 103 for receiving a video stream 111 comprising the plurality of images 135 of the moving object. In some embodiments, the moving object 113 may include a rolling sheet of metal, wood, paper, textile, or the like. The image capturing devices 103 may include devices such as, but not limited to, a digital colour camera, a Red Green and Blue (RGB) camera, or a camera of a smartphone. In a non-limiting embodiment, the image capturing devices 103 may be installed or positioned on top of industrial equipment such as, but not limiting to, a roller 110 carrying the moving object. Alternatively, the image capturing devices 103 may be deployed at any place such that the moving objects are always within the field of view of the image capturing devices 103. Alternatively, a set of image capturing devices 103 may be installed above and below the roller 110 for capturing the top surface and the bottom surface of the moving object.
[0038] Further, the defect identification system 101 may be coupled to the reference database 105. The reference database 105 may be configured to store reference image frames comprising images of defect-free surfaces of a plurality of moving objects. To that end, the reference database 105 may include devices such as, but not limited to, hard drives, Solid-State Drives (SSDs), CD/DVD drives, and flash drives. In a non-limiting embodiment, the defect identification system 101, the image capturing device 103, and the reference database 105 may be coupled with each other using the wired or wireless communication network 108.
[0039] In an embodiment, the defect identification system 101 may comprise an image processing unit 107 and a display 109. The image processing unit 107 may be configured to process the plurality of images 135 and determine quality control information, including detecting the one or more defects and identifying the types of defects on one or more surfaces 112 of the moving object. In a non-limiting embodiment, the display 109 may be configured to display the quality control information, including one or more of an identifier corresponding to the moving object, the identified types of defects along with a grade that the particular moving object 113 may be classified into, a live or recorded feed of the video stream 111 received from the image capturing devices 103, images with at least one defect, and warnings or alerts when human intervention is required. In one embodiment, the proposed solution detects the defects in real-time from the live video feed and provides locations of the defects in the moving object 113 for display and recording purposes. In some embodiments, the quality control information related to the moving object 113, including the information related to the identified defects, may be stored in an associated memory 129 (shown in FIG. 1C) for a predefined time, for example, for a month or a year.
[0040] FIG. 1B depicts an exemplary scenario of a moving object 113, in accordance with an embodiment of the present disclosure. In an exemplary embodiment, the image capturing device 103a and the image capturing device 103b (collectively referred to herein as image capturing devices 103) may be configured to capture a continuous video stream of the moving object 113. As already described, the moving object 113 may include hot rolling sheets of metal, wood, textile, or paper that come out of equipment, such as the roller 110, and need waiting time to be cooled and coiled. As shown in FIG. 1B, one of the image capturing devices 103a may be mounted on top of the roller 110 or a conveyor belt 114 where the moving object 113 is placed. Another image capturing device 103b may be placed beneath or at the level of the roller 110 or conveyor belt 114. By positioning the two image capturing devices 103a and 103b on top of and beneath the roller 110 or conveyor belt 114 carrying the moving object 113, the maximum number of surfaces 112 of the moving object 113 can be captured by the two image capturing devices 103a and 103b under varying lighting conditions. Alternatively, the image capturing devices 103a, 103b may be placed at any other position relative to the position of the roller 110, such that the image capturing devices 103 have a clear field of view for capturing the moving object 113. Accordingly, it may be worth noting that the surface defect inspection system 100 may include any number of image capturing devices 103 for capturing suitable views of the moving object 113 for inspection.
[0041] The video stream 111 captured by the one or more image capturing devices 103 is then passed to the defect identification system 101, where the plurality of images 135 of the video stream 111 are processed to extract one or more ROIs from the images, which can subsequently be subjected to image enhancement. The enhanced images are then passed through one or more steps of analysis performed by the defect identification system 101 to initially identify images that are indicative of defects on the surfaces 112 of the moving object 113. The images indicative of defects on the surfaces of the moving object 113 are further analysed to identify one or more types of defects existing on one or more surfaces 112 of the moving object 113.
[0042] FIG. 1C shows a block diagram of the defect identification system 101 of FIG. 1A depicting exemplary components that may be configured for identifying one or more defects on surfaces 112 of the moving object 113, in accordance with an embodiment of the present disclosure. As shown in FIG. 1C, the defect identification system 101 may comprise the image processing unit 107 and the memory 129. The image processing unit 107 may further comprise one or more sub-components including, but not limiting to, a receiving unit 115, a similarity detection unit 117, a filtering unit 119, a determination unit 121, a defect-type identification unit 123, an automation unit 125, and a PCA construction unit 127. In an embodiment, each of the one or more sub-components of the image processing unit 107 may be implemented as independent microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or a separate hardware device that performs a designated functionality based on commands from the image processing unit 107. Among other capabilities, the image processing unit 107 may be configured to fetch and execute computer-readable instructions stored in the memory 129. In some embodiments, the image processing unit 107 may be coupled to an input/output (I/O) interface 118. In an embodiment of the present disclosure, the I/O interface 118 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, an input device, an output device, and the like. The I/O interface 118 may allow the image processing unit 107 to interact with other components of the defect identification system 101. Further, the I/O interface 118 may enable the image processing unit 107 to communicate with one or more external devices, such as web servers and external data servers, and the like.
[0043] In an embodiment, the memory 129 associated with the defect identification system 101 may be coupled to the image processing unit 107 and may store data, which may include, without limiting to, type of defects 131, clusters of defects 133, plurality of images 135, and similarity scores 137. The memory 129 may be present on-chip or off-chip of the defect identification system 101. The significance and use of each of the stored data quantities is explained in the upcoming paragraphs of the specification.
[0044] In an embodiment, the receiving unit 115 may be configured to receive the video stream 111 comprising the plurality of images 135 of the moving object 113 from the image capturing devices 103. Either the receiving unit 115 or the similarity detection unit 117 may further be configured to pre-process the received video stream 111. The plurality of images 135 received from the image capturing devices are further processed to extract one or more ROIs from each of the plurality of images 135 and to perform one or more image quality enhancement actions on the extracted ROI of each of the plurality of images 135. The image enhancement process may include, without limiting to, processes for adjusting the brightness, hue, noise, and contrast of the images so that images captured under poor lighting conditions can also present the defects with sufficient clarity. Additionally, as an example, one of the image enhancement processes may include sharpening the image, which enhances the edges and finer details in the image to make it appear sharper and more defined.
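As a non-limiting illustration of this pre-processing step, the following Python sketch crops an assumed ROI and applies brightness/contrast correction, denoising, and sharpening with OpenCV. The ROI coordinates, the contrast gain (alpha), the brightness offset (beta), and the sharpening kernel are illustrative assumptions and are not specified by the disclosure.

    import cv2
    import numpy as np

    def enhance_roi(frame: np.ndarray, roi=(100, 100, 640, 360)) -> np.ndarray:
        """Crop an assumed region of interest and enhance it for defect visibility."""
        x, y, w, h = roi                      # hypothetical ROI placement
        patch = frame[y:y + h, x:x + w]

        # Brightness/contrast correction for frames captured in poor lighting
        # (alpha = contrast gain, beta = brightness offset; values are illustrative).
        patch = cv2.convertScaleAbs(patch, alpha=1.3, beta=20)

        # Light denoising before sharpening so that noise is not amplified.
        patch = cv2.GaussianBlur(patch, (3, 3), 0)

        # Standard 3x3 sharpening kernel to emphasize edges and finer details.
        kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)
        return cv2.filter2D(patch, -1, kernel)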
[0045] Subsequent to image enhancement, the images among the plurality of images 135 that indicate defects present on the surface 112 of the moving object 113 may be filtered or separated from images that indicate defect-free surfaces 112 of the moving object 113. The filtering may be performed by the filtering unit 119. The filtering unit 119 may analyse the ROI of each of the plurality of images 135 using a pre-trained autoencoder model for determining a similarity score 137. The autoencoder model is pre-trained using the reference image frames comprising images of defect-free surfaces of the moving object 113, as described in more detail with reference to FIG. 3. As described previously, the reference image frames are stored in the reference database 105. The similarity score 137 is the output of a comparison of the plurality of enhanced images against the reference image frames. The reference image frames may include images captured from various types of moving objects, such as rolling sheets moving at varying speeds in various industrial environments under varying lighting conditions.
[0046] In one embodiment, the similarity score 137 may be determined by comparing each of the plurality of images 135 with the reference image frames comprising images of defect-free surfaces of the moving object 113. In an example, if the determined similarity score 137 is greater than a predefined threshold (th_auto), the autoencoder model outputs a result of “No Defect”. Otherwise, the autoencoder model outputs a set of images from the plurality of images 135 whose associated similarity score 137 is less than the predefined threshold (th_auto). The set of images whose associated similarity score 137 is less than the predefined threshold (th_auto) may be labelled as images indicating defects on the surface 112 of the moving object 113. This set of images is then filtered from the images labelled as “No Defect”. The filtering unit 119 outputs the set of images from the plurality of images 135 whose associated similarity scores 137 are less than the predefined threshold (th_auto). Only this filtered set of images may subsequently be sent to a further level of analysis, where these images are processed to identify the exact type of defects 131 present on the surface 112 of the moving object 113. Upon receiving the filtered set of images, PCA values associated with the defect-type identifying parameters of each of the filtered set of images are determined. The defect-type identifying parameters are compared against a pre-constructed PCA panel. An exemplary embodiment of the pre-construction of the PCA panel is described in more detail with reference to FIG. 4.
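One plausible realization of this filtering step is sketched below: each image is reconstructed with the trained autoencoder, and the input is scored against its reconstruction using the structural similarity index (SSIM). Whether the disclosure's similarity score is SSIM-based is an assumption, as are the threshold value of 0.9 and the filter_defective helper name.

    from typing import Callable, List, Tuple
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def filter_defective(
        images: List[np.ndarray],
        reconstruct: Callable[[np.ndarray], np.ndarray],  # trained autoencoder forward pass
        th_auto: float = 0.9,                             # assumed similarity threshold
    ) -> Tuple[List[np.ndarray], List[float]]:
        """Keep only images whose similarity score falls below th_auto."""
        defective, scores = [], []
        for img in images:
            recon = reconstruct(img)
            # An autoencoder trained only on defect-free surfaces reconstructs
            # normal frames faithfully; defects show up as reconstruction error.
            score = ssim(img, recon, data_range=float(img.max()) - float(img.min()))
            if score < th_auto:          # poor reconstruction -> likely defect
                defective.append(img)
            scores.append(score)
        return defective, scores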
[0047] Further, the determination unit 121 may be configured to determine the defect-type identifying parameters of the filtered set of images. The defect-type identifying parameters may comprise one or more of a number of discrete contours detected in a corresponding image, a total area that is occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image detected by the image processing unit 107. For example, the number of discrete contours may refer to the total number of contours in the corresponding image. In some embodiments, the determination unit 121 may be configured to determine at least one feature of the defect-type identifying parameters using suitable image processing methods. For example, the at least one feature for the number of contours and the area of the contours may be shape and texture. Further, the at least one feature for average pixel intensity may be colour. In a non-limiting embodiment, for one or more types of defects 131 such as cracks, holes, corrosion, ink smudges, and printing errors in newsprints or other papers, the number of contours formed will be relatively high while the area of those contours may be relatively small.
[0048] Furthermore, in a non-limiting embodiment, values of the defect-type identifying parameters associated with different defect types may vary significantly from one another depending on the shapes, sizes, textures, and colours corresponding to the defect. For example, the average pixel intensity values associated with a defect type such as a scratch may vary significantly from those of another defect type such as corrosion. Table A below displays the average pixel intensities, total numbers of contours, and contour areas of 17 different types of defects that may possibly occur in the moving object 113. The average intensity of pixels corresponding to a particular defect type may be calculated using one or more sub-parameters, namely a mean or average of all colour pixel values, a count or total number of unique colour values within the image, and a standard deviation or colour distribution within the image. Similarly, the other defect-type identifying parameters, such as the total number of contours and the area of the contours, may also vary significantly between defect types based on their varying shapes, sizes, textures, and colours. For example, the defect type “edge damage” may be spread across a larger area on the surface of the moving object 113 and therefore may have more contours but relatively low pixel intensity values: as seen in Table A below, the number of contours varies between 28-35, the associated contour areas vary between 9800-25000, and the pixel intensity values vary between 18-34. A dent mark, by contrast, where the defect is concentrated at one particular region on the surface of the moving object 113, may be characterized by relatively few contours (for example, 7-11), each defined by a relatively larger area (for example, 14000-25060), with very high pixel intensity values (for example, 133-196). The defect-type identifying parameters associated with the different defects mentioned in Table A were experimentally extracted from images with a resolution of 1280×720. (An illustrative computation of these parameters is sketched after Table A.)

Defects                No. of contours    Area of contours    Pixel intensity values
Uncoated               40-45              1150-15436          28-54
Nozzle line mark       12-25              17505-75000         92-134
Edge damage            28-35              9800-25000          18-34
Cissing marks          11-16              12000-18900         70-97
STL marks              15-21              6700-19670          52-94
Feather marks          38-42              5600-16700          32-73
Dent mark              7-11               14000-25060         133-196
Fold mark              12-17              13500-27060         111-176
Edge Blister           19-25              2000-21005          5-13
Pock mark uncoated     44-56              3000-14700          121-156
Edge uncoated          6-10               15000-20000         44-68
Dirt mark              12-15              12000-17600         55-90
Pin uncoated           38-45              4500-15600          146-187
Metal spots            23-29              2300-16700          56-79
Skin pass band         60-78              1200-18900          28-73
Paint flow lines       10-14              18600-30000         75-105
Silver holes           54-65              2800-20070          157-205

Table A: Defect-type identifying parameters
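The three defect-type identifying parameters tabulated above may, for example, be computed with standard OpenCV operations, as in the following sketch. The Otsu binarization step is an assumed pre-processing choice and is not prescribed by the disclosure.

    import cv2
    import numpy as np

    def defect_parameters(gray_roi: np.ndarray) -> dict:
        """Compute the three defect-type identifying parameters for a grayscale ROI."""
        # Binarize so that candidate defect regions stand out from the background
        # (Otsu's method is an assumed choice of threshold).
        _, mask = cv2.threshold(gray_roi, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Discrete contours around candidate defect regions.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        return {
            "num_contours": len(contours),                              # count
            "contour_area": sum(cv2.contourArea(c) for c in contours),  # total area
            "avg_intensity": float(np.mean(gray_roi)),                  # mean pixel value
        }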

[0049] Subsequent to determining the defect-type identifying parameters of the filtered set of images, the PCA values associated with the defect-type identifying parameters of each of the filtered set of images are extracted. The extracted PCA values are then compared against a pre-constructed PCA panel. The construction of the pre-constructed PCA panel is described below.
[0050] The PCA panel is constructed based on one or more image-level features, including the defect-type identifying parameters, associated with an initial set of images, for example, comprising the 17 defects listed in Table A above. The image-level features, including the defect-type identifying parameters associated with the initial set of images, are determined and fed to a PCA algorithm. The PCA algorithm is an unsupervised learning technique for reducing the dimensionality of data. Upon the images being fed to the PCA algorithm, clusters of the defects are created based on their similarity, as shown in Table B below. For example, consider an image set of 100 images, of which 40 images indicate line defects on the surface 112 of the moving object 113, 50 images indicate dent defects, and 10 images indicate another type of defect. Upon feeding these 100 images to the PCA algorithm, the image-level features will initially be created for all 100 images. The image-level features may include the defect-type identifying parameters, such as the number of contours, the area of the contours, and the average intensity of pixels, corresponding to the 100 images. All the features extracted from these 100 images are placed in a 2-Dimensional (2D) vector space. The features corresponding to the 40 images showing the presence of line defects will be closer to one another in the 2D vector space, as the image-level features of these 40 images are the same or similar. Hence, a cluster for line defects is formed in the vector space. Similarly, different clusters for dents and other types of defects are formed. In this way, various clusters of defects 133 are created. The different clusters of defects 133 form the PCA panel. Table B below provides an exemplary set of PCA panel values including seven (7) different clusters of defects derived from the 17 types of defects listed in Table A. The present disclosure uses such a pre-constructed PCA panel to analyse the set of images showing defective surfaces of the moving object 113, since it increases interpretability of the images and, at the same time, minimizes loss of information. In other words, the present disclosure describes use of the PCA panel to compute PCA values corresponding to various defects in the images. (An illustrative construction sketch follows Table B.)

Defect Name PCA 1 PCA 2
Crack 250.38 98.22
Dent 200.23 34.67
Corrosion 134.78 45.67
Scratch 167.98 23.89
Patches 141.95 32.91
Hole 208.97 56.98
Fold Mark 234.09 25.96

Table B: Set of PCA values
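The following non-limiting sketch shows one way such a panel could be pre-constructed: parameter vectors from an initial image set are projected onto two principal components and clustered, and the cluster centroids are retained as the panel. The synthetic stand-in data, the use of KMeans, and the choice of seven clusters (mirroring Table B) are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in for defect-type identifying parameters extracted from an initial
    # image set: columns are [num_contours, contour_area, avg_intensity].
    features = rng.uniform([5, 1000, 5], [80, 30000, 210], size=(100, 3))

    # Project the 3-D parameter vectors onto 2 principal components (PCA 1, PCA 2).
    pca = PCA(n_components=2)
    points = pca.fit_transform(features)

    # Group nearby points; each cluster is taken to represent one defect type.
    kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(points)

    # The panel is the set of cluster centroids in (PCA 1, PCA 2) space.
    pca_panel = {f"cluster_{k}": tuple(c) for k, c in enumerate(kmeans.cluster_centers_)}

At inference time, the parameter vector of a new image would be projected with the same fitted pca object before being compared against the panel centroids.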

[0051] The defect-type identification unit 123 may be configured to compare the PCA values of each of the defect-type identifying parameters associated with the filtered set of images with pre-computed PCA values of the plurality of clusters 133. The pre-computed PCA values may include the PCA values of the defect-type identifying parameters of the various clusters of defects, that is, the values based on which the PCA panel is built. The pre-constructed PCA panel, thus, includes the plurality of clusters 133, where each cluster of the plurality of clusters 133 represents one type of defect. Accordingly, in one embodiment, the PCA construction unit 127 may define the plurality of clusters 133. During inference, when a new image is received from the filtering unit 119 and the features and/or defect-type identifying parameters are derived, the features associated with the new incoming image will be compared against the pre-constructed PCA panel and a label will be assigned accordingly. Based on the label, the defect type may be inferred. The defect-type identification unit 123 identifies the one or more types of defects 131 in the set of images based on the association between the defect-type identifying parameters and the plurality of clusters 133. In a non-limiting embodiment, the one or more types of defects 131 may include, without limiting to, at least one of dents, cracks, rust, patches, holes, corrosion, fold marks, ink smudges, and printing errors in newsprint, and the like.
[0052] In one or more scenarios, when the moving object encounters one or more of the setbacks associated with the manufacturing process, more than one type of defect may be concentrated in one particular region of the surface of the moving object, as opposed to a single type of defect. For example, due to excessive stretching of the moving object, one particular region may suffer both visible scratches and fold marks. In such scenarios, one or more image frames may present such defects. When these frames are passed to the defect type identification unit 123, the comparison of these images with the PCA panel may not be able to output a defect type 131, since the PCA panel may only be equipped with PCA values of defect-type identifying parameters associated with individual defects, such as line defects, scratches, dents, etc., and not a combination of different defect types, such as a combination of a line defect and a dent presented in a single image. Such a combination of defects may be treated or viewed as a new defect not available in the clusters of defects presented in Table B. In other words, for a combination of defects, one or more of the defect-type identifying parameters associated with the image comprising the combination of defects may correspond to at least two clusters in the plurality of clusters 133.
[0053] In such scenarios, the defect type identification unit 123 may be configured to apply a custom logic to identify the combination of defects in the image. In particular, when one or more of the defect-type identifying parameters are associated with at least two clusters of the plurality of clusters 133, the defect-type identification unit 123 may be configured to identify the one or more types of defects 131 by calculating a Euclidean distance of at least one of the defect-type identifying parameters from the centroid of each of the plurality of clusters 133. Further, the defect-type identification unit 123 ranks each of the plurality of clusters 133 based on the Euclidean distance and identifies a combination of the one or more types of defects 131 in the set of images based on the association of the at least one defect-type identifying parameter with one or more clusters selected from the ranked plurality of clusters 133.
[0054] As described above, the image comprising the combination of defects will be processed to extract new features or defect-type identifying parameters associated with the image. These new features may be matched against one or more defect clusters. Based on the match, the image may be positioned in a 2D vector space such that it is closer to two or more defect clusters. Next, a Euclidean distance may be measured from the centroid of each of the clusters to the position where the new image lies in the vector space. The clusters at the shortest Euclidean distances are the most closely matched clusters, which may be provided as the final output labels, that is, suitable defect names.
[0055] In an exemplary embodiment, if the PCA values of the image match any of the PCA value combinations listed in Table B, then the defect-type identification unit 123 may identify that there exists a single defect, and the corresponding defect name may be given as output (without any distance calculation). Alternatively, if the PCA values of the image do not match any PCA value combinations listed in Table B, then the defect-type identification unit 123 may identify that there exists a combination of defects. In order to identify the defect type in this scenario, the defect-type identification unit 123 may calculate the Euclidean distance between the point representing the image and the centroid of each cluster in 2D space. Thereafter, the defect-type identification unit 123 may consider the top one or more clusters whose centroids are closest to this point as the output defect types.
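The single-versus-combination logic of paragraphs [0053]-[0055] may be sketched as follows, using the panel values of Table B as cluster centroids. The matching tolerance and the choice of reporting the two nearest clusters for a combination are illustrative assumptions.

    import numpy as np

    # PCA panel from Table B: defect name -> (PCA 1, PCA 2) centroid.
    PANEL = {
        "Crack": (250.38, 98.22), "Dent": (200.23, 34.67),
        "Corrosion": (134.78, 45.67), "Scratch": (167.98, 23.89),
        "Patches": (141.95, 32.91), "Hole": (208.97, 56.98),
        "Fold Mark": (234.09, 25.96),
    }

    def identify_defect(pca_point, tol: float = 5.0) -> list:
        """Return one defect name on a near-exact match, else the two nearest clusters."""
        p = np.asarray(pca_point, dtype=float)
        # Euclidean distance from the image's PCA values to every cluster centroid.
        dists = {name: float(np.linalg.norm(p - np.asarray(c)))
                 for name, c in PANEL.items()}
        ranked = sorted(dists, key=dists.get)
        if dists[ranked[0]] <= tol:       # close enough to a single cluster
            return [ranked[0]]
        return ranked[:2]                 # otherwise report a combination of defects

    print(identify_defect((201.0, 35.1)))   # -> ['Dent'] (single defect)
    print(identify_defect((220.0, 45.0)))   # -> two nearest clusters (combination)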
[0056] FIGS. 2A-2C show a flowchart depicting a method 200 for identifying one or more defects on surfaces 112 of a moving object 113, in accordance with an embodiment of the present disclosure. At block 201, a video stream 111 comprising a plurality of images 135 of the moving object 113 captured by one or more image capturing devices 103 is received by the receiving unit 115 from the image capturing devices 103. When captured, these images may not be of optimal quality due to varying lighting conditions in a manufacturing plant, as a result of which it may be difficult to identify whether a defect exists. Further, one or more parameters of the image capturing devices may not be at the preferred or required level, due to which the quality of the plurality of images may be poor. Accordingly, the quality of the received images may need to be enhanced. Further, in certain scenarios the images may capture a wider area of the moving object 113. However, only a particular region on the surface 112 of the moving object 113 may be of interest and may require subsequent enhancement. For example, the particular region may be of interest because a defect may be present in that region on the surface 112 of the moving object 113. Therefore, these received images may need to further undergo extraction of an ROI and image enhancement. At block 203, an ROI from each of the plurality of images 135 is extracted, and subsequently at block 205, an image quality enhancement action is performed on the extracted ROI of each of the plurality of images 135. The image enhancement process at block 205 may include, without limiting to, a process for enhancing one or more of the brightness, hue, and contrast of the images so that the defects captured in the images are clearly visible. Additionally, as an example, one of the image enhancement processes may include sharpening the image, which enhances the edges and finer details in the image to make it appear sharper and more defined. The ROI extraction and image quality enhancement may be performed by either the receiving unit 115 or the similarity detection unit 117.
[0057] After the plurality of images received from the image capturing devices 103 are subjected to image enhancement, the enhanced images may be analysed, and the images indicating defective surfaces 112 of the moving object 113 may be identified and separated from images comprising defect-free surfaces 112 of the moving object 113. At block 207, the pre-trained autoencoder model may compare the images against a set of reference image frames that indicate defect-free surfaces 112 of the moving object 113. At block 209, similarity scores 137 associated with each of the plurality of images 135 may be determined by the autoencoder model by comparing each of the plurality of images 135 with the reference image frames comprising images of defect-free surfaces 112 of the moving object 113. The output of the autoencoder model is a filtered set of images whose associated similarity scores 137 are less than a pre-defined threshold (th_auto). At block 211, it is determined whether the similarity scores 137 of the images 135 are more than the pre-defined threshold. If the similarity scores 137 of one or more images are determined to be more than the pre-defined threshold, then at block 213, the one or more images are identified as images corresponding to defect-free surfaces 112 of the moving object 113.
[0058] If the similarity scores 137 of a set of images are determined to be less than the pre-defined threshold, then at block 215, the set of images is output as a filtered set of images for a further level of processing to identify the defect type. As an example, suppose the pre-defined similarity threshold (th_auto) value is 0.9. Suppose 5 images out of 10 have similarity score values of 0.34, 0.67, 0.72, 0.55, and 0.38, and the remaining 5 images have similarity score values of 0.9, 0.91, 0.92, 0.93, and 0.97. Here, the first set of images, whose similarity scores 137 are less than the predefined similarity threshold (th_auto) of 0.9, may be considered images capturing a defective surface 112 of the object 113. Similarly, the other set of images, whose similarity scores 137 are greater than or equal to the predefined similarity threshold (th_auto) of 0.9, may be considered images of a defect-free surface 112 of the object 113. The autoencoder model may output the first set of images, whose similarity scores 137 are less than the predefined similarity threshold (th_auto) of 0.9, for a further level of processing to identify the defect type.
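The worked example above reduces to a simple threshold filter; a minimal check, assuming the score values listed (the garbled third score is taken to be 0.92):

    th_auto = 0.9
    scores = [0.34, 0.67, 0.72, 0.55, 0.38, 0.90, 0.91, 0.92, 0.93, 0.97]

    defective = [s for s in scores if s < th_auto]      # below threshold -> defect suspected
    defect_free = [s for s in scores if s >= th_auto]   # at/above threshold -> defect-free
    print(defective)    # [0.34, 0.67, 0.72, 0.55, 0.38]
    print(defect_free)  # [0.9, 0.91, 0.92, 0.93, 0.97]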
[0059] Upon receiving the filtered set of images from the autoencoder model, at block 217, defect-type identifying parameters of each of the filtered set of images are determined. At block 219, a set of PCA values for each of the defect-type identifying parameters determined at block 217 are computed by analysing each of the set of images using the pre-constructed PCA panel. These extracted PCA values of the defect-type identifying parameters are then compared against pre-computed PCA values associated with the plurality of clusters 133 comprised in the pre-constructed PCA panel at block 221. Subsequently, at block 223, it is determined whether the extracted PCA values of the defect-type identifying parameters associated with the defects identified from the filtered images match the pre-computed PCA values associated with any of the clusters among the plurality of clusters 133. If it is determined that the extracted PCA values of the defect-type identifying parameters match the pre-computed PCA values associated with any of the clusters among the plurality of clusters 133, then at block 225, the defect type is identified based on the matched cluster, that is, the defect type is the defect that is represented by the matched cluster.
[0060] Subsequently, at block 231, an automated action may be performed on the moving object based on the one or more types of defects that are identified on surfaces of the moving object 113. For example, the automated action may include grading of the moving object to accurately represent the quality of the moving object 113. Additionally or alternatively, the automated action may include transporting the moving object 113 along an alternative path in the manufacturing unit and positioning the moving object 113 at a cutting station for cutting parts of the moving object 113 where the defects may exist, among others. It may be noted that the type of automated action may be intelligently selected based on a severity of the defect that has been identified. Grading the quality of the moving object, for example, may include classifying the moving object as one of a low-quality object, a medium-quality object, or a high-quality object. In an embodiment, any other grading metrics known in the industry may be used to automatically grade the quality of the moving object. Additionally or alternatively, the defect identification system 101 may include a display unit 109 for continuously displaying a number or frequency of identified defects along with any other information related to the identified defects. The display unit 109 may also be capable of displaying the probable grade that the moving object 113 may be classified into.
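As an illustration only, a severity-based selection of the automated action might look like the following sketch; the grade labels, severity levels, and routing targets are assumptions, since the disclosure leaves the grading metric and the choice of action open.

def select_automated_action(defect_types, severity):
    # Illustrative dispatch for the automation unit 125: map identified
    # defect types and an assumed severity level to a grade and a route.
    if not defect_types:
        return {"grade": "high-quality", "route": "default_path"}
    if severity == "high":
        return {"grade": "low-quality", "route": "cutting_station"}
    if severity == "medium":
        return {"grade": "medium-quality", "route": "labelling_station"}
    return {"grade": "high-quality", "route": "collection_station"}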
[0061] In one or more scenarios, when the moving object 113 encounters one or more of the setbacks associated with the manufacturing process, more than one type of defect may be concentrated in one particular region of the surface 112 of the moving object 113, as opposed to a single type of defect. When the image frames corresponding to the combination of defects are passed to the defect type identification unit 123, the comparison of these images with the PCA panel may not be able to output a defect type 131. Accordingly, at block 223, if it is determined that the extracted PCA values of the defect-type identifying parameters do not match the pre-computed PCA values associated with any of the clusters, then at block 227, Euclidean distances between the defect-type identifying parameters of the images whose PCA values do not match any of the clusters and the centroids of each of the plurality of clusters 133 are calculated. At block 229, the defect type is identified based on the two or more shortest Euclidean distances; the clusters at the two or more shortest Euclidean distances from the defect-type identifying parameters of the images are the most closely matched clusters. Subsequently, the method 200 may move on to the step at block 231, where the automated action is performed on the moving object 113 based on the one or more types of defects that are identified on surfaces of the moving object 113. The automated action may be performed by the automation unit 125 (shown in FIG. 1C). To that end, the automation unit 125, for example, may include a transport subsystem including robotic arms or mechanical diversions mounted relative to the conveyor belt 114 to divert the path of the defective object 113 to an associated cutting, labelling, or collection station in the manufacturing facility for suitable further processing.
[0062] FIG. 3 shows a flowchart depicting a method 300 of training the autoencoder model, in accordance with an embodiment of the present disclosure. The autoencoder model is an unsupervised learning technique for neural networks that learns efficient data representations (encodings) by training the network to ignore signal “noise”. The autoencoder model can be used for image denoising, image compression, and, in some cases, even generation of image data. In accordance with the present disclosure, to train the autoencoder model, at block 301, a plurality of reference image frames comprising images of defect-free surfaces of the moving object 113 may be fed to the autoencoder model. The reference image frames may be collected from various sources, including images of the moving object from the manufacturing plant, images from similar setups in other manufacturing plants, and images of moving objects being carried by conveyor belts, as opposed to rollers, in similar industrial setups. The reference image frames may include images that comprise defect-free surfaces of moving objects under varying lighting conditions. The reference image frames comprising defect-free surfaces of the moving object 113 may be augmented to increase the dataset size and add variety (such as different orientations of the object, lighting conditions, etc.) to enhance the dataset. At block 303, the autoencoder model may be trained on the image dataset that comprises defect-free surfaces of the moving object 113. At block 305, the trained autoencoder model sets a pre-defined threshold for similarity between images based on the training. At block 307, the autoencoder model may then receive new test images representing moving objects. It shall be noted that these new test images comprise defective as well as defect-free surfaces of moving objects. At block 309, a similarity score is determined for each of the new test images and compared against the pre-defined threshold. As already described, if the similarity score of a particular test image is greater than the pre-defined threshold, the image is considered to be similar in appearance to the images that comprise defect-free surfaces of the moving object. At block 311, the new test images are then labelled either with a defect-free label or a defective label based on their similarity scores. It may be noted that every time a new image is received by the autoencoder model, the image is subjected to the process at block 311 and finally labelled. In one embodiment, the pre-defined threshold (th_auto) may be the same for all types of defects. However, in some embodiments, the pre-defined threshold (th_auto) may take a different value for different types of defects.
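A minimal training sketch for method 300 follows, using Keras. The small convolutional architecture, the use of mean squared reconstruction error, and the percentile heuristic for deriving th_auto at block 305 are all assumptions; the disclosure does not specify the network architecture or how the threshold is derived from training.

import numpy as np
import tensorflow as tf

def build_autoencoder(h, w):
    # A deliberately small convolutional autoencoder (assumes h and w
    # are even so that pooling and upsampling restore the input shape).
    inp = tf.keras.Input(shape=(h, w, 1))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = tf.keras.layers.MaxPooling2D(2, padding="same")(x)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.UpSampling2D(2)(x)
    out = tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def train_and_set_threshold(reference_frames, epochs=20):
    # Blocks 301-305: train only on defect-free reference frames
    # (float32 array in [0, 1], shape (n, h, w, 1)), then derive th_auto
    # from the scores the model assigns to known-good frames.
    model = build_autoencoder(reference_frames.shape[1], reference_frames.shape[2])
    model.fit(reference_frames, reference_frames, epochs=epochs, batch_size=32)
    recon = model.predict(reference_frames)
    mse = np.mean((reference_frames - recon) ** 2, axis=(1, 2, 3))
    scores = 1.0 / (1.0 + mse)
    th_auto = float(np.percentile(scores, 5))  # 5th percentile: a heuristic
    return model, th_auto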
[0063] Further, FIG. 4 shows a flowchart depicting a method 400 of constructing a Principal Component Analysis (PCA) panel, in accordance with an embodiment of the present disclosure. The steps of the method 400 may be performed using a PCA algorithm at the PCA construction unit 127. At block 401, the PCA algorithm receives an initial set of images that indicate defective surfaces of one or more moving objects. The PCA algorithm is an unsupervised learning technique for reducing the dimensionality of data.
[0064] At block 403, one or more image-level features, including the defect-type identifying parameters associated with the initial set of images, are extracted by the PCA algorithm. At block 405, similar image-level features associated with the initial set of images are identified, and at block 407 the similar image-level features are positioned close to one another in a 2D vector space. Similar image-level features extracted from the images may indicate that the same or a similar defect is present on the surface of the moving object 113 as seen in the images. For example, if the PCA algorithm receives 50 images which indicate dents being present on the surface of the moving object, the defect-type identifying parameters along with other image-level features of these 50 images will first be extracted. The defect-type identifying parameters and the other image-level features of these 50 images may be similar, and thus the features corresponding to these 50 images will appear close to each other, or even overlap with one another, in the 2D vector space. Such overlapping points, or points close to one another, will be grouped into one cluster and the cluster will be suitably labelled.
[0065] At block 409, a plurality of clusters 133 of defects is created such that each cluster comprises the defects that are positioned close to one another in the 2D vector space. At block 411, a PCA panel comprising the plurality of clusters of defects 133 is generated. An exemplary list of clusters of defects is presented in Table B. If the PCA values of the filtered set of images match any of the PCA value combinations listed in Table B, then the defect-type identification unit 123 may identify that there exists a single defect, and the corresponding defect name may be given as output.
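A minimal Python sketch of this panel construction (blocks 401-411) follows, with scikit-learn's PCA and KMeans standing in for the 2D projection and the clustering; the use of KMeans and the feature layout are assumptions, as the disclosure only states that nearby points are grouped into labelled clusters.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def build_pca_panel(feature_matrix, n_defect_types):
    # feature_matrix: (n_images, n_features) array of defect-type
    # identifying parameters plus other image-level features extracted
    # from the initial set of defective images (blocks 401-403).
    pca = PCA(n_components=2)                      # the 2D vector space
    points_2d = pca.fit_transform(feature_matrix)  # blocks 405-407
    km = KMeans(n_clusters=n_defect_types, n_init=10).fit(points_2d)  # block 409
    return {"pca": pca, "centroids": km.cluster_centers_, "labels": km.labels_}

The returned projection and centroids can then serve the matching step of blocks 217-225 and the distance-based custom logic described below.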
[0066] Alternatively, if the PCA values of certain images in the filtered set of images do not match any PCA value combinations listed in Table B, then the defect-type identification unit 123 may apply a custom logic to these images. The custom logic may identify that there exists a combination of defects, or multiple defects concentrated in one particular region of the surface of the moving object. In order to identify the defect type in this scenario, the defect-type identification unit 123 may compare the defect-type identifying parameters of these images with one or more clusters of the plurality of clusters 133. Based on the comparison, the images may be positioned in the vector space such that they lie close to one or more defect-type clusters. Next, Euclidean distances may be measured from the centroids of the one or more clusters to the position where the defect-type identifying parameters associated with these images lie in the vector space. The clusters at the two or more shortest Euclidean distances are the most closely matched clusters, and their defect names may be provided as the final output label.
[0067] In an embodiment, the PCA panel may be capable of identifying a combination of defects, as the features corresponding to different defects may vary significantly. In an example, in the case of the defect types ‘rust’ and ‘dents’, the two main features that vary significantly are colour and shape. Most of the feature representation for the rust defect will be contributed by the colour of the surface, whereas for the dent it will be contributed by shape. Further, in the case of ‘scratches’ and ‘dents’, the two main features that vary significantly are texture and shape. Most of the feature representation for the scratch defect will be contributed by the texture of the surface, whereas for the dent it will be contributed by shape and texture. Since multiple image-level features are being extracted, the overall feature representation will have a combination of different percentages of individual features for each defect, thereby making each kind of defect statistically distinguishable, so that the PCA panel can learn the pattern for each kind of defect. When there is a combination of defects, the defect-type parameter representation derived will be completely new and different from the existing clusters listed in Table B, since the percentage contribution varies in this instance. For example, if both ‘rust’ and ‘dents’ are present in the same image, the defect-type parameter representation may be contributed prominently by both the colour and the shape of the object. Therefore, this representation will be distinct from the existing clusters, and hence it will be mapped to the closest clusters based on the Euclidean metric.
[0068] FIG. 5A illustrates a graphical representation of an exemplary PCA panel, which is constructed by analysing a plurality of images. The PCA panel comprises 5 clusters of defects, and each cluster may represent one type of defect. In other words, the 5 clusters in the PCA panel represent 5 different defects. For instance, the types of defect in the 5 clusters of defects may include cracks, dents, scratches, holes and patches. A person skilled in the art will note that any number of clusters may be formed based on the number of types of defects identified. Any new set of images extracted from the video stream 111 will first be analysed by the autoencoder model and, based on the analysis, only those images which indicate a defect being present on the surface of the moving object 113 will be passed to the PCA panel to identify the type of defect. The PCA panel will identify the type of defect on the surface of the moving object 113 as long as the number of contours, the area of the contours, and the average pixel intensity associated with the images match any of the clusters of defects, for example as listed in Table B. The defect-type identifying parameters associated with these images may subsequently be positioned in the 2D vector space in the cluster with which they closely match. However, if the number of contours, the area of the contours, and the average pixel intensity of the images do not match any of the clusters of defects, the defect-type identifying parameters associated with these images are then positioned in the 2D vector space such that they are not close to just one cluster but may be adjacent to one or more clusters of defects. Next, Euclidean distances may be measured from the centroids of the one or more clusters to the position where the defect-type identifying parameters associated with these images lie in the 2D vector space. The clusters at the two or more shortest Euclidean distances are the most closely matched clusters, and their defect names may be provided as the final output label. Following are some exemplary scenarios representing four different situations of defect identification based on the PCA panel illustrated in FIG. 5A.
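The three defect-type identifying parameters named here can be extracted with standard image operations. The following sketch uses OpenCV; the Otsu binarisation step is an assumption, since the disclosure does not fix a thresholding method for isolating contours.

import cv2
import numpy as np

def defect_type_parameters(gray_image):
    # Extract the three defect-type identifying parameters from an
    # 8-bit grayscale image: number of discrete contours, total area
    # occupied by the contours, and average pixel intensity.
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    n_contours = len(contours)
    total_area = sum(cv2.contourArea(c) for c in contours)
    mean_intensity = float(np.mean(gray_image))
    return np.array([n_contours, total_area, mean_intensity], dtype=np.float32)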
[0069] In a first scenario, images indicating defect-free surfaces of the moving object 113 may be analysed for defect identification. The following steps are performed by the defect identification system 101. The video stream 111 comprising a plurality of images 135 of the moving object 113 is received from the image capturing devices 103. The step of receiving the video stream 111 may be performed by the receiving unit 115. An ROI is selected from the plurality of images and enhanced in each of the plurality of images 135 for better identification of the defects. The step of selection of the ROI may be performed by the similarity detection unit 117. The plurality of images 135, which are now enhanced, are fed into the autoencoder model for determining the similarity score 137 of each of the plurality of images 135. The step of determining the similarity score 137 may be performed by the similarity detection unit 117. The presence of defects on the surface of the moving object 113 is determined from the plurality of images 135 based on the similarity score 137 matching a predefined threshold. If the similarity scores 137 of one or more images are greater than the predefined threshold, these images are identified as images where the moving object has no defect, and the autoencoder will output a “No Defect” message. Additionally, the images will also be labelled as “No Defect”. The step of determining the presence of defects may be performed by the defect-type identification unit 123. For example, suppose the predefined threshold is 0.9 and the similarity score 137 for a given image is 0.91. Since the similarity score 137 is greater than the similarity threshold, the output of the autoencoder model will be “NO DEFECT”. The processing at the first-level analysis then ends, as no defect is identified on the surface of the moving object 113 in the given images.
[0070] In a second scenario, images indicating a defect, such as a “Dent”, on the surface of the moving object 113 may be identified based on analysis of the plurality of images 135. The following steps are performed by the defect identification system 101. The video stream 111 comprising a plurality of images 135 of the moving object 113 is received from the image capturing devices 103. The step of receiving the video stream 111 may be performed by the receiving unit 115. An ROI from the plurality of images is selected and enhanced in each of the plurality of images 135 for better identification of the defects. The step of selection of the ROI may be performed by the similarity detection unit 117. The plurality of images 135, which are now enhanced, are fed into the autoencoder model for determining the similarity score 137 of each of the plurality of images 135. The step of determining the similarity score 137 may be performed by the similarity detection unit 117. The presence of defects on the surface of the moving object 113 is determined from the plurality of images 135 based on the similarity score 137 matching a predefined threshold. In an example, the similarity score 137 of an image is found to be 0.4, which is less than the predefined threshold (th_auto) of 0.9. The similarity score of 0.4 indicates the presence of defects on the surface of the moving object 113. The step of determining the presence of defects may be performed by the defect-type identification unit 123. Subsequently, the image undergoes a second level of analysis. During the second-level analysis, the image comprising the defective surface is fed to the pre-constructed PCA panel, which consists of a plurality of defect clusters 133. The output of the PCA panel will be the defect-type identifying parameters corresponding to the image being positioned adjacent to the cluster type “Dent” in the pre-constructed PCA panel. The defect indicated in the images is identified as ‘Dent’.
[0071] In a third scenario, images indicating a defect, such as “Rust”, on the surface of the moving object 113 may be identified based on analysis of the plurality of images 135. The following steps are performed by the defect identification system 101. The video stream 111 comprising a plurality of images 135 of the moving object 113 is received from the image capturing devices 103. The step of receiving the video stream 111 may be performed by the receiving unit 115. An ROI from the plurality of images is selected and enhanced in each of the plurality of images 135 for better identification of the defects. The step of selection of the ROI may be performed by the similarity detection unit 117. The plurality of images 135, which are now enhanced, are fed into the autoencoder model for determining the similarity score 137 of each of the plurality of images 135. The step of determining the similarity score 137 may be performed by the similarity detection unit 117. The presence of defects on the surface of the moving object 113 is determined from the plurality of images 135 based on the similarity score 137 matching a predefined threshold. In an example, the similarity score 137 of an image is found to be 0.67, which is less than the predefined threshold (th_auto) of 0.9. The similarity score of 0.67 indicates the presence of a defect on the surface of the moving object 113. The step of determining the presence of defects may be performed by the defect-type identification unit 123. The image is passed to a second level of analysis. During the second-level analysis, the image comprising the defective surface is fed to the pre-constructed PCA panel, which consists of a plurality of defect clusters 133. The output of the PCA panel will be the defect-type identifying parameters corresponding to the image being positioned adjacent to the cluster type “Rust” in the pre-constructed PCA panel. Subsequently, the defect indicated in the images is identified as ‘Rust’.
[0072] In a fourth scenario, images indicating a combination of two or more defects on the surface of the moving object 113 may be identified based on analysis of the plurality of images 135. The following steps are performed by the defect identification system 101. The video stream 111 comprising a plurality of images 135 of the moving object 113 is received from the image capturing devices 103. The step of receiving the video stream 111 may be performed by the receiving unit 115. An ROI from the plurality of images is selected and enhanced in each of the plurality of images 135 for better identification of the defects. The step of selection of the ROI may be performed by the similarity detection unit 117. The plurality of images 135, which are now enhanced, are fed into the autoencoder model for determining the similarity score 137 of each of the plurality of images 135. The step of determining the similarity score 137 may be performed by the similarity detection unit 117. The presence of defects on the surface of the moving object 113 is determined from the plurality of images 135 based on the similarity scores 137 matching a predefined threshold. In an example, the similarity score 137 of an image is found to be 0.71, which is less than the predefined threshold (th_auto) of 0.9. The similarity score of 0.71 indicates the presence of defects on the surface of the moving object 113. The step of determining the presence of defects may be performed by the defect-type identification unit 123. The image is passed to a second level of analysis. During the second-level analysis, the image comprising the defective surface is fed to the pre-constructed PCA panel, which consists of a plurality of defect clusters 133. In this case, the output will not have any cluster name, because the new defect-type identifying parameters do not belong to any cluster in the PCA panel. A custom logic is applied to identify the combination of defects in this scenario. The custom logic is explained in detail with reference to FIG. 5B.
[0073] FIG. 5B depicts an exemplary scenario for identifying a combination of defects using the custom logic on any image for which the combination of defects cannot be identified by the pre-constructed PCA panel shown in FIG. 5A. As an example, in FIG. 5B, only three clusters are represented for identifying the combination of defects, where each cluster represents an individual defect. As an example, the defect types represented by the clusters in FIG. 5B include dents, lines and pittings. Further, each cluster is represented by a particular pattern or symbol (hollow circle, filled circle, and cross mark). For example, dent is represented by a hollow circle, line is represented by a filled circle, and pitting is represented by a cross mark. In certain scenarios, a combination of defects including lines and dents may not be identified by the PCA panel, as in the previously described fourth scenario.
[0074] As shown in FIG. 5B, the centroids of each of the clusters are represented as C1, C2, C3. For example, C1 is the centroid of the cluster that represents dent, C2 is the centroid of the cluster that represents line and C3 is the centroid of the cluster that represents pitting. The centroids of the clusters may be determined, for example, using equation (1):

Ci = (1/|Ni|) Σ x        (1)

where |Ni| is the number of data points in cluster i and the summation runs over the data points x = [(x1, y1), (x2, y2), (x3, y3), …, (xn, yn)] of that cluster in 2D space. As an output of the PCA panel, the combination of defects, that is, line and dent, may be positioned adjacent to the two clusters representing lines and dents. For identifying the combination of defects, the defect-type identification unit 123 may be configured to calculate the distances from the centroids C1, C2 and C3 to the position where the combination of defects lies in the 2D space. The distances, respectively represented by l1, l2, and l3, are measured using the Euclidean distance metric given by equation (2):

d = √((x2 − x1)² + (y2 − y1)²)        (2)

where (x1, y1) and (x2, y2) are any two points in 2D space.

[0075] The clusters corresponding to the two or more shortest Euclidean distances may be considered as the clusters to which the combination of defects belongs. In the case of dents and lines, l1 and l2 are the shortest, and the output may be provided as “Line” and “Dent”. Finally, an output with the defect types will be provided at the end of the processing by the custom logic. Similarly, if there is a combination of line defects and pittings, the shortest distances in this case may be l5 and l6. Therefore, the outputs may be provided as “Line” and “Pitting”.
[0076] Further, the plurality of clusters 133 is ranked based on the measured Euclidean distances. As an example, the cluster with the shortest Euclidean distance from the defect-type identifying parameters of the combination of defects may be ranked at the top, while the cluster with the longest Euclidean distance may be ranked at the bottom. A combination of the one or more types of defects in the set of images may be identified based on the association of the defect-type identifying parameters with one or more clusters selected from the ranked plurality of clusters 133.
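A compact sketch of this custom logic follows, combining equations (1) and (2) with the ranking step; the function and variable names are illustrative.

import numpy as np

def combination_of_defects(point_2d, cluster_points, cluster_labels):
    # Equation (1): centroid of each cluster from its 2D data points.
    centroids = np.array([np.mean(pts, axis=0) for pts in cluster_points])
    # Equation (2): Euclidean distance from the projected defect-type
    # identifying parameters to every centroid (l1, l2, l3, ...).
    dists = np.linalg.norm(centroids - np.asarray(point_2d), axis=1)
    ranked = np.argsort(dists)   # shortest Euclidean distance ranked first
    # The two nearest clusters name the combination, e.g. ["Line", "Dent"].
    return [cluster_labels[i] for i in ranked[:2]]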
[0077] After the one or more types of defects 131 are identified, the automation unit 125 may be configured to perform an automated action. The automated action may include, without limitation, automatically grading the quality of the moving object 113 and removing portions of the moving object 113 comprising the defects. In some embodiments, grading the quality of the moving object 113 by performing the automated action comprises classifying the moving object 113 as one of a low-quality object, a medium-quality object, or a high-quality object. In an exemplary embodiment, when the moving object 113 is a metal and is classified as a low-quality object with scratches and dents, the portion of the metal surface or metal sheet with the defect may be removed or cut off. In another exemplary embodiment, when the moving object 113 is a textile or cloth and is classified as a medium-quality object having minor holes or paint marks, the cloth may be set aside for repainting or other uses. A person skilled in the art will appreciate that these are mere examples and cannot be construed to limit the scope of the invention.
[0078] In this manner, the method and system proposed by the present disclosure help to identify defects, as well as the one or more types of defects 131, using a two-step analysis, which requires less computation time, involves reduced complexity, and dispenses with the need for expensive and complex sensors. Further, the method and system of the present disclosure perform one or more automated actions subsequent to the identification of defects without any human intervention.
[0079] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be clear that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be clear that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may alternatively be embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0080] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
[0081] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

LIST OF NUMERAL REFERENCES:

100 System
101 Defect identification system
103 Image capturing devices
105 Reference database
107 Image processing unit
108 Communication network
109 Display
110 Roller
111 Video stream
112 Surfaces
113 Moving object
114 Conveyor belt
115 Receiving unit
117 Similarity detection unit
118 I/O Interface unit
119 Filtering unit
121 Determination unit
123 Defect-type identification unit
125 Automation unit
127 PCA construction unit
129 Memory
131 Type of defects
133 Clusters
135 Plurality of images
137 Similarity scores

Claims:

We Claim:

1. A method for identifying one or more defects on one or more surfaces (112) of an object (113), comprising:
receiving a video stream (111) comprising a plurality of images (135) of the object (113) from one or more image capturing devices (103);
comparing each of the plurality of images (135) in the video stream (111) with one or more reference image frames, comprising images indicating one or more defect-free surfaces of the object (113), to determine a similarity score (137) for each of the plurality of images (135);
filtering a set of images from the plurality of images (135) whose associated similarity score (137) is less than a predefined threshold, wherein each of the set of images include an image indicating one or more defective surfaces (112) of the object (113);
determining one or more defect-type identifying parameters from each of the filtered set of images, wherein the defect-type identifying parameters comprise one or more of a number of discrete contours in a corresponding image, a total area occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image;
identifying one or more types of defects (131) on one or more of the surfaces of the object (113) based on corresponding values of the one or more defect-type identifying parameters determined from each of the set of images; and
performing an automated action based on the one or more types of defects (131), wherein the automated action comprises one or more of automatically grading a quality of the object (113) and automatically removing one or more portions of the object (113) comprising one or more of the identified defects.

2. The method as claimed in claim 1, wherein comparing each of the plurality of images with the reference image frames comprises:
extracting a region of interest from each of the plurality of images;
performing one or more image enhancement actions on the region of interest of each of the plurality of images (135); and
analyzing the region of interest of each of the plurality of images (135) using a pre-trained autoencoder model for determining the similarity score (137), wherein the pre-trained autoencoder model is trained on the reference image frames comprising images of defect-free surfaces of the object (113).

3. The method as claimed in claim 1, wherein identifying the one or more types of defects (131) comprises:
computing Principal Component Analysis values of each of the defect-type identifying parameters;
comparing the Principal Component Analysis values with pre-computed Principal Component Analysis values of a plurality of clusters (133) comprised in a pre-constructed Principal Component Analysis panel, wherein each cluster of the plurality of clusters (133) represents a type of defect (131);
associating each of the defect-type identifying parameters with one or more clusters in the plurality of clusters (133) based on the comparison; and
identifying the one or more types of defects in the set of images based on an association between each of the defect-type identifying parameters and the one or more clusters in the plurality of clusters (133).

4. The method as claimed in claim 3, wherein the method comprises pre-constructing the Principal Component Analysis panel by:
analyzing one or more defect-type identifying parameters extracted from an initial set of images of the object (113) using a Principal Component Analysis algorithm;
computing a set of Principal Component Analysis values corresponding to each of the defect-type identifying parameters of the initial set of images based on the analysis; and
creating the plurality of clusters (133), each cluster representing a type of defect, based on similarity between the set of Principal Component Analysis values.

5. The method as claimed in claim 3, wherein identifying the one or more types of defects further comprises:
calculating corresponding Euclidean distances between a selected defect-type identifying parameter of the one or more defect-type identifying parameters and centroids of each of the plurality of clusters (133), when the selected defect-type identifying parameter is associated with two or more clusters in the plurality of clusters (133);
ranking the plurality of clusters (133) based on the corresponding Euclidean distances; and
identifying a combination of the one or more types of defects in the set of images based on the association of the defect-type identifying parameter with two or more clusters selected from the ranked plurality of clusters (133).

6. The method as claimed in claim 1, wherein identification of one or more types of defects (131) on one or more of the surfaces of the object (113) comprises identification of one or more of dents, cracks, scratches, rust, corrosion, fold marks, patches, holes, ink smudge, and printing errors in newsprints.

7. The method as claimed in claim 1, wherein the object (113) comprises one or more of a metal, wood, paper, and a textile.

8. The method as claimed in claim 1, wherein grading the quality of the object (113) further comprises classifying the object as one of a low quality object, a medium quality object, and a high quality object.

9. A system (100) for identifying defects on surfaces of an object (113), the system (100) comprising:
one or more image capturing devices (103) configured to capture a video stream (111) comprising a plurality of images (135) of the object (113);
a reference database (105) configured to store reference image frames comprising images of defect-free surfaces (112) of a plurality of objects (113); and
a defect identification system (101) communicatively coupled with the image capturing devices (103), the defect identification system (101) configured to receive the plurality of images (135) of the object (113) from the image capturing devices (103) and to:
compare each of the plurality of images (135) with the reference image frames to determine a similarity score (137) for each of the plurality of images (135);
filter a set of images from the plurality of images (135), whose associated similarity score (137) is less than a predefined threshold, wherein each of the set of images include an image of a defective surface of the moving object (113);
determine defect-type identifying parameters from each of the set of images, wherein the defect-type identifying parameters comprise one or more of a number of discrete contours in a corresponding image, a total area occupied by the contours in the corresponding image, and an average intensity of pixels in the corresponding image;
identify one or more types of defects (131) that exist on the surfaces of the moving object (113) based on values of the defect-type identifying parameters determined from each of the set of images; and
an automation unit (125) configured to perform a predefined automated action based on the one or more types of defects (131), wherein the predefined automated action comprises one or more of automatically grading a quality of the object (113) and automatically removing one or more portions of the object (113) comprising the defects.

10. The system (100) as claimed in claim 9, wherein the similarity detection unit (117) is further configured to:
extract a region of interest from each of the plurality of images (135);
perform one or more image enhancement actions on the region of interest of each of the plurality of images (135); and
analyze the region of interest of each of the plurality of images (135) using a pre-trained autoencoder model for determining the similarity score (137), wherein the pre-trained autoencoder model is trained on the reference image frames comprising images of defect-free surfaces of the moving object (113).

11. The system (100) as claimed in claim 9, wherein the defect-type identification unit (123) is further configured to:
compute Principal Component Analysis values of each of the defect-type identifying parameters by analyzing each of the set of images using a predefined Principal Component Analysis model;
compare the Principal Component Analysis values with pre-computed Principal Component Analysis values of a plurality of clusters (133) comprised in a pre-computed Principal Component Analysis panel, wherein each cluster of the plurality of clusters (133) represents a type of defect (131);
associate each of the defect-type identifying parameters with one or more clusters in the plurality of clusters (133) based on the comparison;
identify the one or more types of defects (131) in the set of images based on an association between each of the defect-type identifying parameters and the one or more clusters in the plurality of clusters (133);
analyze one or more defect-type identifying parameters extracted from an initial set of images of the object (113) using a Principal Component Analysis algorithm;
compute a set of Principal Component Analysis values corresponding to each of the defect-type identifying parameters of the initial set of images based on the analysis;
create the plurality of clusters (133), each cluster representing a type of defect (131), based on similarity between the set of Principal Component Analysis values;
calculate corresponding Euclidean distances between a selected defect-type identifying parameter of the one or more defect-type identifying parameters and centroids of each of the plurality of clusters (133), when the selected defect-type identifying parameter is associated with two or more clusters in the plurality of clusters (133);
rank the plurality of clusters (133) based on the corresponding Euclidean distances; and
identify a combination of the one or more types of defects (131) in the set of images based on the association of the defect-type identifying parameter with two or more clusters selected from the ranked plurality of clusters (133).

12. The system (100) as claimed in claim 11, wherein the automation unit (125) comprises a transport subsystem, wherein the transport subsystem comprises one or more of a robotic arm and a mechanical diversion configured to divert a path of the object (113) comprising one or more of the identified defects to one of a cutting, labelling, and collection station in a manufacturing facility.

13. The system (100) as claimed in claim 11, wherein the image processing unit (107) is communicatively coupled to a display unit (109) configured to display quality control information related to the object (113), wherein the quality control information comprises one or more of an identifier corresponding to the object (113), the identified types of defects along with a grade that the object (113) is classified into, a live or a recorded feed of the video stream (111) received from the one or more image capturing devices (103), images with one or more defects, and an alert provided when one or more defects are identified on one or more surfaces of the object (113).

Documents

Application Documents

# Name Date
1 202441026195-POWER OF AUTHORITY [29-03-2024(online)].pdf 2024-03-29
2 202441026195-FORM-9 [29-03-2024(online)].pdf 2024-03-29
3 202441026195-FORM 3 [29-03-2024(online)].pdf 2024-03-29
4 202441026195-FORM 18 [29-03-2024(online)].pdf 2024-03-29
5 202441026195-FORM 1 [29-03-2024(online)].pdf 2024-03-29
6 202441026195-FIGURE OF ABSTRACT [29-03-2024(online)].pdf 2024-03-29
7 202441026195-ENDORSEMENT BY INVENTORS [29-03-2024(online)].pdf 2024-03-29
8 202441026195-DRAWINGS [29-03-2024(online)].pdf 2024-03-29
9 202441026195-COMPLETE SPECIFICATION [29-03-2024(online)].pdf 2024-03-29
10 202441026195-FORM-26 [09-04-2024(online)].pdf 2024-04-09