System And Method For Surface Inspection

Abstract: A method (300) for surface inspection of an object (102, 202) is presented. The method (300) includes illuminating (302) the object (102, 202) being monitored. Furthermore, the method (300) includes acquiring (304) a video capture of the object (102, 202). In addition, the method (300) includes obtaining (306) image frames from the acquired video capture. Moreover, the method (300) includes processing (308) each image frame to identify one or more regions of interest. The method (300) also includes processing (310) the one or more regions of interest to identify the presence of one or more defects. Furthermore, the method (300) includes classifying (312) the detected defects. Additionally, the method (300) includes presenting (314) the defects, the classification, the image frames including the defects, location coordinates of the defects, annotations, or combinations thereof. FIG. 2

Patent Information

Application #:
Filing Date: 31 March 2023
Publication Number: 15/2024
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-05-14
Renewal Date:

Applicants

Myelin Foundry Private Limited
C/o TKN Advisors 1st Floor, Miraya Rose, 66/1, Siddapura, Varthur, Hobli, Whitefield, Bengaluru – 560066, Karnataka, India

Inventors

1. Gopichand Katragadda
Villa 254 Palm Meadows Phase I, Whitefield Road, Ramagondanahalli Bangalore Karnataka India 560066
2. Vasant Kumar Jain
C/o Myelin Foundry Private Limited TKN Advisors 1st Floor, Miraya Rose, 66/1 Siddapura, Varthur, Hobli, Whitefield Bengaluru Karnataka India 560066

Specification

CROSS REFERENCE TO RELATED APPLICATION
[0001] This patent application claims the benefit of priority, under 35 U.S.C. 119, of Indian Provisional Patent Application Serial No. 202341024758, filed March 31, 2023, titled “SYSTEM AND METHOD FOR SURFACE INSPECTION,” the entire disclosure of which is incorporated herein by reference.

BACKGROUND
[0002] Embodiments of the present specification relate generally to surface inspection, and more particularly to systems and methods for identifying a region of interest on the surface and detecting defects on the surface, while minimizing computationally intensive operations.
[0003] Steel plays a significant role in several industries such as machinery manufacturing, aerospace, automobile, and the like. Hence, it is highly desirable that all the steel manufacturing units maximize yield, while minimizing discards due to defects. Some sources of the defects may include raw materials, processing equipment, processing technology, and the like. These defects may adversely impact the quality, appearance, corrosion resistance, and/or fatigue strength of the manufactured steel. Accordingly, timely detection of any defects in the manufacturing process is essential to optimize yield while minimizing cost and rejects.
[0004] As will be appreciated, surface inspection plays a vital role in ensuring high quality in the steel manufacturing process. Traditionally, inspectors manually examined the surface of the steel to detect any defects. However, manual surface inspection entails arduous work by the inspectors. Also, the inspection process is highly dependent on the skill level of the inspector, leading to defects being missed. Additionally, manually inspecting the surface of the steel in real-time during the manufacturing process is a challenging task.
[0005] In the recent past, several automated surface inspection methods have been developed to alleviate the issues with manual surface inspection. Some automated surface inspection methods entail use of computer vision approaches. However, non-uniform illumination conditions and similarities between certain surface textures and defects disadvantageously impact the efficiency of the computer vision approaches in the identification of the defects.
[0006] Moreover, in recent times, machine learning (ML)/artificial intelligence (AI) techniques have been used to aid in the identification of defects on the surface of steel. However, currently, identification of defects using these machine learning techniques entails processing a huge volume of data related to a wide surface area of the steel being inspected, where the data set may include redundant information. Consequently, this processing disadvantageously results in an increase in computational load of the processing systems. Also, the increase in computational load of the processing systems adversely impacts real-time detection of defects on the surface of steel.

BRIEF DESCRIPTION
[0007] In accordance with aspects of the present specification, a system for real-time inspection of a surface of an object is presented. The system includes an illumination unit configured to illuminate the surface of the object. Further, the system includes a camera unit configured to acquire video capture of the surface. Moreover, the system includes a surface inspection system including an acquisition subsystem configured to receive the video capture of the object being inspected, obtain one or more image frames from the video capture, a processing subsystem in operative association with the acquisition subsystem and including a region of interest identification platform configured to process each image frame to identify one or more regions of interest, and where to identify the one or more regions of interest the region of interest identification platform is configured to transform a plurality of segments in the image frame from a spatial domain to a frequency domain, threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content, set one or more sections having non-low or high information content to zero, transform the plurality of segments in the image frame from the frequency domain to the spatial domain, threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest, and identify one or more image segments in the image frame corresponding to the one or more regions of interest. Additionally, the system includes an interface unit configured to provide, in real-time, the one or more regions of interest to facilitate analysis.
[0008] In accordance with another aspect of the present specification, a system for real-time inspection of a surface of an object is presented. The system includes an illumination unit configured to illuminate the surface of the object. Further, the system includes a camera unit configured to acquire video capture of the surface. Moreover, the system includes a surface inspection system including an acquisition subsystem configured to receive the video capture of the object being inspected, obtain one or more image frames from the video capture, a processing subsystem in operative association with the acquisition subsystem and including a region of interest identification platform configured to process each image frame to identify one or more regions of interest, and where to identify the one or more regions of interest the region of interest identification platform is configured to transform a plurality of segments in the image frame from a spatial domain to a frequency domain, threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content, set one or more sections having non-low or high information content to zero, transform the plurality of segments in the image frame from the frequency domain to the spatial domain, threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest, and identify one or more image segments in the image frame corresponding to the one or more regions of interest. The processing subsystem further includes a defect detection unit configured to process, based on at least one artificial intelligence model, each of the one or more regions of interest to identify presence of one or more defects and classify, based on at least one artificial intelligence model, the identified one or more defects. Additionally, the system includes an interface unit configured to provide, in real-time, the one or more regions of interest to facilitate analysis.
[0009] In accordance with yet another aspect of the present specification, a method for real-time inspection of a surface of an object is presented. The method includes illuminating the surface of the object, acquiring video capture of the surface responsive to the illumination of the surface of the object, obtaining one or more image frames from the video capture, processing each image frame to identify one or more regions of interest, and where identifying the one or more regions of interest (420) includes transforming a plurality of segments in the image frame from a spatial domain to a frequency domain, thresholding the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content, setting one or more sections having non-low or high information content to zero, transforming the plurality of segments in the image frame from the frequency domain to the spatial domain, thresholding the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest, identifying one or more image segments in the image frame corresponding to the one or more regions of interest, and providing, in real-time, the one or more regions of interest to facilitate analysis.
[0010] In accordance with aspects of the present specification, a surface inspection system for real-time inspection of a surface of an object is presented. The surface inspection system includes an acquisition subsystem configured to receive the video capture of the object being inspected and obtain one or more image frames from the video capture. Further, the surface inspection system includes a processing subsystem in operative association with the acquisition subsystem and including a region of interest identification platform configured to process an image frame to identify one or more regions of interest, and where to identify the one or more regions of interest the region of interest identification platform is configured to transform a plurality of segments in the image frame from a spatial domain to a frequency domain, threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content, set one or more sections having non-low or high information content to zero, transform the plurality of segments in the image frame from the frequency domain to the spatial domain, threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest, and identify one or more image segments in the image frame corresponding to the one or more regions of interest. The processing subsystem also includes a defect detection unit configured to process, based on at least one artificial intelligence model, each of the one or more regions of interest to identify presence of one or more defects, and classify, based on the at least one artificial intelligence model, the identified one or more defects.

DRAWINGS
[0011] These and other features and aspects of embodiments of the present specification will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0012] FIG. 1 is a schematic representation of an exemplary system for surface inspection, in accordance with aspects of the present specification;
[0013] FIG. 2 is a schematic representation of an exemplary embodiment of the system for surface inspection of FIG. 1, in accordance with aspects of the present specification;
[0014] FIG. 3 is a flow chart illustrating a method for surface inspection, in accordance with aspects of the present specification;
[0015] FIG. 4 is a flow chart illustrating a method for identifying a region of interest for use in the method for surface inspection of FIG. 3, in accordance with aspects of the present specification; and
[0016] FIG. 5 is a schematic representation of one embodiment of a digital processing system implementing a region of interest identification platform for use in the system of FIG. 1, in accordance with aspects of the present specification.
DETAILED DESCRIPTION
[0017] The following description presents exemplary systems and methods for real-time surface inspection. Particularly, embodiments described hereinafter present exemplary systems and methods that facilitate enhanced identification of regions of interest on the surface being inspected in real-time, where the regions of interest may include one or more defects. These systems and methods also enable automated detection of presence of any defects and identification/classification of the defects, in real-time, in the identified regions of interest. Use of the present systems and methods provides significant advantages in reliably providing significant enhancement in the quality of surface inspection of steel and reducing rejects, thereby overcoming the drawbacks of currently available methods of surface inspection and detection of defects. Additionally, the present systems and methods entail identification of regions of interest on the surface of steel in real-time. Further, only the identified regions of interest are processed to facilitate automated detection and/or classification of any defects in real-time, thereby resulting in significant decrease in computational load on the processing system. The decreased computational load in turn advantageously allows the processing of the regions of interest and subsequently the identification of the defects on low compute resource devices such as edge devices.
[0018] For ease of understanding, the exemplary embodiments of the present systems and methods are described in the context of a surface inspection system configured to provide enhanced identification of regions of interest on the surface of steel in real-time and detection and classification of the detected defects in real-time. However, use of the exemplary embodiments of the present systems and methods illustrated hereinafter in other systems and applications such as, but not limited to, detection of defects in fabric texture in the textile industry, detection of defects/faults in assembly lines such as car dashboard assembly, inspection of abrasive surface coatings, detection of defects on painted surfaces such as uneven deposits, peels, spots, and the like is also contemplated. An exemplary environment that is suitable for practising various implementations of the present systems and methods is discussed in the following sections with reference to FIG. 1.
[0019] As used herein, the term “region of interest” refers to a section on the surface of the steel being manufactured/monitored that may include one or more potential defects. Also, as used herein, the term “defect” refers to an irregularity on the surface of the steel which may in turn adversely impact the quality of the steel. Some non-limiting examples of the defect include cracks, pits, scratches, holes, blisters, and the like.
[0020] Also, as used herein, the term “edge device” refers to a device that is a part of a distributed computing topology in which information processing is performed close to where things and/or people produce or consume information. Some non-limiting examples of the edge device include a mobile phone, a tablet, a laptop, a smart television (TV), and the like. Additionally, the term “edge device” may also be used to encompass a device that is operatively coupled to an edge device noted hereinabove. Some non-limiting examples of such a device include a streaming media player that is connected to a viewing device such as a TV and allows a user to stream video and/or music, a gaming device/console, and the like. Other examples of the edge device also include networking devices such as a router, a modem, and the like.
[0021] Moreover, as used herein, the term “low compute resource device” refers to a computing device that integrates an arithmetic logic unit (ALU), memory, input/output (I/O) channels, and optionally a graphics processing unit (GPU), on a single silicon wafer and consumes low power in the range of a few milliamperes of current delivered through a compact battery.
[0022] Referring now to the drawings, FIG. 1 illustrates an exemplary system 100 for real-time surface inspection. The system 100 is configured to provide real-time surface inspection of steel. It may be noted that in certain embodiments, the system 100 is configured to facilitate real-time surface inspection of the steel as it is being manufactured. More particularly, the system 100 is configured to continuously monitor the surface of the steel to identify, in real-time, one or more regions of interest on the surface of the steel that may potentially include one or more defects. Additionally, the system 100 is also configured to process only the identified regions of interest to detect presence of any defects in the regions of interest, thereby decreasing the computational load of the system 100. This reduction in the computational load advantageously allows the processing to be implemented on a low compute resource device such as an edge device and allows the surface inspection to be performed in real-time.
[0023] Reference numeral 102 generally refers to an object being monitored. In the present example, the object is representative of steel 102, the surface 104 of which is being monitored. Also, reference numeral 106 generally represents a potential defect on the surface 104 of the steel 102.
[0024] In a presently contemplated configuration, the system 100 includes a surface inspection system 108. The surface inspection system 108 is configured to monitor and inspect the surface 104 of the steel 102 to identify, in real-time, any regions of interest on the surface 104 of the steel 102 that may potentially include one or more defects 106. Additionally, the surface inspection system 108 is also configured to detect the defects by processing only the identified regions of interest.
[0025] As depicted in FIG. 1, in one example, the system 100 may include an illumination unit 110. The illumination unit 110 is configured to optimally illuminate the surface 104 being inspected. In one embodiment, the illumination unit 110 may include one or more dark field illumination sources. The dark field illumination sources may be positioned close to the object 102 being inspected and the light source may be oriented between 0 and 45 degrees from the surface 104 being inspected (off horizontal). Moreover, in certain embodiments, the illumination unit 110 may also include one or more bright field illumination sources. The bright field illumination source is typically oriented between 90 and 45 degrees from the surface 104 being inspected. Some non-limiting examples of an illumination source for use in the illumination unit 110 include fluorescence, halogen, xenon lamp, light emitting diode (LED), and the like.
[0026] Furthermore, the system 100 may also include a camera unit 112. The camera unit 112 is configured to acquire video capture of the surface 104 being inspected. In one example, the camera unit 112 is configured to acquire the video capture of the surface 104 based on light reflected from the surface 104 and directed towards a field of view (FOV) of the camera unit 112. The camera unit 112 is configured to communicate the acquired video capture of the steel surface 104 to the surface inspection system 108 for real-time surface inspection.
[0027] In accordance with aspects of the present specification, the surface inspection system 108 is configured to receive as input the video capture of the surface 104 being inspected and process the video capture to identify one or more regions of interest on the surface 104 that may potentially include one or more defects. In particular, the surface inspection system 108 is configured to obtain one or more image frames from the acquired video capture of the surface 104. Further, the surface inspection system 108 is configured to process each image frame to identify one or more regions of interest. The regions of interest are generally representative of sections on the surface 104 that may potentially include one or more defects, such as a defect 106.
[0028] Furthermore, in accordance with exemplary aspects of the present specification, the surface inspection system 108 is configured to process only the identified regions of interest to detect presence of any defects on the surface 104. In addition, the surface inspection system 108 may be configured to identify and/or classify the detected defects. In one embodiment, the surface inspection system 108 may include one or more artificial intelligence (AI) models that are employed to identify the presence of any defects and/or classify the detected defects. In certain embodiments, the models may include a neural network that is trained and configured to perform detection of a defect, classification of the defect, and the like. It may be noted that the terms neural network, neural network model, AI model, and model may be used interchangeably.
[0029] Once any defects are identified and/or classified, the surface inspection system 108 may be configured to present the defects, the regions of interest including the defects, the image frames corresponding to the identified defects, and the like. In one embodiment, the defects, the corresponding image frames, and other relevant data may be communicated to an inspector or another processing system for further analysis or for storage and further processing to a computer 114 and/or a data repository.
[0030] With continuing reference to FIG. 1, in one embodiment, the surface inspection system 108 is configured to maintain a model. As used herein, “maintain a model” may entail generating an artificial intelligence (AI) model and hosting the AI model. The model may be hosted in a data repository, the cloud, and the like. Also, the model may be hosted in a local repository, a remote repository, the cloud, and the like. Moreover, in one example, the model may include one or more models configured to perform the task of detecting presence of a defect, identifying a type of defect, classifying a defect, and the like.
[0031] As previously noted, currently available computer vision techniques for surface inspection entail processing the entire surface area to detect presence of any defects on the surface of the object being monitored, thereby disadvantageously resulting in excessive computational activity on the inspection system. Identifying and processing only the identified regions of interest to ascertain the presence of any defects on the surface 104 as described hereinabove advantageously results in a substantial decrease in the computational load on the surface inspection system 108. Consequently, the system 100 that includes the surface inspection system 108 provides a robust framework that facilitates real-time inspection of the surface 104 of steel, while reducing the computational load on the system 100. This reduction in the computational load advantageously allows the surface inspection to be implemented on a low compute resource device such as an edge device. Additionally, the surface inspection system 108 may be plugged in between the camera feed and the workstation such as the computer 114 and may be configured to handle processing of the feed data on a real-time basis.
[0032] Turning now to FIG. 2, one embodiment 200 of the system 100 of FIG. 1 is presented. In a presently contemplated configuration, the system 200 is configured to provide real-time surface inspection of steel as it is being manufactured. More particularly, the system 200 is configured to continuously monitor the surface of the steel to identify, in real-time, one or more regions of interest on the surface of the steel that may potentially include one or more defects. Additionally, the system 200 is also configured to process only the identified regions of interest to detect presence of any defects in the regions of interest, thereby decreasing the computational load on the system 200. This reduction in the computational load advantageously allows the processing to be implemented on a low compute resource device such as an edge device.
[0033] As previously noted, it is highly desirable to efficiently monitor the surface of steel to enable timely detection of defects, which in turn aids in maximizing yield while minimizing discards and cost. Reference numeral 202 generally refers to an object being monitored. In the present example, the object represents steel 202 being manufactured. It may be noted that the terms “object,” and “steel” may be used interchangeably. Reference numeral 204 is used to generally represent a surface of steel 202. Further, a production plant such as a steel production plant is generally represented by reference numeral 205. Also, reference numeral 206 generally represents a potential defect on the surface of the steel 202. Moreover, reference numeral 208 is generally representative of the direction of movement of the steel 202 being inspected.
[0034] As previously noted, optimally illuminating or lighting a surface of the object 202 being inspected impacts the imaging quality of the surface 204 being monitored. Accordingly, the system 200 may include an illumination unit 110 configured to optimally illuminate the surface 204 of the steel 202. Further, the illumination unit 110 may include one or more illumination or lighting sources that are configured to orient light towards the object 202 to adequately illuminate the surface 204 to enable enhanced imaging of the surface 204. Some non-limiting examples of a light source include fluorescence, halogen, xenon lamp, light emitting diode (LED), a laser beam, and the like.
[0035] In accordance with aspects of the present specification, the illumination unit 110 may include one or more dark field illumination sources 210. As will be appreciated, dark field lighting or illumination entails positioning the dark field illumination light source 210 close to the object 202 being inspected and orienting the light source 210 between 0 and 45 degrees from the surface 204 being inspected (off horizontal). It may be noted that in dark field illumination, since the dark field illumination light source 210 is incident on the surface 204 being inspected at a shallow angle, a majority of the light reflected from the surface 204 falls out of the FOV of a camera such as the camera unit 112 and hence the FOV remains dark. The dark FOV generally implies a defect free object 202. However, the presence of any features or defects on the surface 204 results in the scattering of the light from the dark field illumination source 210 that is incident on these features/defects. Some of the scattered light may be directed towards the camera unit 112 to reveal presence of a defect 206 on the surface 204. Hence, a resultant image generated by the camera unit 112 may be a dark image with some distinctive features indicating presence of a defect 206. Accordingly, the resultant dark image having the distinctive features indicative of the presence of a defect 206 may be representative of a signature of the presence of a defect 206 on the surface 204. This dark image may then be processed by the surface inspection system 108 to confirm the presence of the defect 206 on the surface 204.
[0036] In certain embodiments, the illumination unit 110 may also include one or more bright field illumination sources 212. As previously noted, bright field lighting or illumination entails mounting or orienting the light source 212 between 90 and 45 degrees from the surface 204 being inspected (off horizontal). Also, the bright field is the area in which any reflected light falls within the FOV of the camera unit 112. A bright field image so generated may be used to detect/reveal defects that are prominent on the surface 204. In one example, the bright field image may be utilized to account for any variations in defect type and/or geometry of the defect.
[0037] Furthermore, the system 200 also includes a camera such as the camera unit 112. It may be noted that the terms “camera unit” and “camera” may be used interchangeably. In one embodiment, the camera unit 112 is configured to generate images/video corresponding to the surface 204 being inspected. In one example, any light oriented by the dark field illumination source 210 onto the surface 204 may be reflected off the surface 204 and directed towards the camera unit 112. The camera unit 112 is configured to generate a video of the surface 204 being inspected based on captured light that has reflected off the surface 204. This video generated in response to illumination provided by the dark field illumination source 210 may be processed to identify one or more regions of interest and confirm presence of any defects on the surface 204 of steel 202. Moreover, the camera unit 112 is configured to communicate the video capture of the steel surface 204 to the surface inspection system 108.
[0038] In some other embodiments, the camera unit 112 is also configured to generate images/video based on the illumination provided by the bright field illumination source 212. In this example, any light oriented by the bright field illumination source 212 onto the surface 204 may be reflected off the surface 204 and directed towards the camera unit 112. Furthermore, the camera unit 112 is configured to generate a video of the surface 204 being inspected based on captured light that has reflected off the surface 204. In one example, this video generated in response to illumination provided by the bright field illumination source 212 may be utilized to account for any variations in defect type and/or geometry of the defect.
[0039] Moreover, as depicted in the embodiment of FIG. 2, the system 200 includes a surface inspection system 108. In accordance with aspects of the present specification, the surface inspection system 108 is configured to receive as input the video capture corresponding to the surface 204 being inspected and process the video capture to identify one or more regions of interest on the surface 204 that may potentially include one or more defects. In a presently contemplated configuration, the surface inspection system 108 includes an acquisition subsystem 214 and a processing subsystem 216.
[0040] The acquisition subsystem 214 is configured to receive the video capture from the camera unit 112 and obtain one or more image frames from the video capture. Further, the acquisition subsystem 214 is configured to communicate the image frames to the processing subsystem 216 for further processing. It may be noted that in one embodiment, the acquisition subsystem 214 may be configured to directly obtain the video capture from the camera unit 112. However, in certain other embodiments, the acquisition subsystem 214 may obtain the video capture from a storage such as a data repository 228, an optical data storage article such as a compact disc (CD), a digital versatile disc (DVD), a Blu-ray disc, and the like.
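By way of a non-limiting illustration, the frame-extraction step performed by the acquisition subsystem 214 may be sketched as follows in Python. The use of OpenCV and a default camera index are assumptions made for illustration only; the specification does not prescribe a particular library or interface.

```python
import cv2  # OpenCV is an assumed choice; the specification names no library


def acquire_frames(source=0):
    """Yield image frames from a video capture source (a camera index or a
    recorded file path), mirroring the role of the acquisition subsystem."""
    capture = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream or a read failure
                break
            yield frame
    finally:
        capture.release()
```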
[0041] Once the image frames obtained from the video capture corresponding to the surface 204 are received from the acquisition subsystem 214, the processing subsystem 216 is configured to process the image frames to identify one or more regions of interest to facilitate the detection of any defects on the surface 204. In a non-limiting example, the processing subsystem 216 may include one or more application-specific processors, digital signal processors, microcomputers, graphical processing units, microcontrollers, Application Specific Integrated Circuits (ASICs), Programmable Logic Arrays (PLAs), Field Programmable Gate Arrays (FPGAs), and/or any other suitable processing devices. In alternative embodiments, the processing subsystem 216 may be configured to retrieve the image frames/video capture from the data repository 228. The data repository 228 may include a hard disk drive, a floppy disk drive, a read/write CD, a DVD, a Blu-ray disc, a flash drive, a solid-state storage device, a local database, and the like.
[0042] In addition, the examples, demonstrations, and/or process steps performed by certain components of the system 200 such as the processing subsystem 216 may be implemented by suitable code on a processor-based system, where the processor-based system may include a general-purpose computer or a special-purpose computer. Also, different implementations of the present specification may perform some or all of the steps described herein in different orders or substantially concurrently.
[0043] In a presently contemplated configuration, the processing subsystem 216 is depicted as including a region of interest (ROI) identification platform 218, a defect detection unit 220, and one or more models 222. The processing subsystem 216 is configured to process the image frames to identify one or more regions of interest on the surface 204 of steel 202 that may potentially include one or more defects such as the defect 206. It may be noted that other implementations of the processing subsystem 216 are also contemplated.
[0044] In one embodiment, the ROI identification platform 218 may be configured to process the image frames to identify one or more regions of interest. To facilitate the identification of the regions of interest, the ROI identification platform 218 may be configured to divide each image frame into one or more segments. In one non-limiting example, the image frame may be divided into 32 segments. It may be noted that for ease of description, the identification of a region of interest is described with reference to one image frame. A similar technique may be employed to process all the other image frames to identify any regions of interest.
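A minimal sketch of this division step follows, assuming a grayscale frame split into a 4 x 8 grid of 32 equal tiles; the specification fixes only the segment count in its non-limiting example, not the grid geometry.

```python
import numpy as np


def split_into_segments(frame, rows=4, cols=8):
    """Divide a 2-D image frame into rows * cols equal segments (32 by
    default); any remainder pixels at the borders are discarded."""
    seg_h, seg_w = frame.shape[0] // rows, frame.shape[1] // cols
    return [frame[r * seg_h:(r + 1) * seg_h, c * seg_w:(c + 1) * seg_w]
            for r in range(rows) for c in range(cols)]
```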
[0045] The segmented image frame may be processed by the ROI identification platform 218 to enhance the identification of one or more regions of interest. In particular, the image frame may be processed to identify any locations indicative of the likelihood or probability of one or more defects. In certain embodiments, prior to being processed by the ROI identification platform 218 to identify one or more regions of interest, the segmented image frame may be reformatted to generate a format that is compatible with automated analysis and is platform independent. Furthermore, in certain other embodiments, the segmented image frame may also be pre-processed to remove any noise in the image data. Also, in some embodiments, a color space conversion may be performed on the image data.
[0046] As noted hereinabove, use of the dark field illumination source 210 to illuminate the surface 204 at a shallow angle results in a dark image in the absence of any defects since a majority of the light is reflected from the surface 204 out of the FOV of the camera unit 112. However, in the presence of any features or defects 206 on the surface 204, the light from the dark field illumination source 210 is scattered and some of the scattered light may be directed towards the camera unit 112 to indicate presence of a defect 206 on the surface 204. Accordingly, a resultant dark image with some distinctive features indicating presence of a defect 206 may be representative of a signature of the presence of a defect 206 on the surface 204. Therefore, it is desirable to process the image segments in the image frame to identify the presence of a signature indicative of the presence of a defect 206 on the surface 204.
[0047] In accordance with aspects of the present specification, to identify the presence of any signature indicative of the presence of a defect 206 on the surface 204, the image segments in the image frame may be transformed from a spatial domain to a frequency domain. In one example, the image segments in the image frame may be transformed from the spatial domain to the frequency domain via use of a discrete cosine transform (DCT), a wavelet transform, and the like. More particularly, processing the image segments in the image frame via DCT aids in separating the image frame into various sections having differing information content with respect to the visual quality of the image frame.
[0048] In accordance with exemplary aspects of the present specification, it may be noted that sections of the image frame having low information content may be indicative of a signature associated with the presence of a defect 206 on the surface 204. In particular, when the surface 204 is illuminated by light from the illumination source 210, 212 in the illumination unit 110, the light interacts with the material of the surface 204 and is reflected. Accordingly, sections in an image frame captured by the camera unit 112 that correspond to this interaction between the light and the surface 204 include high information content. However, in the presence of a defect such as the defect 206 on the surface 204, there is a lower possibility of the light interacting with the material inside the defect. Hence, sections in the image frame captured by the camera unit 112 that correspond to this interaction between the light and the defect 206 include low information content. Accordingly, it is desirable to segregate the sections having low information content from the sections having high information content. To that end, in one embodiment, a threshold may be applied to the plurality of image segments in the image frame in the frequency domain to segregate the sections having low information content from the sections having high information content. In addition, the sections having high information content may be set to zero.
[0049] Subsequently, the image frame may be transformed or reconstructed from the frequency domain back to the spatial domain. In one embodiment, the image frame may be transformed from the frequency domain to the spatial domain via use of an inverse DCT (IDCT).
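The frequency-domain filtering of paragraphs [0047]-[0049] may be sketched as follows using SciPy's 2-D DCT. The magnitude threshold used here to separate low-information from high-information coefficients is an illustrative value, not one given in the specification.

```python
import numpy as np
from scipy.fft import dctn, idctn  # SciPy's multidimensional DCT/IDCT


def suppress_high_information(segment, coeff_threshold=10.0):
    """Transform an image segment to the frequency domain, zero the
    high-information (large-magnitude) DCT coefficients, and reconstruct the
    spatial image so that only the low-information content -- the candidate
    defect signature -- survives. The threshold value is illustrative."""
    coeffs = dctn(segment.astype(np.float64), norm="ortho")
    coeffs[np.abs(coeffs) >= coeff_threshold] = 0.0  # zero high-information sections
    return idctn(coeffs, norm="ortho")
```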
[0050] Moreover, in accordance with aspects of the present specification, it is desirable to identify one or more regions of interest. As previously noted, the regions of interest on the surface 204 of the steel 202 being manufactured/monitored are representative of regions that may include one or more potential defects. Accordingly, it is desirable to process the image frame to identify regions or locations on the surface 204 that are indicative of the likelihood of one or more defects. In one embodiment, the regions of interest that potentially may include one or more defects 206 may be identified by detecting contiguous pixels in the image frame having low information content. To that end, in one example, another threshold such as an intensity threshold may be applied to the image frame in the spatial domain to detect similarity among pixels based on intensity levels. Consequently, regions associated with contiguous pixels having low information content may be identified. Also, in one non-limiting example, to identify a region of interest it may be desirable to detect three or more contiguous pixels having similar intensity levels. Consequent to this processing, regions corresponding to the identified three or more contiguous pixels having low information content may be indicative of a signature of a presence of a defect 206 on the surface 204. Accordingly, these regions may be identified as one or more regions of interest that may potentially include one or more defects 206. Furthermore, in certain embodiments, the (x, y) coordinates of these regions of interest may be captured.
[0051] Subsequent to identifying one or more regions of interest, the ROI identification platform 218 may be configured to identify one or more image segments in the image frame that correspond to the identified regions of interest. In one embodiment, the captured (x, y) coordinates associated with the identified regions of interest may be employed to identify corresponding image segments in the image frame. Consequent to the processing of the image frame by the ROI identification platform 218, one or more regions of interest and the one or more image segments that include the one or more regions of interest are identified.
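The spatial-domain steps of paragraphs [0050]-[0051] may be sketched as below using SciPy's connected-component labelling; the intensity threshold is illustrative, and the three-pixel minimum follows the non-limiting example above.

```python
import numpy as np
from scipy import ndimage


def find_regions_of_interest(reconstructed, intensity_threshold=5.0, min_pixels=3):
    """Threshold the reconstructed frame in the spatial domain and keep groups
    of three or more contiguous pixels as regions of interest, returning their
    (x, y) bounding-box coordinates."""
    mask = np.abs(reconstructed) > intensity_threshold
    labeled, _ = ndimage.label(mask)  # group contiguous pixels into regions
    rois = []
    for index, box in enumerate(ndimage.find_objects(labeled), start=1):
        if np.count_nonzero(labeled[box] == index) >= min_pixels:
            ys, xs = box
            rois.append((xs.start, ys.start, xs.stop, ys.stop))  # (x0, y0, x1, y1)
    return rois
```

Dividing the captured (x, y) coordinates by the segment height and width then recovers the indices of the image segments that contain each region of interest.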
[0052] The ROI identification platform 218 may then be configured to communicate the image segments having the regions of interest to the defect detection unit 220 for further processing. In certain embodiments, the ROI identification platform 218 may also be configured to communicate the image segments having the regions of interest to the user of the system 200.
[0053] In accordance with aspects of the present specification, the defect detection unit 220 is configured to process only the image segments having the identified regions of interest to detect presence of any defects 206. In one embodiment, the defect detection unit 220 may be configured to process only the image segments having the identified regions of interest via use of one or more AI models 222 to identify the presence of any defects 206.
[0054] It may be noted that the AI model 222 may include a neural network (NN) that is trained or tuned to perform one or more tasks. As will be appreciated, a neural network is a computational model. Further, the neural network model includes several layers. Each layer in the neural network model in turn includes several computational nodes. The computational node is configured to perform mathematical operations based on received input to generate an output. Some non-limiting examples of the mathematical operations include summation, passing through a non-linearity, comparing a present state of the node with a previous state, and the like. Moreover, the neural network model also includes weights that are typically associated between each node in a layer and one or more nodes in subsequent layers.
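A single computational node of the kind described may be sketched in a few lines; the rectified linear unit (ReLU) used as the non-linearity here is an arbitrary illustrative choice, not one named in the specification.

```python
import numpy as np


def node_forward(inputs, weights, bias):
    """One computational node: a weighted summation of the inputs followed by
    passing the result through a non-linearity (ReLU here)."""
    pre_activation = float(np.dot(weights, inputs)) + bias  # weighted summation
    return max(0.0, pre_activation)  # ReLU non-linearity
```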
[0055] In certain embodiments, the defect detection unit 220 may be configured to process the image segments having the identified regions of interest via use of an AI model 222 such as a defect detection model to identify presence of any defects 206. In one embodiment, the defect detection model 222 may include a neural network that is trained and configured to perform an operation such as the detection of presence of a defect in the identified image segments having the regions of interest. Furthermore, in one non-limiting example, this defect detection model 222 may be generated by training a neural network using a dataset indicative of presence of a defect or absence of a defect on the surface 204 of steel 202. The model 222 so generated may be trained to recognize the presence of one or more defects on the surface 204 of steel 202. Processing the image segments via the defect detection model 222 facilitates the detection of presence of one or more defects in the identified image segments having the regions of interest. Further, the model 222 may be hosted in a local repository 228, a remote repository, the cloud, and the like. Subsequent to the processing of the image segments having the regions of interest via use of a model 222 such as the defect detection model noted hereinabove, presence of one or more defects 206 on the surface 204 may be identified.
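The computational saving described above comes from running the model only on the image segments that contain regions of interest, never on the full frame. A hedged sketch follows, assuming a hypothetical trained callable defect_model that maps one segment to a defect probability; the 0.5 score threshold is likewise an assumption.

```python
def detect_defects(roi_segments, defect_model, score_threshold=0.5):
    """Run the defect detection model only on the segments containing regions
    of interest. `defect_model` is assumed to return, for one segment, the
    probability that a defect is present."""
    detections = []
    for index, segment in enumerate(roi_segments):
        score = defect_model(segment)  # assumed per-segment defect probability
        if score >= score_threshold:   # threshold value is illustrative
            detections.append((index, score))
    return detections
```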
[0056] Additionally or optionally, once the presence of one or more defects 206 on the surface 204 is confirmed, the defect detection unit 220 may be configured to classify the type of identified defects 206. In one embodiment, the defect detection unit 220 may be configured to classify the type of identified defects via use of a model 222 such as a defect classification model. In one embodiment, the defect classification model 222 may include a neural network that is trained and configured to perform an operation such as the classification of a detected defect in the identified image segments having the regions of interest. In one non-limiting example, this model 222 may be generated by training a neural network using a dataset that includes a plurality of types of defects associated with the manufacturing of steel 202, a plurality of defect patterns associated with each type of defect 206, or a combination thereof.
[0057] The model 222 so generated may be trained to recognize defect patterns on the surface 204 of steel 202, thereby facilitating the classification of the defect 206 in the identified image segments having the regions of interest. In one example, the defect classification model 222 when deployed may be trained to classify the detected defects based on a similarity metric associated with a defect pattern embedded in the neural network model 222. The similarity metric is configured to bin the defect in the region of interest into a class defect label using multiple attributes of the defect pattern such as area of the region of interest, perimeter of the region of interest, shape geometry of the region of interest, and the like. The defect classification model 222 may be hosted in the local repository 228, a remote repository, the cloud, and the like.
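The similarity-based binning may be sketched as follows. The attributes (area, perimeter, and a compactness measure standing in for shape geometry) follow the specification, while the perimeter approximation and the Euclidean nearest-pattern rule are illustrative assumptions.

```python
import numpy as np


def classify_by_similarity(roi_mask, reference_patterns):
    """Bin a region of interest into a defect class label by comparing its
    attribute vector against per-class reference vectors and returning the
    nearest class. `reference_patterns` maps class label -> attribute vector."""
    area = float(np.count_nonzero(roi_mask))
    # Approximate the perimeter as the count of foreground pixels touching at
    # least one background pixel (4-connectivity).
    padded = np.pad(roi_mask.astype(bool), 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float(np.count_nonzero(padded[1:-1, 1:-1] & ~interior))
    compactness = perimeter ** 2 / area if area else 0.0  # shape-geometry cue
    attributes = np.array([area, perimeter, compactness])
    return min(reference_patterns,
               key=lambda label: np.linalg.norm(attributes - reference_patterns[label]))
```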
[0058] Moreover, the defect detection unit 220 may also be configured to determine location coordinates corresponding to the detected defects. Further, the defect detection unit 220 may also be configured to annotate the image frames and/or image segments with relevant data and/or other indicators. The functioning of the surface inspection system 108 will be described in greater detail with reference to FIGs. 3-5.
[0059] It may be noted that although the embodiment depicted in FIG. 2 depicts the processing subsystem 216 as including the ROI identification platform 218, the defect detection unit 220, and the models 222, in some embodiments, the ROI identification platform 218, the defect detection unit 220, and/or the models 222 may be employed as standalone units that are physically separate from the processing subsystem 216 and/or the surface inspection system 108. Also, in some embodiments, the ROI identification platform 218 and defect detection unit 220 may be integrated into end user systems such as, but not limited to, an edge device, such as a phone or a tablet. Moreover, in certain embodiments, the processing subsystem 216 may include only the ROI identification platform 218 that may be configured to perform the functionalities of identifying the regions of interest and identifying and/or classifying the defects via use of the models 222.
[0060] With continuing reference to FIG. 2, in one embodiment, the ROI identification platform 218 is configured to maintain a model, such as the AI models 222. As used herein, “maintain a model” may entail generating the AI model 222 and hosting the model 222. The model 222 may be hosted in a data repository 228, the cloud, and the like. Also, the model 222 may be hosted in a local repository, a remote repository, the cloud, and the like. In one example, the model 222 may include a neural network such as a CNN, a deep neural network, and the like.
[0061] Generating an AI model entails “training” a neural network to “learn” a desired task. Accordingly, the neural network is trained with appropriate inputs corresponding to the desired task. In certain embodiments, the region of interest identification platform 218 may include the neural network and may be configured to generate the AI models 222. To that end, one or more inputs may be provided to a neural network. In addition, one or more desired outputs may also be provided to the neural network. Once the inputs and the desired outputs are provided to the neural network, the neural network may be trained to generate an AI model 222 that is configured to perform a desired task. In one example, the neural network may be trained to generate an AI model that is configured to provide a desired outcome. Some non-limiting examples of the desired outcome include detection of presence of a defect or absence of a defect, classification of the detected defects, or combinations thereof.
[0062] Furthermore, with continuing reference to maintaining a model, in one example, a neural network such as a CNN or deep neural network (not shown) may be “trained” to generate an AI model 222 that is configured to facilitate detection of presence of defects on the surface 204 of steel 202. In particular, the neural network is trained with appropriate inputs corresponding to defect detection. In one non-limiting example, information corresponding to a presence of one or more defects, absence of one or more defects on the surface 204 of steel 202, or a combination thereof may be provided to the deep neural network as input. Additionally, one or more desired outputs may also be provided to the deep neural network. In the example of the defect detection model 222, the desired outputs may include a detection of the presence of one or more defects or detection of absence of one or more defects on the surface 204 of steel 202.
[0063] Subsequently, the deep neural network may be trained to provide an outcome in the form of a detection of the presence of a defect(s) on the surface 204 of the steel 202 or the absence of a defect(s) on the surface 204 of steel 202. As will be appreciated, during the training or learning phase of the deep neural network, one or more model parameters in the form of weights of the deep neural network for predicting desired outcomes may be optimized. In particular, the model parameters may be optimized such that loss between the predicted outcomes and the desired outputs is minimized to ensure that the predicted outcomes closely match the values of the desired outputs. Consequent to the training phase of the deep neural network, a model 222 such as a defect detection model that is configured to provide an output, in real-time, in the form of an AI based “detection of presence of defect(s)” on the surface 204 of the steel 202 is generated.
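The training phase described in paragraphs [0061]-[0063] may be sketched with a standard supervised loop. PyTorch, the small stand-in architecture, the Adam optimizer, and the binary cross-entropy loss are all illustrative assumptions rather than choices made in the specification.

```python
import torch
from torch import nn

# A small stand-in network for the defect detection model; the architecture is
# assumed for illustration only, not taken from the specification.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # defect present (1) versus absent (0)


def train(loader, epochs=10):
    """Optimize the network weights so that the loss between the predicted
    outcomes and the desired outputs (defect / no-defect labels) is minimized.
    `loader` is assumed to yield batches of (segments, labels) tensors."""
    for _ in range(epochs):
        for segments, labels in loader:
            optimizer.zero_grad()
            logits = model(segments).squeeze(1)
            loss = loss_fn(logits, labels.float())
            loss.backward()   # compute gradients of the loss
            optimizer.step()  # update the model weights
```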
[0064] Similarly, in another example, an AI model 222 such as a defect classification model configured to facilitate an AI-based classification of any detected defects may be generated. The defect classification model 222 may include a CNN, a deep neural network, and the like. In particular, the neural network is trained with appropriate inputs corresponding to defect classification. As previously noted, some non-limiting examples of the defects include cracks, pits, scratches, holes, blisters, and the like. In one non-limiting example, a dataset including one or more defect patterns corresponding to one or more defects, types of defects, or a combination thereof that may occur on the surface 204 during the manufacturing of steel 202 may be provided to the deep neural network as input. Additionally, one or more desired outputs may also be provided to the deep neural network. In the example of the defect classification model 222, the desired outputs may include a classification of one or more detected defects.
[0065] Subsequently, the deep neural network may be trained to provide an outcome in the form of a classification of a detected defect. Consequent to the training phase of the deep neural network, an AI model 222 such as a defect classification model that is configured to provide an output, in real-time, in the form of an AI based “classification of defects” on the surface 204 of the steel 202 is generated. In one non-limiting example, the model 222 so generated may be trained to recognize defect patterns on the surface 204 of steel 202, thereby facilitating the classification of a defect in the identified image segments having the regions of interest.
[0066] In yet another example, an AI model 222 configured to facilitate an AI-based defect detection and defect classification may be generated. In this example, the AI-based approach provided by the model 222 is configured to detect the presence of defects on the surface 204 of steel 202 and classify the detected defects in real-time. The model 222 may include a CNN, a deep neural network, and the like.
[0067] The models 222 so generated may be stored in the data repository 228. In other embodiments, the models 222 may be transmitted for storage in a remote facility. It may be noted that in some embodiments, the models 222 may be generated offline and stored in the data repository 228, for example.
[0068] With continuing reference to FIG. 2, the surface inspection system 108 may include a display 224 and a user interface 226. The display 224 and the user interface 226 may overlap in some embodiments such as a touch screen. Further, in some embodiments, the display 224 and the user interface 226 may include a common area. The display 224 may be configured to visualize or present the identified regions of interest, the image segments having the regions of interest, the detected defects, the classification of the detected defects, and any other information such as location coordinates of the defects, annotations, and the like. In certain other embodiments, the detected defects, the classifications, the identified regions of interest, the corresponding image segments, location coordinates of the defects, annotations, and the like may be stored in a local data repository 228, a remote data repository, cloud, and the like.
[0069] The user interface 226 of the surface inspection system 108 may include a human interface device (not shown) that is configured to aid a user such as an inspector in providing inputs or manipulating the images, location coordinates, annotations, and the like visualized on the display 224. The user interface 226 may be used to add labels, annotations, and other information to the image frames. In certain embodiments, the human interface device may include a trackball, a joystick, a stylus, a mouse, or a touch screen. It may be noted that the user interface 226 may be configured to aid the user in navigating through the inputs and/or outcomes/indicators generated by the surface inspection system 108.
[0070] The system 200 as described hereinabove provides an AI-based approach for an automated, real-time surface inspection. Implementing the surface inspection system 108 that includes the ROI identification platform 218, the defect detection unit 220, and the models 222 as described hereinabove aids in enhancing the performance of the system 200. More particularly, the system 200 that includes the ROI identification platform 218, the defect detection unit 220, and the models 222 provides a robust framework for the automated, real-time identification of the presence of defects on the surface 204 and/or the classification of the defects by identifying and processing only the identified regions of interest to ascertain the presence of any defects on the surface 204. This in turn circumvents the need for computationally intensive processing of vast amounts of data corresponding to the entire surface 204 of the steel 202, thereby decreasing the computational load on the surface inspection system 108. Moreover, the reduction in the computational load of the surface inspection system 108 advantageously allows the surface inspection to be implemented on a low compute resource device such as an edge device. Additionally, the design of the surface inspection system 108 presented hereinabove allows the surface inspection system 108 to be plugged in between the camera feed and the workstation such as the computer 114 (see FIG. 1) and handle processing of the feed data on a real-time basis. The working of the system 200 may be better understood with reference to FIGs. 3-5.
[0071] Embodiments of the exemplary methods of FIGs. 3-4 may be described in a general context of computer executable instructions on computing systems or a processor. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types.
[0072] Moreover, the embodiments of the exemplary methods may be practised in a distributed computing environment where optimization functions are performed by remote processing devices that are linked through a wired and/or wireless communication network. In the distributed computing environment, the computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0073] In addition, in FIGs. 3-4, the exemplary methods are illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, firmware, or combinations thereof. It may be noted that the various operations are depicted in the blocks to illustrate the functions that are performed. In the context of software, the blocks represent computer instructions that, when executed by one or more processing subsystems, perform the recited operations.
[0074] Moreover, the order in which the exemplary methods are described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary methods disclosed herein, or equivalent alternative methods. Further, certain blocks may be deleted from the exemplary methods or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.
[0075] Referring to FIG. 3, a flow chart 300 of an exemplary method for surface inspection, in accordance with aspects of the present specification, is presented. In particular, the method 300 entails real-time monitoring and inspection of a surface of an object such as steel being manufactured. The method 300 of FIG. 3 is described with reference to the components of FIGs. 1-2. Moreover, in certain embodiments, the method 300 may be performed by the surface inspection system 108.
[0076] As depicted in FIG. 3, at step 302, the surface 204 of the object being inspected, such as steel 202, is illuminated. As previously noted with reference to FIGs. 1-2, the illumination unit 110 is used to illuminate the surface 204. In accordance with aspects of the present specification, a dark field illumination source 210 is used to optimally illuminate the surface 204. As previously noted, a dark field image may be utilized to identify one or more regions of interest and confirm the presence of any defects 206 on the surface 204. In some embodiments, a bright field illumination source 212 may also be employed to enhance the illumination of the surface 204. In one example, a bright field image may be utilized to account for any variations in defect type and/or geometry of the defect, as previously noted.
[0077] Moreover, in accordance with aspects of the present specification, once the surface is illuminated by the dark field illumination source 210, a video capture of the surface 204 may be obtained, as depicted by step 304. In one example, the camera unit 112 may be used to obtain a video capture of the surface 204 being inspected. In particular, the camera unit 112 is configured to acquire the video capture of the surface 204 based on light reflected from the surface 204 and directed towards a FOV of the camera unit 112. This video may be utilized to identify one or more regions of interest on the surface 204 and confirm presence of one or more defects 206.
[0078] In some embodiments, the camera unit 112 may also be configured to acquire a video capture of the surface 204 based on illumination provided by the bright field illumination source 212 and light reflected from the surface 204 and directed towards a FOV of the camera unit 112. This video may be employed to account for any variations in defect type and/or geometry of the defect.
[0079] Subsequently, the video capture may be communicated to the surface inspection system 108 for further processing. In particular, the camera unit 112 is configured to communicate the video capture of the surface 204 to the acquisition subsystem 214 of the surface inspection system 108. Further, at step 306, the acquisition subsystem 214 is configured to obtain one or more image frames from the video capture of the surface 204.
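For illustration only, the frame extraction of step 306 could be sketched in Python using the OpenCV library as follows; the function name acquire_frames, the source parameter, and the max_frames cap are assumptions introduced here and form no part of the specification.

    import cv2

    def acquire_frames(source, max_frames=None):
        # Open a camera index or a video file and collect decoded frames
        # from the video capture of the inspected surface.
        capture = cv2.VideoCapture(source)
        frames = []
        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:
                break
            frames.append(frame)
            if max_frames is not None and len(frames) >= max_frames:
                break
        capture.release()
        return frames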
[0080] Moreover, as depicted by step 308, each image frame is processed to identify one or more regions of interest. As previously noted, a region of interest is representative of a section of the image frame that may potentially include one or more defects. In one example, the ROI identification platform 218 is configured to process the image frame to identify a region of interest on the surface 204. The identification of a region of interest on the surface 204 of step 308 will be described in greater detail with reference to FIG. 4.
[0081] Consequent to the processing of the image frame at step 308, one or more regions of interest on the image frame may be identified in real-time. In accordance with aspects of the present specification, the identified one or more regions of interest in the image frame are indicative of a possible presence of one or more defects on the surface 204.
[0082] Currently available surface inspection techniques typically entail processing data corresponding to the entire surface of the object being inspected, thereby resulting in a heavy computational load on the inspection system. To circumvent these shortcomings of the currently available surface inspection techniques, in accordance with exemplary aspects of the present specification, only the identified regions of interest on the surface 204 may be further processed to detect presence of any defects on the surface 204, as indicated by step 310. In one example, the defect detection unit 220 may be configured to perform step 310. In accordance with aspects of the present specification, the defect detection unit 220 may be configured to employ one or more models 222 such as a defect detection model to detect presence of any defects in the identified regions of interest.
[0083] Furthermore, as indicated by step 312, the detected defects may be processed to classify a type of the detected defect in real-time. Some non-limiting examples of defects include cracks, pits, scratches, holes, blisters, and the like. In one example, the defect detection unit 220 may be configured to perform step 312. In accordance with aspects of the present specification, the defect detection unit 220 may be configured to employ one or more models 222 such as a defect classification model to classify the detected defects. However, in some embodiments, the defect detection unit 220 may employ a model 222 that is configured to perform both the functions of identifying the presence of defects on the surface 204 of steel 202 and classifying the detected defects in real-time.
[0084] As previously noted, in certain embodiments, the ROI identification platform 218 may be configured to maintain one or more models 222. In particular, the ROI identification platform 218 may be configured to generate the models 222 by training a neural network to aid in the detection and/or classification of defects in the regions of interest. Additionally, the ROI identification platform 218 may also be configured to host the models 222. The models 222 may be employed to detect the presence of a defect in the identified regions of interest. The models 222 may also be configured to classify the detected defects.
[0085] Processing only the regions of interest to determine the presence of any defects as depicted by step 310 results in a substantial reduction in computational load on the surface inspection system 108, which in turn permits real-time surface inspection. Additionally, the reduction in computational load on the surface inspection system 108 also permits running the real-time surface inspection on low compute resource devices such as an edge device.
[0086] Subsequently, if the presence of any defects is detected in the regions of interest at step 310, the identified defects and/or their corresponding classifications (see step 312) may be presented to a user such as an inspector, as indicated by step 314. Moreover, coordinates of the location of the detected defects may also be obtained. The locations and/or other annotations may be added to the image frames. Additionally, the identified regions of interest and image segments corresponding to the regions of interest may be presented to the user for any follow-up action or further analysis. In certain embodiments, the detected defects, their classifications, the identified regions of interest, the corresponding image segments, the location coordinates of the defects, annotations, and the like may be presented to the user via visualization on the display 224. In other embodiments, the information may be represented in the form of a graphic, a chart, or any other form of audio and/or visual representation. Additionally, in some embodiments, information related to the detected defects, the identified regions of interest, the corresponding image segments, location coordinates of the defects, annotations, and the like may be provided to facilitate further analysis, recommendations, and/or planning. In certain other embodiments, the detected defects, the identified regions of interest, the corresponding image segments, location coordinates of the defects, annotations, and the like may be stored in a local data repository 228, a remote data repository, the cloud, and the like. In addition, reports may be generated to include the defects, the classification of the defects, the regions of interest, the notes, the labels, the annotations, location coordinates of the defects, or combinations thereof.
[0087] FIG. 4 is a flow chart 400 of an exemplary detailed method describing steps 308-312 of FIG. 3. In particular, the method 400 describes in detail the processing of each image frame to identify one or more regions of interest (step 308), in accordance with aspects of the present specification. Additionally, the method 400 also describes the processing of the identified regions of interest to detect presence of one or more defects (step 310) and the classification of the detected defects (step 312). The method 400 of FIG. 4 is described with reference to the components of FIGs. 1-3. Moreover, in certain embodiments, steps 402-428 of method 400 may be performed by the ROI identification platform 218 and the defect detection unit 220 in conjunction with the model(s) 222.
[0088] The method starts at step 402, where, to facilitate the identification of regions of interest, each received image frame is divided into a plurality of segments. In one example, the image frame may be divided into 32 image segments. It may be noted that for ease of description, steps 402-428 of method 400 are described with reference to the processing of one image frame. The same processing of steps 402-428 may be applicable to each image frame.
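As a minimal sketch of step 402, and assuming the 32 segments are arranged as a 4 x 8 grid (the grid shape is an assumption; the specification states only the count), the division could look as follows:

    def split_into_segments(frame, rows=4, cols=8):
        # Divide an image frame into rows * cols segments (32 by default,
        # matching the example in paragraph [0088]).
        h, w = frame.shape[:2]
        segments = []
        for r in range(rows):
            for c in range(cols):
                segments.append(frame[r * h // rows:(r + 1) * h // rows,
                                      c * w // cols:(c + 1) * w // cols])
        return segments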
[0089] The segmented image frame may then be processed by the ROI identification platform 218 to identify one or more regions of interest. As previously noted, the region of interest refers to a section on the surface 204 of the steel 202 being manufactured/inspected that may potentially include one or more defects 206. In certain embodiments, as depicted by step 404, the plurality of image segments (image segments) in the image frame may be reformatted to generate a format that is compatible with automated analysis and is platform independent. In addition, the segmented image frame may be pre-processed to remove any noise in the image data, as indicated by step 406. Also, in some embodiments, at step 408, a color space conversion may be performed on the image data.
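A hedged sketch of the pre-processing of steps 404-408 follows; the choice of the non-local-means filter and of a BGR-to-grayscale conversion are assumptions, as the specification does not fix a particular filter or target color space:

    import cv2

    def preprocess_segment(segment):
        # Step 406: remove noise from the image data.
        denoised = cv2.fastNlMeansDenoisingColored(segment, None, 10, 10, 7, 21)
        # Step 408: color space conversion (grayscale assumed here).
        return cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)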
[0090] As previously noted, a dark image is obtained in the absence of any defects when the dark field illumination source 210 is used to illuminate the surface 204 at a shallow angle since a majority of the incident light is reflected from the surface 204 out of the FOV of the camera unit 112. However, in the presence of a defect 206 on the surface 204, the incident light from the dark field illumination source 210 is scattered and some of the scattered light may be directed towards the camera unit 112 resulting in a dark image with some distinctive features indicating presence of a defect 206. This dark image having some distinctive features may be representative of a signature of the presence of a defect 206 on the surface 204. Therefore, it is desirable to process the image segments in the image frame to identify presence of a signature, which in turn may be indicative of the presence of a defect 206 on the surface 204.
[0091] Furthermore, in accordance with aspects of the present specification, in order to facilitate the identification of the presence of a signature indicative of a defect 206 on the surface 204, the plurality of image segments in the image frame may be transformed from a spatial domain to a frequency domain, as depicted by step 410. By way of a non-limiting example, the plurality of image segments in the image frame may be transformed from the spatial domain to the frequency domain via use of a discrete cosine transform (DCT), a wavelet transform, and the like. It may be noted that processing the image segments in the image frame via the DCT aids in separating the image frame into various sections having differing information content with respect to the visual quality of the image frame.
[0092] As previously noted, in accordance with exemplary aspects of the present specification, the sections of the image frame having low information content may be indicative of a signature associated with the presence of a defect 206 on the surface 204. Furthermore, as previously noted with reference to FIG. 2, when the surface 204 is illuminated by light from the illumination source(s) 210, 212 in the illumination unit 110, the light interacts with the material of the surface 204 and is reflected. Accordingly, sections in an image frame captured by the camera unit 112 that correspond to this interaction between the light and the surface 204 include high information content. However, in the presence of a defect such as the defect 206 on the surface 204, there is a lower possibility of the light interacting with the material inside the defect. Hence, sections in the image frame captured by the camera unit 112 that correspond to this interaction between the light and the defect 206 typically include low information content. Accordingly, it is desirable to segregate sections having low information content from sections having non-low/high information content. To that end, in one embodiment, at step 412, a threshold may be applied to the image frame in the frequency domain to segregate the sections having low information content from the sections having high information content. Further, the sections having high information content may be set to zero, as depicted by step 414.
[0093] Subsequently, at step 416, the plurality of image segments in the image frame may be transformed from the frequency domain back to the spatial domain. In one embodiment, the plurality of image segments in the image frame may be transformed from the frequency domain to the spatial domain via use of an inverse DCT (IDCT).
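Steps 410-416 could be sketched as below using SciPy's two-dimensional DCT; treating low-magnitude coefficients as "low information content" and the value of freq_threshold are interpretive assumptions, not definitions from the specification:

    import numpy as np
    from scipy.fft import dctn, idctn

    def suppress_high_information_content(gray_segment, freq_threshold):
        # Step 410: transform from the spatial domain to the frequency domain.
        coeffs = dctn(gray_segment.astype(float), norm="ortho")
        # Steps 412-414: threshold the coefficients and set the sections
        # having non-low/high information content to zero.
        coeffs[np.abs(coeffs) >= freq_threshold] = 0.0
        # Step 416: inverse DCT back to the spatial domain.
        return idctn(coeffs, norm="ortho")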
[0094] Furthermore, in accordance with aspects of the present specification, it is desirable to identify one or more regions of interest. Accordingly, the image frame is processed to identify regions or locations on the surface 204 that are indicative of the likelihood of one or more defects. In one embodiment, the regions of interest that potentially include one or more defects 206 may be identified by detecting contiguous pixels in the image frame having low information content. To that end, another threshold such as an intensity threshold may be applied to the image frame in the spatial domain to detect similarity among contiguous pixels based on intensity levels. Consequently, regions associated with contiguous pixels having low information content may be identified, as depicted by step 418. In one non-limiting example, it may be desirable to detect three or more contiguous pixels having similar intensity levels to facilitate the identification of a region of interest.
[0095] Consequent to this processing, regions corresponding to the identified three or more contiguous pixels having low information content may be indicative of a signature of the presence of a defect 206 on the surface 204. Accordingly, these regions may be categorized as regions of interest that may potentially include one or more defects 206. The processing of step 418 facilitates the identification of one or more regions of interest 420.
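One plausible rendering of step 418 uses a connected-component analysis to find three or more contiguous pixels of similar intensity; the direction of the comparison against intensity_threshold and the use of OpenCV's connectedComponentsWithStats are assumptions:

    import cv2
    import numpy as np

    def find_regions_of_interest(spatial_segment, intensity_threshold, min_pixels=3):
        # Threshold the reconstructed spatial-domain image (step 418).
        binary = (spatial_segment > intensity_threshold).astype(np.uint8)
        # Group contiguous pixels and keep components of three or more pixels,
        # per the non-limiting example in paragraph [0094].
        num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        rois = []
        for label in range(1, num_labels):  # label 0 is the background
            if stats[label, cv2.CC_STAT_AREA] >= min_pixels:
                x, y, w, h = stats[label, :4]
                rois.append((x, y, w, h))
        return rois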
[0096] Once the one or more regions of interest 420 have been identified, one or more image segments in the image frame that correspond to the identified regions of interest 420 may be identified, as indicated by step 422. In one embodiment, the captured (x, y) coordinates associated with the identified regions of interest may be employed to identify corresponding image segments in the image frame. Consequent to the processing of the image frame by steps 402-422, one or more regions of interest 420 and one or more image segments in the image frame that include the one or more regions of interest 420 are identified. Moreover, only the image segments having the identified regions of interest may be communicated to the defect detection unit 220 for further processing. Also, in some embodiments, the ROI identification platform 218 may be configured to communicate the image segments having the regions of interest to the user of the system 200.
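The coordinate-to-segment mapping of step 422 might then be as simple as the following; the 4 x 8 grid again mirrors the assumed split_into_segments() sketch above:

    def segment_index_for_roi(x, y, frame_shape, rows=4, cols=8):
        # Map the captured (x, y) coordinates of a region of interest to
        # the index of the image segment that contains them.
        h, w = frame_shape[:2]
        row = min(y * rows // h, rows - 1)
        col = min(x * cols // w, cols - 1)
        return row * cols + col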
[0097] Further, at step 424, in accordance with aspects of the present specification, only the image segments having the identified regions of interest are processed to identify presence of any defects. It may be noted that processing only the image segments having the identified regions of interest advantageously results in a reduced computational load on the surface inspection system 108 and thereby facilitates the detection of defects in real-time. In one embodiment, the image segments having the identified regions of interest may be processed via use of one or more models 222 to detect presence of any defects 206 on the surface 204. As previously noted, the model 222, such as a defect detection model, may include a neural network that is trained to perform an operation or task such as the detection of presence of a defect in the identified image segments having the regions of interest. Processing the image segments having the identified regions of interest via the defect detection model 222 provides, in real-time, an outcome in the form of a detection of the presence of a defect(s) 206 on the surface 204 of the steel 202 or the absence of a defect(s) on the surface 204 of the steel 202. In particular, the defect detection model is configured to provide an outcome, in real-time, in the form of an AI based "detection of presence of defect(s)" on the surface 204 of the steel 202. Also, in certain embodiments, the model 222 may be retrieved from the local repository 228, a remote repository, the cloud, and the like.
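Step 424 could be sketched as below; defect_model stands in for one of the trained models 222, and its callable interface returning a boolean is an assumption made only for illustration:

    def detect_defects(roi_segments, defect_model):
        # Process only the image segments having identified regions of
        # interest (step 424); all other segments are never evaluated,
        # which is what keeps the computational load low.
        return [defect_model(segment) for segment in roi_segments]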
[0098] Moreover, in certain embodiments, once the presence of one or more defects on the surface 204 is established, the defect detection unit 220 may also be configured to classify a type of the identified defects, as depicted in step 426. Some non-limiting examples of defects include cracks, pits, scratches, holes, blisters, and the like. Also, as previously noted, the model 222 such as a defect classification model may include a neural network that is trained to perform an operation such as the classification of the type of defect detected in the identified image segments having the regions of interest by recognizing defect patterns of the detected defects 206. In one example, the model 222 may be trained to classify the detected defects based on a similarity metric associated with a defect pattern embedded in the neural network model, where the similarity metric is configured to bin the region of interest into a class defect label using multiple attributes such as area of the region of interest, perimeter of the region of interest, shape geometry of the region of interest, and the like. Also, the model 222 may be retrieved from the local repository 228, a remote repository, the cloud, and the like.
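The attributes named for the similarity metric (area, perimeter, shape geometry) might be computed as follows before being fed to a classification model; the circularity measure is an illustrative stand-in for "shape geometry" and is not prescribed by the specification:

    import cv2
    import numpy as np

    def roi_attributes(binary_roi):
        # Extract the outer contour of the region of interest.
        contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        contour = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, closed=True)
        # Circularity approaches 1.0 for round regions such as pits or holes.
        circularity = 4.0 * np.pi * area / (perimeter ** 2) if perimeter else 0.0
        return {"area": area, "perimeter": perimeter, "circularity": circularity}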
[0099] Consequent to the processing of steps 402-426, one or more defects on the surface 204 may be detected and classified. These defects are generally represented by reference numeral 428. Subsequently, these defects 428 may be used for further analysis or processing. Additionally, in certain embodiments, notes, labels, annotations, and the like may be added to the image frames either manually or automatically. Also, the defects 428, the classification of the defects, the regions of interest 420, location coordinates of the defects 428, the notes, the labels, the annotations, or combinations thereof may be provided to facilitate further analysis or processing, as indicated by step 314 of FIG. 3. Moreover, reports may be generated to include the defects 428, the classification of the defects, the regions of interest 420, the notes, the labels, the annotations, or combinations thereof. These reports may be stored in the data repository 228, for example.
[0100] The methods 300, 400 for the real-time surface inspection facilitate the enhanced detection of presence of defects 428 on the surface 204. Since these methods 300, 400 only process data corresponding to image segments having the identified regions of interest, the need for the computationally intensive processing of vast amounts of data is advantageously circumvented, thereby allowing real-time surface inspection to be implemented on a low compute resource device such as an edge device.
[0101] Referring now to FIG. 5, a schematic representation 500 of one embodiment 502 of a digital processing system implementing the surface inspection system 108 (see FIG. 1), in accordance with aspects of the present specification, is depicted. Also, FIG. 5 is described with reference to the components of FIGs. 1-4.
[0102] It may be noted that while the ROI identification platform 218 and defect detection unit 220 are shown as being a part of the surface inspection system 108, in certain embodiments, the ROI identification platform 218 and defect detection unit 220 may also be integrated into end user systems such as, but not limited to, an edge device, such as a phone or a tablet. Moreover, the example of the digital processing system 502 presented in FIG. 5 is for illustrative purposes. Other designs are also anticipated.
[0103] The digital processing system 502 may contain one or more processors such as a central processing unit (CPU) 504, a random access memory (RAM) 506, a secondary memory 508, a graphics controller 510, a display unit 512, a network interface 514, and an input interface 516. It may be noted that the components of the digital processing system 502 except the display unit 512 may communicate with each other over a communication path 518. In certain embodiments, the communication path 518 may include several buses, as is well known in the relevant arts.
[0104] The CPU 504 may execute instructions stored in the RAM 506 to provide several features of the present specification. Moreover, the CPU 504 may include multiple processing units, with each processing unit potentially being designed for a specific task. Alternatively, the CPU 504 may include only a single general-purpose processing unit.
[0105] Furthermore, the RAM 506 may receive instructions from the secondary memory 508 using the communication path 518. Also, in the embodiment of FIG. 5, the RAM 506 is shown as including software instructions constituting a shared operating environment 520 and/or other user programs 522 (such as other applications, DBMS, and the like). In addition to the shared operating environment 520, the RAM 506 may also include other software programs such as device drivers, virtual machines, and the like, which provide a (common) run time environment for execution of other/user programs. Moreover, in certain embodiments, the RAM 506 may also include a model 524. The model 524 may correspond to the models 222 (see FIG. 2).
[0106] With continuing reference to FIG. 5, the graphics controller 510 is configured to generate display signals (e.g., in RGB format) for display on the display unit 512 based on data/instructions received from the CPU 504. The display unit 512 may include a display screen to display images defined by the display signals. Furthermore, the input interface 516 may correspond to a keyboard and a pointing device (e.g., a touchpad, a mouse, and the like) and may be used to provide inputs. In addition, the network interface 514 may be configured to provide connectivity to a network (e.g., using Internet Protocol), and may be used to communicate with other systems connected to a network, for example.
[0107] Moreover, the secondary memory 508 may include a hard drive 526, a flash memory 528, and a removable storage drive 530. The secondary memory 508 may store data generated by the system 100 (see FIG. 1) and software instructions (for example, for implementing the various features of the present specification), which enable the digital processing system 502 to provide several features in accordance with the present specification. The code/instructions stored in the secondary memory 508 may either be copied to the RAM 506 prior to execution by the CPU 504 for higher execution speeds or may be directly executed by the CPU 504.
[0108] Some or all of the data and/or instructions may be provided on a removable storage unit 532, and the data and/or instructions may be read and provided by the removable storage drive 530 to the CPU 504. Further, the removable storage unit 532 may be implemented using medium and storage format compatible with the removable storage drive 530 such that the removable storage drive 530 can read the data and/or instructions. Thus, the removable storage unit 532 includes a computer readable (storage) medium having stored therein computer software and/or data. However, the computer (or machine, in general) readable medium can also be in other forms (e.g., non-removable, random access, and the like).
[0109] It may be noted that as used herein, the term “computer program product” is used to generally refer to the removable storage unit 532 or a hard disk installed in the hard drive 526. These computer program products are means for providing software to the digital processing system 502. The CPU 504 may retrieve the software instructions and execute the instructions to provide various features of the present specification.
[0110] Also, the term “storage media/medium” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may include non-volatile media and/or volatile media. Non-volatile media include, for example, optical disks, magnetic disks, or solid-state drives, such as the secondary memory 508. Volatile media include dynamic memory, such as the RAM 506. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape or any other magnetic data storage medium, a CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, or any other memory chip or cartridge.
[0111] Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media may include coaxial cables, copper wire, and fiber optics, including the wires that constitute the communication path 518. Moreover, transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
[0112] Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present specification. Thus, appearances of the phrases “in one embodiment,” “in an embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0113] Furthermore, the described features, structures, or characteristics of the specification may be combined in any suitable manner in one or more embodiments. In the description presented hereinabove, numerous specific details are provided such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, and the like, to provide a thorough understanding of embodiments of the specification.
[0114] The aforementioned components may be dedicated hardware elements such as circuit boards with digital signal processors or may be software running on a general-purpose computer or processor such as a commercial, off-the-shelf personal computer (PC). The various components may be combined or separated according to various embodiments of the invention.
[0115] Furthermore, the foregoing examples, demonstrations, and process steps such as those that may be performed by the system may be implemented by suitable code on a processor-based system, such as a general-purpose or special-purpose computer. It should also be noted that different implementations of the present specification may perform some or all of the steps described herein in different orders or substantially concurrently, that is, in parallel. Furthermore, the functions may be implemented in a variety of programming languages, including but not limited to C++, Python, and Java. Such code may be stored or adapted for storage on one or more tangible, machine readable media, such as on data repository chips, local or remote hard disks, optical disks (that is, CDs or DVDs), memory or other media, which may be accessed by a processor-based system to execute the stored code. Note that the tangible media may include paper or another suitable medium upon which the instructions are printed. For instance, the instructions may be electronically captured via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in the data repository or memory.
[0116] Embodiments of the systems and methods for surface inspection described hereinabove advantageously present a robust AI-based framework for the real-time automated inspection of the surface of steel to monitor and detect any defects on the surface of steel. Additionally, the systems and methods presented herein aid in enhancing the performance of the system by identifying and processing only the identified regions of interest to ascertain the presence of any defects on the surface, thereby decreasing the computational load on the surface inspection system. Further, the reduction in the computational load of the surface inspection system advantageously allows the real-time automated surface inspection to be implemented on a low compute resource device such as an edge device. Moreover, implementing the systems and methods for surface inspection as described hereinabove advantageously provides enhanced timely detection of defects and higher accuracy rates of defect detection, thereby maximizing yield, while minimizing discards due to defects. Additionally, the design of the surface inspection system 108 presented hereinabove allows the surface inspection system 108 to be plugged in between the camera feed and the workstation and facilitates processing of the feed data on a real-time basis.
[0117] Although specific features of embodiments of the present specification may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments.
[0118] While only certain features of the present specification have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the present specification is intended to cover all such modifications and changes as fall within the true spirit of the invention.
CLAIMS:
WE CLAIM:
1. A system (100, 200) for real-time inspection of a surface (104, 204) of an object (102, 202), the system (100, 200) comprising:
an illumination unit (110) configured to illuminate the surface (104, 204) of the object (102, 202);
a camera unit (112) configured to acquire video capture of the surface (104, 204);
a surface inspection system (108) comprising:
an acquisition subsystem (214) configured to:
receive the video capture of the object (102, 202) being inspected;
obtain one or more image frames from the video capture;
a processing subsystem (216) in operative association with the acquisition subsystem (214) and comprising:
a region of interest identification platform (218) configured to process each image frame to identify one or more regions of interest (420), and wherein to identify the one or more regions of interest (420) the region of interest identification platform (218) is configured to:
transform a plurality of segments in the image frame from a spatial domain to a frequency domain;
threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content;
set one or more sections having non-low or high information content to zero;
transform the plurality of segments in the image frame from the frequency domain to the spatial domain;
threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest (420);
identify one or more image segments in the image frame corresponding to the one or more regions of interest (420); and
an interface unit (224, 226) configured to provide, in real-time, the one or more regions of interest (420) to facilitate analysis.
2. The system (100, 200) of claim 1, wherein the illumination unit (110) comprises one or more dark field illumination sources (210), one or more bright field illumination sources (212), or a combination thereof.
3. The system (100, 200) of claim 1, wherein to process each image frame to identify the one or more regions of interest (420), the region of interest identification platform (218) is configured to:
divide each image frame into a plurality of segments;
reformat data in the plurality of segments to generate a format that is platform independent and compatible with automated analysis;
pre-process the plurality of segments to remove noise; and
perform color space conversion on the plurality of segments in the image frame.
4. The system (100, 200) of claim 1, wherein to transform the plurality of segments in the image frame from the spatial domain to the frequency domain the region of interest identification platform (218) is configured to process the plurality of segments in the image frame via a discrete cosine transform, a wavelet transform, or a combination thereof, and wherein to transform the plurality of segments in the image frame from the frequency domain to the spatial domain the region of interest identification platform (218) is configured to process the plurality of segments in the image frame via an inverse discrete cosine transform.
5. The system (100, 200) of claim 1, wherein the processing subsystem (216) further comprises a defect detection unit (220) configured to:
process, based on at least one artificial intelligence model (222), each of the one or more regions of interest (420) to identify presence of one or more defects (106, 206, 428); and
classify, based on at least one artificial intelligence model (222), the identified one or more defects (106, 206, 428).
6. The system (100, 200) of claim 5, wherein the region of interest identification platform (218) is configured to retrieve one or more artificial intelligence models (222).
7. The system (100, 200) of claim 5, wherein to identify the presence of one or more defects (106, 206, 428) the defect detection unit (220) is configured to process each of the one or more regions of interest (420), based on the at least one artificial intelligence model (222), to recognize, in real-time, one or more defect patterns in the one or more image segments corresponding to the one or more regions of interest (420).
8. The system (100, 200) of claim 7, wherein to classify the identified one or more defects (106, 206, 428) the defect detection unit (220) is configured to process the identified one or more defects (106, 206, 428), based on the at least one artificial intelligence model (222), to categorize, in real-time, the one or more defects (106, 206, 428), and wherein the one or more defects (106, 206, 428) comprise cracks, pits, scratches, holes, blisters, or combinations thereof.
9. The system (100, 200) of claim 1, wherein the processing subsystem (216) is configured to perform automated surface inspection, in real-time, on an edge device.
10. The system (100, 200) of claim 1, wherein the region of interest identification platform (218) is further configured to generate one or more artificial intelligence models (222), and wherein the one or more artificial intelligence models (222) are tuned for performing one or more tasks.
11. The system (100, 200) of claim 10, wherein to generate the one or more artificial intelligence models (222) the region of interest identification platform (218) is configured to:
obtain information corresponding to presence of one or more defects, absence of defects, or a combination thereof;
obtain information corresponding to a plurality of defect patterns on a surface (104, 204) of an object (102, 202), a plurality of types of defects on a surface (104, 204) of the object (102, 202), or a combination thereof;
receive an input corresponding to detection of presence of one or more defects, absence of one or more defects, or a combination thereof;
receive an input corresponding to one or more defect types, one or more defect patterns associated with the one or more defect types, or a combination thereof;
receive an input corresponding to one or more desired outcomes, wherein the one or more desired outcomes correspond to detection of presence of one or more defects, detection of absence of one or more defects, one or more defect patterns, one or more defect types, or combinations thereof;
optimize model parameters of a neural network based on the information corresponding to the presence of one or more defects and the absence of one or more defects, the information corresponding to the plurality of types of defects, the plurality of defect patterns, the one or more desired outcomes, or combinations thereof; and
train the neural network to perform the one or more tasks to generate the one or more artificial intelligence models (222).
12. A system (100, 200) for real-time inspection of a surface (104, 204) of an object (102, 202), the system (100, 200) comprising:
an illumination unit (110) configured to illuminate the surface (104, 204) of the object (102, 202);
a camera unit (112) configured to acquire video capture of the surface (104, 204);
a surface inspection system (108) comprising:
an acquisition subsystem (214) configured to:
receive the video capture of the object (102, 202) being inspected;
obtain one or more image frames from the video capture;
a processing subsystem (216) in operative association with the acquisition subsystem (214) and comprising:
a region of interest identification platform (218) configured to process each image frame to identify one or more regions of interest (420), and wherein to identify the one or more regions of interest (420) the region of interest identification platform (218) is configured to:
transform a plurality of segments in the image frame from a spatial domain to a frequency domain;
threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content;
set one or more sections having non-low or high information content to zero;
transform the plurality of segments in the image frame from the frequency domain to the spatial domain;
threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest (420);
identify one or more image segments in the image frame corresponding to the one or more regions of interest (420);
a defect detection unit (220) configured to:
process, based on at least one artificial intelligence model (222), each of the one or more regions of interest (420) to identify presence of one or more defects (106, 206, 428);
classify, based on at least one artificial intelligence model (222), the identified one or more defects (106, 206, 428); and
an interface unit (224, 226) configured to provide, in real-time, the one or more regions of interest (420), the one or more defects (106, 206, 428), the classification of the one or more defects (106, 206, 428), or combinations thereof to facilitate analysis.
13. A method (300, 400) for real-time inspection of a surface (104, 204) of an object (102, 202), the method (300, 400) comprising:
illuminating (302) the surface (104, 204) of the object (102, 202);
acquiring (304) video capture of the surface (104, 204) responsive to the illumination of the surface (104, 204) of the object (102, 202);
obtaining (306) one or more image frames from the video capture;
processing (308) each image frame to identify one or more regions of interest (420), and wherein identifying the one or more regions of interest (420) comprises:
transforming (410) a plurality of segments in the image frame from a spatial domain to a frequency domain;
thresholding (412) the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content;
setting (414) one or more sections having non-low or high information content to zero;
transforming (416) the plurality of segments in the image frame from the frequency domain to the spatial domain;
thresholding (418) the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest (420);
identifying (422) one or more image segments in the image frame corresponding to the one or more regions of interest (420); and
providing (314), in real-time, the one or more regions of interest (420) to facilitate analysis.
14. The method (300, 400) of claim 13, wherein the illumination unit (110) comprises one or more dark field illumination sources (210), one or more bright field illumination sources (212), or a combination thereof.
15. The method (300, 400) of claim 13, wherein processing (308) each image frame to identify the one or more regions of interest (420) comprises:
dividing (402) each image frame into a plurality of segments;
reformatting (404) data in the plurality of segments to generate a format that is platform independent and compatible with automated analysis;
pre-processing (406) the plurality of segments to remove noise; and
performing (408) color space conversion on the plurality of segments in the image frame.
16. The method (300, 400) of claim 13, wherein transforming (410) the plurality of segments in the image frame from the spatial domain to the frequency domain comprises processing the plurality of segments in the image frame via a discrete cosine transform, a wavelet transform, or a combination thereof, and wherein transforming (416) the plurality of segments in the image frame from the frequency domain to the spatial domain comprises processing the plurality of segments in the image frame via an inverse discrete cosine transform.
17. The method (300, 400) of claim 13, further comprising:
processing (308), based on at least one artificial intelligence model (222), each of the one or more image segments having the one or more regions of interest (420) to identify presence of one or more defects (106, 206, 428); and
classifying (312, 426), based on at least one artificial intelligence model (222), the identified one or more defects (106, 206, 428).
18. The method (300, 400) of claim 17, wherein identifying the presence of one or more defects (106, 206, 428) comprises processing (424) each of the one or more image segments having the one or more regions of interest (420), based on the at least one artificial intelligence model (222), to recognize, in real-time, one or more defect patterns in the one or more image segments corresponding to the one or more regions of interest (420).
19. The method (300, 400) of claim 17, wherein classifying the identified one or more defects (106, 206, 428) comprises processing the identified one or more defects (106, 206, 428), based on the at least one artificial intelligence model (222), to categorize, in real-time, the one or more defects (106, 206, 428), and wherein the one or more defects (106, 206, 428) comprise cracks, pits, scratches, holes, blisters, or combinations thereof.
20. The method (300, 400) of claim 13, further comprising performing automated surface inspection, in real-time, on an edge device.
21. The method (300, 400) of claim 13, further comprising retrieving one or more artificial intelligence models (222).
22. The method (300, 400) of claim 13, further comprising generating one or more artificial intelligence models (222), wherein the one or more artificial intelligence models (222) are tuned for performing the one or more tasks, and wherein generating the one or more artificial intelligence models (222) comprises:
obtaining information corresponding to presence of one or more defects, absence of defects, or a combination thereof;
obtaining information corresponding to a plurality of defect patterns on a surface (104, 204) of an object (102, 202), a plurality of types of defects on a surface (104, 204) of the object (102, 202), or a combination thereof;
receiving an input corresponding to detection of presence of one or more defects, absence of one or more defects, or a combination thereof;
receiving an input corresponding to one or more defect types, one or more defect patterns associated with the one or more defect types, or a combination thereof;
receiving an input corresponding to one or more desired outcomes, wherein the one or more desired outcomes correspond to detection of presence of one or more defects, detection of absence of one or more defects, information corresponding to a plurality of types of defects, a plurality of defect patterns, or combinations thereof;
optimizing model parameters of a neural network based on the information corresponding to the presence of one or more defects and the absence of one or more defects, the information corresponding to the plurality of types of defects, the plurality of defect patterns, the one or more desired outcomes, or combinations thereof; and
training the neural network to perform the one or more tasks to generate the one or more artificial intelligence models (222).
23. A surface inspection system (108) for real-time inspection of a surface (104, 204) of an object (102, 202), the system (108) comprising:
an acquisition subsystem (214) configured to:
receive the video capture of the object (102, 202) being inspected;
obtain one or more image frames from the video capture;
a processing subsystem (216) in operative association with the acquisition subsystem (214) and comprising:
a region of interest identification platform (218) configured to process an image frame to identify one or more regions of interest (420), and wherein to identify the one or more regions of interest (420) the region of interest identification platform (218) is configured to:
transform a plurality of segments in the image frame from a spatial domain to a frequency domain;
threshold the plurality of segments in the image frame in the frequency domain to identify one or more sections having low information content;
set one or more sections having non-low or high information content to zero;
transform the plurality of segments in the image frame from the frequency domain to the spatial domain;
threshold the image frame in the spatial domain to identify one or more regions corresponding to contiguous pixels having low information content to identify the one or more regions of interest (420);
identify one or more image segments in the image frame corresponding to the one or more regions of interest (420);
a defect detection unit (220) configured to:
process, based on at least one artificial intelligence model (222), each of the one or more regions of interest (420) to identify presence of one or more defects (106, 206, 428); and
classify, based on the at least one artificial intelligence model (222), the identified one or more defects (106, 206, 428).

Documents

Orders

Section Controller Decision Date
43 Karteek Viswanadha 2025-05-14

Application Documents

# Name Date
1 202341024758-PROVISIONAL SPECIFICATION [31-03-2023(online)].pdf 2023-03-31
2 202341024758-POWER OF AUTHORITY [31-03-2023(online)].pdf 2023-03-31
3 202341024758-FORM FOR SMALL ENTITY(FORM-28) [31-03-2023(online)].pdf 2023-03-31
4 202341024758-FORM 1 [31-03-2023(online)].pdf 2023-03-31
5 202341024758-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-03-2023(online)].pdf 2023-03-31
6 202341024758-EVIDENCE FOR REGISTRATION UNDER SSI [31-03-2023(online)].pdf 2023-03-31
7 202341024758-DRAWINGS [31-03-2023(online)].pdf 2023-03-31
8 202341024758-Request Letter-Correspondence [08-09-2023(online)].pdf 2023-09-08
9 202341024758-Power of Attorney [08-09-2023(online)].pdf 2023-09-08
10 202341024758-Form 1 (Submitted on date of filing) [08-09-2023(online)].pdf 2023-09-08
11 202341024758-Covering Letter [08-09-2023(online)].pdf 2023-09-08
12 202341024758-FORM 3 [25-10-2023(online)].pdf 2023-10-25
13 202341024758-DRAWING [22-03-2024(online)].pdf 2024-03-22
14 202341024758-CORRESPONDENCE-OTHERS [22-03-2024(online)].pdf 2024-03-22
15 202341024758-COMPLETE SPECIFICATION [22-03-2024(online)].pdf 2024-03-22
16 202341024758-STARTUP [08-04-2024(online)].pdf 2024-04-08
17 202341024758-FORM28 [08-04-2024(online)].pdf 2024-04-08
18 202341024758-FORM-9 [08-04-2024(online)].pdf 2024-04-08
19 202341024758-FORM 18A [08-04-2024(online)].pdf 2024-04-08
20 202341024758-FER.pdf 2024-05-22
21 202341024758-FORM 3 [20-08-2024(online)].pdf 2024-08-20
22 202341024758-FORM 3 [23-09-2024(online)].pdf 2024-09-23
23 202341024758-Proof of Right [15-11-2024(online)].pdf 2024-11-15
24 202341024758-PETITION UNDER RULE 137 [20-11-2024(online)].pdf 2024-11-20
25 202341024758-FORM 13 [20-11-2024(online)].pdf 2024-11-20
26 202341024758-OTHERS [22-11-2024(online)].pdf 2024-11-22
27 202341024758-FER_SER_REPLY [22-11-2024(online)].pdf 2024-11-22
28 202341024758-CLAIMS [22-11-2024(online)].pdf 2024-11-22
29 202341024758-US(14)-HearingNotice-(HearingDate-24-03-2025).pdf 2025-02-28
30 202341024758-Correspondence to notify the Controller [28-02-2025(online)].pdf 2025-02-28
31 202341024758-US(14)-ExtendedHearingNotice-(HearingDate-24-03-2025)-1600.pdf 2025-03-24
32 202341024758-Written submissions and relevant documents [03-04-2025(online)].pdf 2025-04-03
33 202341024758-Annexure [03-04-2025(online)].pdf 2025-04-03
34 202341024758-PatentCertificate14-05-2025.pdf 2025-05-14
35 202341024758-IntimationOfGrant14-05-2025.pdf 2025-05-14

Search Strategy

1 202341024758E_20-05-2024.pdf

ERegister / Renewals

3rd: 22 May 2025

From 31/03/2025 - To 31/03/2026