
System, Apparatus, And Method For Detection Of Pipeline Defects

Abstract: Disclosed is a data processing apparatus (106) including processing circuitry (120). The processing circuitry (120) is configured to select one of every five consecutive images of a set of testing images extracted from a testing video to generate a compressed set of images, attach a label of a plurality of labels to each image of the compressed set of images, segregate the compressed set of images into a plurality of groups based on the label attached to each image, identify one or more duplicate images and one or more misclassified images in each group of the plurality of groups, remove the one or more duplicate images and the one or more misclassified images from each group to generate a plurality of filtered groups of images, and combine the plurality of filtered groups of images to generate final classified data. FIG. 1 is selected.


Patent Information

Application #
Filing Date
15 December 2023
Publication Number
30/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SOLINAS INTEGRITY PRIVATE LIMITED
IITM Incubation Cell, Module 2, D Block, Third Floor Phase II, IITM Research Park, Chennai, Tamil Nadu, 600113, India

Inventors

1. Moumita Mukherjee
IITM Incubation Cell, Module 2, D Block, Third Floor Phase II, IITM Research Park, Chennai, Tamil Nadu, 600113, India
2. Rahul Subramonian Bama
IITM Incubation Cell, Module 2, D Block, Third Floor Phase II, IITM Research Park, Chennai, Tamil Nadu, 600113, India

Specification

Description:
TECHNICAL FIELD
The present disclosure relates to image processing, and more particularly to a system, an apparatus, and a method for detection of pipeline defects.
BACKGROUND
In pipelines, mechanical defects such as dents, cracks, gouges, and scratches are prevalent. Such defects can sometimes occur close to one another, resulting in a single cumulative defect in the pipe wall. Corrosion, aging pipelines, natural disasters, and excavation are a few of the leading causes of such defects in pipelines that result in leakages in the pipelines and cause loss of resources and pollute the environment.
The state of the art mainly focuses on image processing-based artificial intelligence techniques (specifically, computer-vision-based techniques) for the detection of pipeline defects. Such techniques rely on standard datasets for training classifier models to detect defects inside the pipelines. However, because the environmental conditions inside freshwater pipelines and sewer pipelines differ, the appearance of each type of defect also differs between the two types of pipeline. Further, contemporary systems fail to detect a cumulative defect in the pipeline that is formed by a combination of more than one defect. Such systems therefore detect defects inside the pipelines inaccurately and provide insufficient data on leakages caused by such defects. Such systems further impose a high computational load and are therefore computationally expensive.
Thus, to address the aforementioned problems, there remains a need for a technical solution to provide an accurate and resource-effective pipeline defect detection.
SUMMARY
In an aspect of the present disclosure, a data processing apparatus includes processing circuitry. The processing circuitry is configured to select one of every five consecutive images of a set of testing images to generate a compressed set of images. The processing circuitry is further configured to attach a label of a plurality of labels to each image of the compressed set of images. Furthermore, the processing circuitry is configured to segregate the compressed set of images into a plurality of groups based on the label attached to each image of the compressed set of images. Furthermore, the processing circuitry is configured to identify one or more duplicate images, when a count of consecutive images of a group of the plurality of groups is greater than or equal to four, and one or more misclassified images from each group of the plurality of groups, when a count of consecutive images of a group of the plurality of groups is less than four. Furthermore, the processing circuitry is configured to remove the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate a plurality of filtered groups of images. Furthermore, the processing circuitry is configured to combine the plurality of filtered groups of images to generate final classified data.
In some aspects, prior to the attachment of the label to each image of the compressed set of images, the processing circuitry is configured to receive a set of initial images from a sensing unit and segregate the set of initial images into a set of training images and a set of validation images. The set of training images includes randomly selected 80% of the images of the set of initial images, and the set of validation images includes the remaining 20% of the images of the set of initial images.
In some aspects, upon segregation of the set of initial images, the processing circuitry is configured to iteratively adjust a set of classification parameters of the processing circuitry based on the set of training images, and fine-tune the set of classification parameters of the processing circuitry based on the set of validation images to generate a trained classifier model.
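The 80/20 split described above can be sketched in Python (a minimal illustration; the function name and fixed seed are assumptions for reproducibility, not part of the disclosure):

```python
import random

def split_initial_images(initial_images, train_ratio=0.8, seed=0):
    """Randomly assign 80% of the labeled initial images to the
    training set and the remaining 20% to the validation set."""
    rng = random.Random(seed)           # fixed seed for repeatability
    shuffled = list(initial_images)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Because the validation set is simply the complement of the training set, every initial image is used exactly once, matching the segregation described above.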
In some aspects, to identify the one or more duplicate images of a group of the plurality of groups, the processing circuitry is configured to determine a mean value of the first and last frame numbers of the consecutive images of the group and assign the consecutive images of the group, except the image having a frame number equal to the mean value, as the one or more duplicate images of the group.
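This mean-frame rule can be sketched as follows (hypothetical function name; integer division and "closest to the mean" are assumptions for the case where the mean of the first and last frame numbers is fractional):

```python
def find_duplicates(frame_numbers):
    """Within one group of consecutive same-label images, keep only
    the image whose frame number is closest to the mean of the first
    and last frame numbers; all other images are duplicates."""
    mean = (frame_numbers[0] + frame_numbers[-1]) // 2
    keep = min(frame_numbers, key=lambda n: abs(n - mean))
    return [n for n in frame_numbers if n != keep]
```

For a group of frames 10 through 14, the mean of the first and last frame numbers is 12, so frames 10, 11, 13, and 14 are marked as duplicates.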
In some aspects, prior to the attachment of the label to each image of the compressed set of images, the processing circuitry is configured to determine a class of a plurality of classes for each image of the compressed set of images using the trained classifier model.
In some aspects, the processing circuitry is configured to receive the set of initial images and the set of testing images from an initial video and a testing video, respectively, captured by a sensing unit at a frame rate of 25 frames per second (fps).
In another aspect of the present disclosure, a method includes selecting, by way of processing circuitry, one of every five consecutive images of a set of testing images to generate a compressed set of images. The method further includes attaching, by way of the processing circuitry, a label of a plurality of labels to each image of the compressed set of images. Furthermore, the method includes segregating, by way of the processing circuitry, the compressed set of images into a plurality of groups based on the label attached to each image of the compressed set of images. Furthermore, the method includes identifying, by way of the processing circuitry, one or more duplicate images when a count of consecutive images of a group of the plurality of groups is greater than or equal to four and one or more misclassified images from each group of the plurality of groups when a count of consecutive images of a group of the plurality of groups is less than four. Furthermore, the method includes removing, by way of the processing circuitry, the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate a plurality of filtered groups of images. Furthermore, the method includes combining, by way of the processing circuitry, the plurality of filtered groups of images to generate final classified data.
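The claimed sequence can be illustrated end to end with a short Python sketch (the function names and the `classify` callable are hypothetical stand-ins for the trained classifier; the one-in-five selection and the threshold of four follow the description above):

```python
from itertools import groupby

def classify_video_frames(frames, classify, min_run=4):
    """End-to-end sketch: (1) keep one of every five frames,
    (2) label each kept frame with a hypothetical trained
    classifier, (3) group consecutive frames by label, (4) keep one
    representative from each run of min_run or more frames and drop
    shorter (misclassified) runs, (5) merge the survivors."""
    compressed = frames[::5]                         # 25 fps -> 5 fps
    labeled = [(i, classify(img)) for i, img in enumerate(compressed)]
    final = []
    for label, run in groupby(labeled, key=lambda t: t[1]):
        run = list(run)
        if len(run) >= min_run:                      # duplicates case
            mid = (run[0][0] + run[-1][0]) // 2
            final.append(min(run, key=lambda t: abs(t[0] - mid)))
        # runs shorter than min_run are treated as misclassified
    return final
```

A run of same-label frames long enough to pass the threshold is collapsed to its middle frame, so the final classified data contains one representative image per detected defect.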
BRIEF DESCRIPTION OF DRAWINGS
The drawing/s mentioned herein disclose exemplary aspects of the present disclosure. Other objects, features, and advantages of the present disclosure will be apparent from the following description when read with reference to the accompanying drawing.
FIG. 1 is a block diagram that illustrates a system for the detection of pipeline defects, in accordance with an aspect of the present disclosure;
FIG. 2 is a block diagram that illustrates a data processing apparatus of the system of FIG. 1, in accordance with an aspect of the present disclosure;
FIG. 3 is a flowchart that illustrates a method for the detection of pipeline defects, in accordance with an aspect of the present disclosure; and
FIG. 4 illustrates defects that are identified by the system of FIG. 1, in accordance with an aspect of the present disclosure.
To facilitate understanding, reference numerals have been used, where possible, to designate elements common to the figures.
DETAILED DESCRIPTION OF PREFERRED ASPECTS
Various aspects of the present disclosure provide a system, an apparatus, and a method for the detection of pipeline defects (specifically for fresh-water pipelines and sewer pipelines). The following description provides specific details of certain aspects of the disclosure illustrated in the drawings to provide a thorough understanding of those aspects. It should be recognized, however, that the present disclosure can be reflected in additional aspects and the disclosure may be practiced without some of the details in the following description.
The various aspects including the example aspects are now described more fully with reference to the accompanying drawings, in which the various aspects of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
It is understood that when an element is referred to as being “on,” “connected to,” or “coupled to” another element, it can be directly on, connected to, or coupled to the other element or intervening elements that may be present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The subject matter of example aspects, as disclosed herein, is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventor/inventors have contemplated that the presented subject matter might also be embodied in other ways, including different features or combinations of features similar to the ones described in this document, in conjunction with other technologies. As mentioned, there remains a need for a technical solution to provide accurate and resource-effective pipeline defect detection. Generally, the various aspects including the example aspects relate to the system, the apparatus, and the method for accurate and resource-effective detection of pipeline defects, specifically for freshwater pipelines and sewer pipelines.
Referring initially to the drawings, FIG. 1 illustrates a block diagram of a system 100 for defect detection, in accordance with an aspect of the present disclosure. The system 100 may include a sensing unit 102, a user device 104, and a data processing apparatus 106. The sensing unit 102 may be coupled to the user device 104 by way of one of a wired communication channel, a wireless communication channel, or a combination thereof. The user device 104 may be communicatively coupled to the data processing apparatus 106 by way of a communication network 108.
The sensing unit 102 may be configured to capture an initial video and a testing video. Specifically, the sensing unit 102 may be configured to capture the initial and testing videos at a frame rate of 25 frames per second (fps). The sensing unit 102 may further be configured to generate a set of initial images and a set of testing images from the captured initial video and testing video, respectively. The set of initial images may include a first plurality of freshwater pipeline images and a first plurality of sewer pipeline images. In an exemplary aspect of the present disclosure, a count of the first plurality of freshwater pipeline images and a count of the first plurality of sewer pipeline images may specifically be 12,000 and 11,500, respectively. In some aspects of the present disclosure, the testing video may be a live recorded video, recorded at a frame rate of 25 frames per second (fps), such that the set of testing images may be shared by the sensing unit 102 with the user device 104 in real time.
The user device 104 may include a user interface 110, a processing unit 112, a device memory 114, a console 116, and a communication interface 118. The user interface 110 may include an input interface (not shown) for receiving the initial video and the testing video. The input interface may further be configured to receive one or more inputs from a user. The one or more inputs may be associated with an attachment of one or more labels to the set of initial images and/or for registration of the user on the system 100. Examples of the input interface may include but are not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, or the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the input interface including known, related art, and/or later developed technologies.
The user interface 110 may further include an output interface (not shown) for displaying (or presenting) one or more outputs to the user. Specifically, the output interface may be configured to display (or present) one or more notifications, one or more acknowledgment signals, one or more alert signals, and one or more defect reports that may be generated and/or related to one or more operations of the system 100 for pipeline defect detection. Examples of the output interface may include but are not limited to, a digital display, an analog display, a touch screen display, a graphical user interface, a website, a webpage, a keyboard, a mouse, a light pen, an appearance of a desktop, and/or illuminated characters. Aspects of the present disclosure are intended to include and/or otherwise cover any type of the output interface including known and/or related, or later developed technologies.
The processing unit 112 may include suitable logic, instructions, circuitry, interfaces, and/or codes for executing various operations, such as the operations associated with the user device 104. In some aspects of the present disclosure, the processing unit 112 may utilize one or more processors such as Arduino or Raspberry Pi or the like. Further, the processing unit 112 may be configured to control one or more operations executed by the user device 104 in response to the one or more inputs received at the user interface 110 from the user. Examples of the processing unit 112 may include, but are not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a field-programmable gate array (FPGA), a Programmable Logic Control unit (PLC), a data center, and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the processing unit 112 including known, related art, and/or later developed processing units.
The processing circuitry 120 may be configured to generate a trained classifier model for the classification of one or more images of the set of training images (i.e., specifically selected and filtered from the training video at 25 fps) into a plurality of classes and attach a label to each image of the one or more images of the set of training images associated with the class of the plurality of classes detected for each image. The plurality of classes may be classified into first and second categories of classes such that the first category of classes may depict the status of freshwater pipelines and the second category of classes may depict the status of sewer pipelines. Preferably, the first and second categories of classes (i.e., associated with freshwater and sewer pipelines, respectively) may include nine and eight classes of defects, respectively. The first and second categories of classes may further include a non-defect class for the classification of images without any defect (in both freshwater and sewer pipelines). Examples of the defect classes may include but are not limited to, fracture surface damage, encrustation, ferrule, joint displacement, partial blockage, complete blockage, roots, sludge, stone, crack surface damage, and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the defect classes associated with pipeline defects without deviating from the scope of the present disclosure.
The device memory 114 may be configured to store the logic, instructions, circuitry, interfaces, and/or codes of the processing unit 112, data associated with the user device 104, and/or data associated with the system 100. Aspects of the present disclosure are intended to include or otherwise cover any type of the data without deviating from the scope of the present disclosure. Examples of the device memory 114 may include but are not limited to, a Read-Only Memory (ROM), a Random-Access Memory (RAM), a flash memory, a removable storage drive, a hard disk drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read Only Memory (PROM), an Erasable PROM (EPROM), and/or an Electrically EPROM (EEPROM). Aspects of the present disclosure are intended to include or otherwise cover any type of the device memory 114 including known, related art, and/or later developed memories.
The console 116 may be configured as computer-executable applications, to be executed by the processing unit 112. The one or more computer-executable applications corresponding to the console 116 may be stored in the device memory 114. Examples of the one or more computer-executable applications may include, but are not limited to, an audio application, a video application, a social media application, a navigation application, and the like. Aspects of the present disclosure are intended to include or otherwise cover any type of the computer-executable application including known, related art, and/or later developed computer-executable applications.
The communication interface 118 may be configured to enable the user device 104 to communicate with the data processing apparatus 106. Examples of the communication interface 118 may include but are not limited to, a modem, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication interface 118 may include any device and/or apparatus capable of providing wireless or wired communication between the user device 104 and the data processing apparatus 106.
The data processing apparatus 106 may be a network of computers, a software-hardware framework, or a combination thereof, that may provide a generalized approach to create the server implementation. Examples of the data processing apparatus 106 may include but are not limited to, personal computers, laptops, mini-computers, mainframe computers, and any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The data processing apparatus 106 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any web-application framework. The data processing apparatus 106 may include a plurality of processing engines (hereinafter interchangeably referred to and designated as ‘processing circuitry 120’) and a plurality of data repositories (hereinafter cumulatively referred to and designated as ‘database 122’) such that the processing circuitry 120 may be configured to perform one or more computational, logical and/or decision making operations of the data processing apparatus 106, and the database 122 may be configured to perform one or more data and/or instructions storage operations of the data processing apparatus 106.
The communication network 108 may include suitable logic, circuitry, and interfaces that may be configured to provide a plurality of network ports and a plurality of communication channels for the transmission and reception of data, signals, instructions, information, and the like related to operations of various entities of the system 100 (specifically, the user device 104 and the data processing apparatus 106). Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol version 4 (IPv4) address (or an IPv6 address), and the physical address may be a Media Access Control (MAC) address. The communication network 108 may be associated with an application layer for the implementation of communication protocols based on one or more communication requests from the user device 104 and the data processing apparatus 106. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Information Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof.
In an aspect of the present disclosure, the communication data may be transmitted or received via at least one communication channel of a plurality of communication channels in the communication network 108. The communication channels may include but are not limited to, a wireless channel, a wired channel, or a combination of wireless and wired channels. The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, and a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
In operation, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to receive the set of initial images/video (as the initial video at 25 fps) from the sensing unit 102. The data processing apparatus 106, by way of the processing circuitry 120 may further be configured to attach a label of a plurality of labels (i.e., associated with the plurality of classes) to each image of the set of initial images based on the one or more inputs of the user. The data processing apparatus 106, by way of the processing circuitry 120, may further be configured to segregate the set of initial images into a set of training images and a set of validation images. Upon segregation of the set of initial images, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to iteratively adjust a set of classification parameters of the processing circuitry 120 based on attached labels on the set of training images, and fine-tune the set of classification parameters of the processing circuitry 120 based on the attached labels on the set of validation images to generate a trained classifier model. Upon generation of the trained classifier model, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to receive the set of testing images (as the testing video at 25 fps) from the sensing unit 102. The data processing apparatus 106, by way of the processing circuitry 120, may be configured to select one of every five consecutive images of the set of testing images to generate a compressed set of images (i.e., the compressed set of images in combination may generate a compressed video at 5 fps). The selection of one of every five consecutive images of the set of testing images may advantageously reduce processing time of the processing circuitry 120. 
Upon generation of the compressed set of images, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to attach a label of the plurality of labels to each image of the compressed set of images. The data processing apparatus 106, by way of the processing circuitry 120, may further be configured to segregate the compressed set of images into a plurality of groups based on the label attached to each image of the compressed set of images. Furthermore, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to identify one or more duplicate images, when a count of consecutive images of a group of the plurality of groups is greater than or equal to four and one or more misclassified images from each group of the plurality of groups, when the count of consecutive images of a group of the plurality of groups is less than four. Upon determination of the one or more duplicate images and the one or more misclassified images, the data processing apparatus 106, by way of the processing circuitry 120, may be configured to remove the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate a plurality of filtered groups of images. The data processing apparatus 106, by way of the processing circuitry 120, may further be configured to combine the plurality of filtered groups of images to generate final classified data such that the final classified data may include one or more selected and filtered images of the set of compressed images with labels of the plurality of classes attached for detection of one or more pipeline defects captured by way of the testing video (i.e., the set of testing images).
FIG. 2 is a block diagram that illustrates the data processing apparatus 106 of system 100 of FIG. 1, in accordance with an aspect of the present disclosure. The data processing apparatus 106 may include the processing circuitry 120, the database 122, a network interface 200, and an input-output (I/O) interface 202 communicatively coupled to one another by way of a first communication bus 203.
In some aspects of the present disclosure, the processing circuitry 120 may include a data exchange engine 204, a registration engine 206, an authentication engine 208, a classification engine 210, a data processing engine 212, and a notification engine 214, that may be coupled to each other by way of a second communication bus 216.
The data exchange engine 204 may be configured to enable a transfer of data between the database 122 and various other engines of the processing circuitry 120. The data exchange engine 204 may further be configured to enable a transfer of data between various engines of the processing circuitry 120. Furthermore, the data exchange engine 204 may be configured to enable the transfer of data and/or instructions from the user device 104 and/or the sensing unit 102 (via the user device 104) to the data processing apparatus 106. In some aspects of the present disclosure, the data exchange engine 204 may be configured to receive the initial video (at 25 fps) and the testing video (at 25 fps) from the user device 104 by way of the I/O interface 202. The data exchange engine 204 may further be configured to receive, via the I/O interface 202, one or more inputs for registration of the user from the user device 104 and/or for attaching one or more labels to the set of initial images.
The registration engine 206 may be configured to enable the user to register into the system 100 by providing registration data through a registration menu (not shown) of the console 116 that may be displayed, by way of the user device 104.
The authentication engine 208, by way of the data exchange engine 204, may be configured to fetch the registration data of the user and authenticate the registration data of the user. The authentication engine 208, upon successful authentication of the registration data of the user, may be configured to enable the user to log in or sign up to the system 100. In some aspects of the present disclosure, the authentication engine 208 may enable the user to set password protection for logging in to the system 100. In such a scenario, the authentication engine 208 may be configured to verify a password entered by the user for logging in to the system 100 by comparing the entered password with the stored password. In some aspects, when the password entered by the user is verified by the authentication engine 208, the authentication engine 208 may enable the user to log in to the system 100. In some other aspects of the present disclosure, when the password entered by the user is not verified by the authentication engine 208, the authentication engine 208 may generate a signal for the notification engine 214 to generate a login-failure notification for the user.
Upon successful registration and authentication of the user, the data processing engine 212 may be configured to receive the initial video at 25 fps from the user device 104 (i.e., sensed by the sensing unit 102). The data processing engine 212 may further be configured to generate the set of initial images from the initial video such that a total of 25 images are generated from one second of the initial video. The data processing engine 212 may further be configured to attach a label to each image of the set of initial images based on the one or more inputs received from the user. The label may correspond to a class of the plurality of classes of defects associated with the pipelines (i.e., the freshwater and sewer pipelines). Upon attachment of the label to each image of the set of initial images, the data processing engine 212 may be configured to segregate the set of initial images into the set of training images and the set of validation images. In some aspects of the present disclosure, the set of training images may include randomly selected 80% of the images of the set of initial images, and the set of validation images may include the remaining 20% of the images of the set of initial images; however, the scope of the present disclosure is not limited thereto. In some other aspects of the present disclosure, the set of initial images may be segregated in any ratio that may be suitable for training of the classifier model (i.e., the set of training images and the set of validation images may be in any ratio), without deviating from the scope of the present disclosure.
Upon segregation of the set of initial images with one or more labels associated with the plurality of classes, the data processing engine 212 may be configured to provide the set of training images and the set of validation images to the classification engine 210. In some aspects of the present disclosure, the classification engine 210 may include a set of parameters that may correspond to the generation of the defect detection model. Preferably, the defect detection model may be a You Only Look Once version 5 (YOLOv5) model. The YOLOv5 model may be iteratively trained based on the set of training images. Specifically, the YOLOv5 model may be iteratively trained to facilitate identification and localization of defects and eventually extract a plurality of trained weights. The classification engine 210 may further be configured to amend one or more weights of the plurality of trained weights of the defect detection model based on the set of validation images. Thus, the classification engine 210 may advantageously enhance the defect detection accuracy for defects present in the pipelines.
In some aspects of the present disclosure, upon segregation of the set of initial images, the processing circuitry 120 may be configured to (i) train the defect detection model multiple times based on the set of training images and (ii) validate the trained model using the validation images to check the performance of the defect detection model. The model may be able to detect 10 classes of defects, with a severity grade, that may be present in both freshwater and sewer pipelines, such as encrustation, crack surface damage, fracture surface damage, stone, complete blockage, partial blockage, sludge, joint displacement, root blockage, and ferrule. The severity grade may be used to predict the remaining life span of the pipeline.
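For a YOLO-V5-based model, the ten defect classes above would typically be enumerated in a dataset configuration file consumed at training time. A hypothetical sketch of such a `data.yaml` is shown below; the directory paths are placeholders, not the applicant's actual dataset layout:

```yaml
# Hypothetical YOLOv5 dataset configuration (data.yaml); paths are placeholders.
train: datasets/pipeline/train/images   # set of training images
val: datasets/pipeline/val/images       # set of validation images
nc: 10                                  # number of defect classes
names: ['encrustation', 'crack surface damage', 'fracture surface damage',
        'stone', 'complete blockage', 'partial blockage', 'sludge',
        'joint displacement', 'root blockage', 'ferrule']
```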
Upon generation of the trained classifier model, the classification engine 210 may be configured to generate a training completion signal for the data processing engine 212. The data processing engine 212, upon reception of the training completion signal, may further be configured to receive the testing video at 25 fps from the user device 104 (i.e., sensed by the sensing unit 102). The data processing engine 212 may further be configured to generate the set of testing images from the testing video such that a total of 25 images are generated from one second of the testing video. Upon generation of the set of testing images, the data processing engine 212 may be configured to select one of every five consecutive images of the set of testing images to generate the compressed set of images, which reduces the count of images to be processed by 80% and thereby increases the processing speed of the model (as the frame rate drops from 25 fps to 5 fps). The data processing engine 212 may further be configured to provide the compressed set of images to the classification engine 210.
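The one-in-five frame selection above amounts to simple stride-based downsampling. A minimal sketch (the helper name `compress_frames` is hypothetical):

```python
def compress_frames(frames, keep_every=5):
    """Select one of every `keep_every` consecutive frames.

    For a 25 fps testing video this drops the effective frame rate to
    5 fps, i.e. an 80% reduction in the number of images to process.
    """
    return frames[::keep_every]
```

For one second of 25 fps video, `compress_frames` returns 5 of the 25 frames.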
Upon reception of the compressed set of testing images/video, the classification engine 210 may be configured to detect the defect classes present in each test image using the trained classifier model weights. The classification engine 210 may further be configured to attach a label of a plurality of labels to each image of the compressed set of images such that the label may correspond to a class of the plurality of classes. The classification engine 210 may further be configured to provide the defect-detected compressed set of images with the attached labels to the data processing engine 212.
Upon reception of the compressed set of images with attached labels (i.e., post classification of the compressed set of images), the data processing engine 212 may be configured to identify the one or more duplicate images when a count of consecutive images of a group of the plurality of groups is greater than or equal to four. The data processing engine 212 may further be configured to identify the one or more misclassified images from each group of the plurality of groups when the count of consecutive images of a group of the plurality of groups is less than four. In some aspects of the present disclosure, to identify the one or more duplicate images of the group of the one or more groups, the data processing engine 212 may be configured to determine a mean value of the first and last frame numbers of the cumulative images of the group. The data processing engine 212 may further be configured to assign the consecutive images of the group, except the image having a frame number equal to the mean value, as the one or more duplicate images of the group, such that only one image may be selected from each group and the other images (i.e., the duplicate images of the group) can be discarded. By discarding the duplicate images, the data processing engine 212 efficiently reduces the storage occupancy of the database 122. Upon identification of the one or more misclassified images and the one or more duplicate images, the data processing engine 212 may be configured to remove the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate the plurality of filtered groups of images. In some aspects of the present disclosure, the filtered group of images may include images specific to at least one class (or defect in the pipelines) that may be detected by the processing circuitry 120.
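The duplicate-versus-misclassified logic above can be sketched as follows. This is a simplified illustration that assumes frames arrive as ordered `(frame_number, label)` pairs and that a "group" is a run of consecutive frames sharing a label; the helper name `filter_detections` and the rounding of a non-integer mean frame number are assumptions of the sketch:

```python
from itertools import groupby

def filter_detections(labelled_frames, min_run=4):
    """labelled_frames: list of (frame_no, label) pairs ordered by frame number.

    Runs of >= min_run consecutive same-label frames are treated as one
    defect: keep only the frame whose number equals (or is nearest to) the
    mean of the run's first and last frame numbers, and discard the rest
    as duplicates. Shorter runs are treated as misclassified and dropped.
    """
    kept = []
    for _label, run in groupby(labelled_frames, key=lambda fl: fl[1]):
        run = list(run)
        if len(run) < min_run:
            continue  # misclassified: discard the whole run
        mean_no = round((run[0][0] + run[-1][0]) / 2)
        # keep the single frame closest to the mean frame number
        kept.append(min(run, key=lambda fl: abs(fl[0] - mean_no)))
    return kept
```

The surviving pairs, one per detected defect, correspond to the plurality of filtered groups that are then combined into the final classified data.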
The data processing engine 212 may further be configured to combine the plurality of filtered groups of images to generate the final classified data.
Upon generation of the final classified data that may depict at least one class of the plurality of classes associated with defects of the pipelines, the notification engine 214 may be configured to generate one or more notification signals to notify the user via the output interface of the user device 104.
In some aspects of the present disclosure, the database 122 may include an instructions repository 216, a training data repository 218, a testing data repository 220, and a defect data repository 222. The instructions repository 216 may be configured to store one or more instructions of the various components of the data processing apparatus 106. The training data repository 218 may be configured to store data and/or metadata of the data utilized by the classification engine 210 to train the classifier model for the classification of the testing images into the plurality of classes, and the like. Specifically, the training data repository 218 may be configured to store the initial video at 25 fps, the set of training images, and the set of validation images.
The testing data repository 220 may be configured to store data to be classified into the plurality of classes by the classifier model. Specifically, the testing data repository 220 may be configured to store therein the testing video at 25 fps, the set of testing images, and the compressed set of images (i.e., 5 out of 25 frames of each second of the testing video). The defect data repository 222 may be configured to store the final classified data generated by the processing circuitry 120. The defect data repository 222 may further be configured to store therein one or more signals generated and/or received by the data processing apparatus 106.
The network interface 200 may include suitable logic, circuitry, and interfaces that may be configured to establish and enable communication between the data processing apparatus 106 and the user device 104 via the communication network 108. The network interface 200 may be implemented by use of various known technologies to support wired or wireless communication of the data processing apparatus 106 with the communication network 108. The network interface 200 may include, but is not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and a local buffer circuit.
The I/O interface 202 may include suitable logic, circuitry, interfaces, and/or code that may be configured to receive inputs (e.g., orders) and transmit outputs via a plurality of data ports in the data processing apparatus 106. The I/O interface 202 may include various input and output data ports for different I/O devices. Examples of such I/O devices may include, but are not limited to, a touch screen, a keyboard, a mouse, a joystick, a projector, an audio output, a microphone, an image-capture device, a liquid crystal display (LCD) screen, and/or a speaker.
FIG. 3 is a flowchart that illustrates a method 300 for detection of pipeline defects, in accordance with an aspect of the present disclosure.
At step 302, the data processing apparatus 106 may receive the set of initial images from the sensing unit 102 via the user device 104.
In some aspects of the present disclosure, at step 302, the data processing apparatus 106 may receive the initial video (at 25 fps) from the user device 104 and may generate the set of initial images from the initial video.
At step 304, the data processing apparatus 106 may segregate the set of initial images into the set of training images and the set of validation images.
In some aspects of the present disclosure, the set of training images may include a randomly selected 80% of the images of the set of initial images, and the set of validation images may include the remaining 20% of the images of the set of initial images. In some aspects of the present disclosure, the data processing engine may use the set of training images and the set of validation images for training and validation of the detection model, respectively.
In some aspects of the present disclosure, the data processing apparatus 106 may receive the training video (at 25 fps) from the user device 104 and may generate the set of training images from the training video.
At step 306, the data processing apparatus 106 may generate the trained classifier model by iterative adjustment of the set of classification parameters based on the sets of training and validation images.
In some aspects of the present disclosure, to generate the trained classifier model, the data processing apparatus 106 may iteratively adjust a set of classification parameters of the processing circuitry 120 based on the set of training images. The data processing apparatus 106 may further fine-tune the set of classification parameters of the processing circuitry 120 based on the set of validation images.
At step 308, the data processing apparatus 106 may receive the set of testing images/videos from the sensing unit 102 via the user device 104.
At step 310, the data processing apparatus 106 may select one of every five consecutive images of the set of testing images to generate the compressed set of images (i.e., down-sample the set of testing images from 25 fps to 5 fps to generate the compressed set of images).
At step 312, the data processing apparatus 106 may determine a class of the plurality of classes for each image of the compressed set of images using the trained classifier model.
At step 314, the data processing apparatus 106 may attach a label of the plurality of labels to each image of the compressed set of images.
At step 316, the data processing apparatus 106 may segregate the compressed set of images into the plurality of groups based on the label attached to each image of the compressed set of images.
At step 318, the data processing apparatus 106 may identify the one or more duplicate images when the count of consecutive images of a group of the plurality of groups is greater than or equal to four. The data processing apparatus 106 may identify the one or more misclassified images from each group of the plurality of groups when the count of consecutive images of the group of the plurality of groups is less than four.
In some aspects of the present disclosure, to identify the one or more duplicate images of the group, the data processing apparatus 106 may determine the mean value of first and last frame numbers of the cumulative images of the group.
At step 320, the data processing apparatus 106 may remove the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate the plurality of filtered groups of images.
At step 322, the data processing apparatus 106 may combine the plurality of filtered groups of images to generate the final classified data. In some aspects of the present disclosure, the final classified data may include one image associated with each defect location in the pipelines detected by the data processing apparatus 106.
FIG. 4 illustrates defects 400 that may be identified by the system 100 of FIG. 1, in accordance with an aspect of the present disclosure. The defects 400 that may be identified by the system 100 may include a stone 402, a root blockage 404, a sludge accumulation 406, a ferrule 408, a fracture 410, an encrustation 412, a joint displacement 414, and a surface damage (surface degradation) 416. Specifically, the system 100 may be configured to identify any of these defects in the pipeline. For example, the system 100 may be configured to identify the stone 402 in the pipeline; in other examples, the system 100 may be configured to identify the root blockage 404, the sludge accumulation 406, the ferrule 408, the fracture 410, the encrustation 412, the joint displacement 414, or the surface damage 416 in the pipeline.
As mentioned hereinabove, there remains a need for a technical solution that provides accurate and resource-effective pipeline defect detection. The system 100, by way of the data processing apparatus 106 and through the method 300, provides accurate and resource-effective pipeline defect detection. The system 100 provides a customized dataset (i.e., specifically designed for defect detection in freshwater and sewer pipelines) used for training and validating the classifier model. The data processing apparatus 106, responsible for classification of the set of testing images (i.e., the testing dataset), enhances the classification accuracy for detection of the defects present in the pipelines by way of training and fine-tuning the set of classification parameters. The data processing apparatus 106 further reduces the count of images to be processed by 80% (as the frame rate drops from 25 fps to 5 fps) and thus reduces the computational and storage load of the classifier engine by 80%. Furthermore, the data processing apparatus 106 discards (or removes) the one or more misclassified images and the one or more duplicate images from the classified testing dataset, thereby efficiently reducing the storage occupancy of the database 122 and thus the storage load of the system 100.
The foregoing discussion of the present disclosure has been presented for purposes of illustration and description. It is not intended to limit the present disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the present disclosure are grouped together in one or more aspects or configurations for the purpose of streamlining the disclosure. The features of the aspects or configurations may be combined in alternate aspects or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the present disclosure requires more features than are expressly recited in each aspect. Rather, as the following aspects reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect or configuration. Thus, the following aspects are hereby incorporated into this Detailed Description, with each aspect standing on its own as a separate aspect of the present disclosure.
Moreover, though the description of the present disclosure has included a description of one or more aspects or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the present disclosure, e.g., as may be within the skill and knowledge of those in the art after understanding the present disclosure. It is intended to obtain rights which include alternative aspects or configurations to the extent permitted, including alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps to those disclosed, whether or not such alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
As one skilled in the art will appreciate, the system 100 includes a number of functional blocks in the form of a number of units and/or engines. The functionality of each unit and/or engine goes beyond merely executing one or more computer algorithms to carry out one or more procedures and/or methods in a predefined sequential manner; rather, each unit and/or engine contributes one or more objectives to the overall functionality of the system 100. Each unit and/or engine may not be limited to an algorithmic and/or coded form, but rather may be implemented by way of one or more hardware elements operating together to achieve one or more objectives contributing to the overall functionality of the system 100. Further, as will be readily apparent to those skilled in the art, all the steps, methods, and/or procedures of the system 100 are generic and procedural in nature and are not specific and sequential.
Certain terms are used throughout the following description and aspects to refer to particular features or components. As one skilled in the art will appreciate, different persons may refer to the same feature or component by different names. This document does not intend to distinguish between components or features that differ in name but not structure or function. While various aspects of the present disclosure have been illustrated and described, it will be clear that the present disclosure is not limited to these aspects only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present disclosure.

Claims:

1. A data processing apparatus (106) comprising:
processing circuitry (120) configured to:
select one of every five consecutive images of a set of testing images to generate a compressed set of images;
attach a label of a plurality of labels to each image of the compressed set of images;
segregate the compressed set of images into a plurality of groups based on the label attached to each image of the compressed set of images;
identify one or more duplicate images when a count of consecutive images of a group of the plurality of groups is greater than or equal to four and one or more misclassified images from each group of the plurality of groups when the count of consecutive images of a group of the plurality of groups is less than four;
remove the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate a plurality of filtered groups of images; and
combine the plurality of filtered groups of images to generate final classified data.

2. The data processing apparatus (106) as claimed in claim 1, wherein, prior to the attachment of the label to each image of the compressed set of images, the processing circuitry (120) is configured to (i) receive a set of initial images from a sensing unit (102) and (ii) segregate the set of initial images into a set of training images and a set of validation images, wherein the set of training images comprises randomly selected 80% of the images of the set of initial images, and the set of validation images comprises the 20% of the images of the set of initial images other than the set of training images.
3. The data processing apparatus (106) as claimed in claim 2, wherein, upon segregation of the set of initial images, the processing circuitry (120) is configured to (i) iteratively adjust a set of classification parameters of the processing circuitry (120) based on the set of training images, and (ii) fine-tune the set of classification parameters of the processing circuitry (120) based on the set of validation images to generate a trained classifier model.

4. The data processing apparatus (106) as claimed in claim 1, wherein, to identify the one or more duplicate images of the group of the one or more groups, the processing circuitry (120) is configured to (i) determine a mean value of first and last frame numbers of the cumulative images of the group and (ii) assign the consecutive images of the group, except the image having a frame number equal to the mean value, as the one or more duplicate images of the group.

5. The data processing apparatus (106) as claimed in claim 2, wherein, prior to the attachment of the label to each image of the compressed set of images, the processing circuitry (120) is configured to determine a class of a plurality of classes for each image of the compressed set of images using the trained classifier model.

6. The data processing apparatus (106) as claimed in claim 2, wherein the processing circuitry (120) is configured to receive the set of initial images and the set of testing images from an initial video and a testing video, respectively, captured by a sensing unit (102) at a frame resolution of 25 frames per second (fps).

7. A method (300) comprising:
selecting, by way of processing circuitry (120), one of every five consecutive images of a set of testing images to generate a compressed set of images;
attaching, by way of the processing circuitry (120), a label of a plurality of labels to each image of the compressed set of images;
segregating, by way of the processing circuitry (120), the compressed set of images into a plurality of groups based on the label attached to each image of the compressed set of images;
identifying, by way of the processing circuitry (120), one or more duplicate images when a count of consecutive images of a group of the plurality of groups is greater than or equal to four and one or more misclassified images from each group of the plurality of groups when the count of consecutive images of a group of the plurality of groups is less than four;
removing, by way of the processing circuitry (120), the one or more duplicate images and the one or more misclassified images from each group of the plurality of groups to generate a plurality of filtered groups of images; and
combining, by way of the processing circuitry (120), the plurality of filtered groups of images to generate final classified data.

8. The method (300) as claimed in claim 7, wherein, prior to attaching the label to each image of the compressed set of images, the method (300) comprises (i) receiving, by way of the processing circuitry (120), a set of initial images from a sensing unit (102) and (ii) segregating, by way of the processing circuitry (120), the set of initial images into a set of training images and a set of validation images, wherein the set of training images comprises randomly selected 80% of the images of the set of initial images, and the set of validation images comprises the 20% of the images of the set of initial images other than the set of training images.

9. The method (300) as claimed in claim 8, wherein, upon segregating the set of initial images, the method (300) comprises (i) iteratively adjusting, by way of the processing circuitry (120), a set of classification parameters of the processing circuitry (120) based on the set of training images, and (ii) fine-tuning, by way of the processing circuitry (120), the set of classification parameters of the processing circuitry (120) based on the set of validation images to generate a trained classifier model.
10. The method (300) as claimed in claim 7, wherein, for identifying the one or more duplicate images of the group of the one or more groups, the method (300) comprises (i) determining, by way of the processing circuitry (120), a mean value of first and last frame numbers of the cumulative images of the group and (ii) assigning, by way of the processing circuitry (120), the consecutive images of the group, except the image having a frame number equal to the mean value, as the one or more duplicate images of the group.

11. The method (300) as claimed in claim 8, wherein, prior to the attachment of the label to each image of the compressed set of images, the method (300) comprises determining, by way of the processing circuitry (120), a class of a plurality of classes for each image of the compressed set of images using the trained classifier model.

12. The method (300) as claimed in claim 8, further comprising receiving, by way of the processing circuitry (120), the set of initial images and the set of testing images from an initial video and a testing video, respectively, captured by a sensing unit (102) at a frame resolution of 25 frames per second (fps).

Documents

Application Documents

# Name Date
1 202341085846-STATEMENT OF UNDERTAKING (FORM 3) [15-12-2023(online)].pdf 2023-12-15
2 202341085846-FORM FOR STARTUP [15-12-2023(online)].pdf 2023-12-15
3 202341085846-FORM FOR SMALL ENTITY(FORM-28) [15-12-2023(online)].pdf 2023-12-15
4 202341085846-FORM 1 [15-12-2023(online)].pdf 2023-12-15
5 202341085846-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-12-2023(online)].pdf 2023-12-15
6 202341085846-EVIDENCE FOR REGISTRATION UNDER SSI [15-12-2023(online)].pdf 2023-12-15
7 202341085846-DRAWINGS [15-12-2023(online)].pdf 2023-12-15
8 202341085846-DECLARATION OF INVENTORSHIP (FORM 5) [15-12-2023(online)].pdf 2023-12-15
9 202341085846-COMPLETE SPECIFICATION [15-12-2023(online)].pdf 2023-12-15
10 202341085846-FORM-26 [18-03-2024(online)].pdf 2024-03-18
11 202341085846-Proof of Right [11-06-2024(online)].pdf 2024-06-11
12 202341085846-FORM-9 [19-07-2024(online)].pdf 2024-07-19
13 202341085846-STARTUP [23-07-2024(online)].pdf 2024-07-23
14 202341085846-FORM28 [23-07-2024(online)].pdf 2024-07-23
15 202341085846-FORM 18A [23-07-2024(online)].pdf 2024-07-23
16 202341085846-Covering Letter [26-08-2024(online)].pdf 2024-08-26
17 202341085846-FER.pdf 2024-09-05
18 202341085846-FORM 3 [26-09-2024(online)].pdf 2024-09-26
19 202341085846-FER_SER_REPLY [31-12-2024(online)].pdf 2024-12-31
20 202341085846-RELEVANT DOCUMENTS [22-08-2025(online)].pdf 2025-08-22
21 202341085846-RELEVANT DOCUMENTS [22-08-2025(online)]-1.pdf 2025-08-22
22 202341085846-POA [22-08-2025(online)].pdf 2025-08-22
23 202341085846-POA [22-08-2025(online)]-1.pdf 2025-08-22
24 202341085846-FORM 13 [22-08-2025(online)].pdf 2025-08-22
25 202341085846-FORM 13 [22-08-2025(online)]-1.pdf 2025-08-22
26 202341085846-AMENDED DOCUMENTS [22-08-2025(online)].pdf 2025-08-22
27 202341085846-AMENDED DOCUMENTS [22-08-2025(online)]-1.pdf 2025-08-22
28 202341085846-US(14)-HearingNotice-(HearingDate-08-12-2025).pdf 2025-11-06

Search Strategy

1 searchdoc-GoogleDocsE_04-09-2024.pdf
2 202341085846_SearchStrategyAmended_E_SearchStrategyAE_24-10-2025.pdf