
A Smart Road Asset Management System And Method Thereof

Abstract: The present disclosure relates to image processing techniques and, more particularly, to a method and system for managing road assets using a smart road asset management system. The system may capture real time images of road infrastructure using one or more image capturing units and classify the real time images into road asset categories based on an image classifier model. The system may determine a fault associated with each of the classified real time images, based on predefined image processing rules, and determine dimensional parameters associated with the determined fault using one or more sensing units. Further, the system may predict the overall material required for rectifying the determined fault based on the determined dimensional parameters and output the predicted overall material on a user interface of an electronic device. The system may also estimate an overall cost value for the predicted overall material required for rectifying the determined fault.


Patent Information

Application #
Filing Date
30 December 2020
Publication Number
01/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2023-07-22
Renewal Date

Applicants

Indian Institute of Science
C V Raman Road, Bangalore -560012, Karnataka, India.

Inventors

1. VERMA, Ashish
Associate Professor, Transportation Systems Engg. (TSE), Dept. of Civil Engg., Indian Institute of Science (IISc), Bangalore - 560012, Karnataka, India.
2. TIWARI, Aruna
Associate Professor, Computer Science and Engineering, Indian Institute of Technology Indore (IITI), Khandwa Road, Simrol, Indore - 453552, Madhya Pradesh, India.
3. KUMAR, Neetesh
Assistant Professor, Department of Computer Science and Engineering, Indian Institute of Technology Roorkee (IITR), Uttarakhand -247667, India.
4. PATIDAR, Sanjay
Assistant Professor, Department of Software Engineering, Delhi Technological University (DTU), Bawana Road, Shahbad Daulatpur Village, Rohini, Delhi -110042, India.
5. SINGH, Upendra
CEO, Innovation House Technologies Private Limited, Delhi -110040, India.

Specification

TECHNICAL FIELD
[0001] The present disclosure relates to image processing techniques. More particularly, the present disclosure relates to a method and system for managing road assets using a smart road asset management system.

BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Generally, road asset management may be based on an analysis of road data related to inventory, condition, traffic, unit costs, road deterioration models, and the like. The data may be entered into a conventional Road Asset Management System (RAMS) that may allow the data to be analysed, and optimal budget levels and allocations to be determined.
[0004] However, road asset management may not be trivial in developing countries, since faults in bitumen and concrete roads constitute a significant problem for both citizens and government. For instance, potholes, bleeding, block cracks, edge cracks, longitudinal cracks, ravelling and transverse cracks can cause severe damage to vehicles, such as flat tyres, scratches, dents, leaks, and the like. Generally, estimation of the dimensions of potholes, bleeding, block cracks, edge cracks, longitudinal cracks, ravelling, transverse cracks, and the like is carried out manually by the concerned agencies, which in turn requires more manpower, equipment, time and cost.
[0005] Therefore, there is a need for a smart system that can accurately detect cracks along with their appropriate dimensions, which helps in estimating the actual cost of maintaining these cracks and thereby assists in efficient road asset management, with definitive information at a dedicated place that is accessible to all partakers in the task.

OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0007] An object of the present disclosure is to provide a method and a system for managing road asset.
[0008] An object of the present disclosure is to provide a method and a system to automatically detect and classify all types of road cracks or faults in a road infrastructure from bitumen and concrete road images.
[0009] An object of the present disclosure is to provide a method and a system for managing highway maintenance automatically by determining road cracks.
[0010] An object of the present disclosure is to provide a method and a system for determining one or more dimensional parameters of the crack or road infrastructure.
[0011] An object of the present disclosure is to provide a method and a system for automatically classifying roads based on material with which roads are constructed.
[0012] An object of the present disclosure is to provide a method and a system for estimating overall cost value required for the predicted overall material required for rectifying the determined fault.

BRIEF DESCRIPTION OF THE DRAWINGS
[0013] In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[0014] FIG. 1 illustrates exemplary network architecture in which or with which proposed system can be implemented in accordance with an embodiment of the present disclosure.
[0015] FIG. 2A illustrates an exemplary architecture of a proposed system, in accordance with an embodiment of the present disclosure.
[0016] FIG. 2B illustrates an exemplary schematic diagram of a scenario of implementation of the network architecture in a vehicle, in accordance with an embodiment of the present disclosure.
[0017] FIG. 3A illustrates an exemplary flow diagram depicting method for classifying road asset categories, in accordance with an embodiment of the present disclosure.
[0018] FIG. 3B illustrates an exemplary flow diagram depicting method for estimating cost, in accordance with an embodiment of the present disclosure.
[0019] FIG. 4 illustrates an exemplary flow chart depicting a method for managing road asset, in accordance with an embodiment of the present disclosure.
[0020] FIG. 5 illustrates a hardware platform 500 for implementation of the disclosed system 110, according to an example embodiment of the present disclosure.

DETAILED DESCRIPTION
[0021] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0022] The present disclosure relates to image processing techniques. More particularly, the present disclosure relates to method and system for managing road asset using smart road asset management system.
[0023] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[0024] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. These exemplary embodiments are provided only for illustrative purposes and so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. The invention disclosed may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Various modifications will be readily apparent to persons skilled in the art. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed.
[0025] Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system.
[0026] Embodiments of the present disclosure provide a method and a system for managing road assets. The present disclosure provides a method and a system to automatically detect and classify all types of road cracks or faults in a road infrastructure from bitumen and concrete road images. The present disclosure provides a method and a system for managing highway maintenance automatically by determining road cracks. The present disclosure provides a method and a system for determining one or more dimensional parameters of the crack or road infrastructure. The present disclosure provides a method and a system for automatically classifying roads based on the material with which the roads are constructed. The present disclosure provides a method and a system for estimating the overall cost value required for the predicted overall material required for rectifying the determined fault.
[0027] Embodiments herein reduce the time spent on regular maintenance of bitumen and concrete roads and prevent road accidents occurring due to unwanted cracks on bitumen and concrete roads. Embodiments herein prevent damage to vehicles due to cracks on bitumen and concrete roads. Embodiments herein provide a system that can operate both in the air, such as on a drone, and on land, such as a system equipped on vehicles, and thereby help in the efficient maintenance of bitumen and concrete roads. Embodiments herein provide an application that gives information about road cracks to the driver of a vehicle driving on a particular bitumen or concrete road and thereby provides driving assistance. Embodiments herein provide a cloud-based dataset that contains information about road cracks on a particular bitumen or concrete road.
[0028] FIG. 1 illustrates exemplary network architecture 100 in which or with which proposed system 110 can be implemented, in accordance with an embodiment of the present disclosure. As illustrated, the network architecture 100 may include one or more sensing units 102-1, 102-2, …, 102-N (collectively referred to as sensing units 102 and individually referred to as sensing unit 102), one or more image capturing units 104-1, 104-2, …, 104-N (collectively referred to as image capturing units 104 and individually referred to as image capturing unit 104), an electronic device 108, a system 110, and a centralized server 118. In an embodiment, the network architecture 100 may include one or more power supply units (not shown in FIG. 1) that can be, but are not limited to, an electrical power supply, one or more batteries, and any other power source. The sensing units 102 and the image capturing units 104 may be powered using the aforementioned one or more power supply units. Further, the sensing units 102 and the image capturing units 104 may be implemented in, but are not limited to, vehicles, autonomous vehicles, instrumented vehicles, unmanned aerial vehicles, airplanes, drones, satellites, ground penetrating radars, traffic speed deflectometers, smartphones, accelerometers, non-nuclear density gauges, vibro-acoustic signatures, road infrastructures, and the like.
[0029] The electronic device 108 may be connected to the image capturing units 104 via a communication network 106. The system 110 may be connected to the centralized server 118 via the communication network 106. The centralized server 118 may include, but is not limited to, a stand-alone server, a remote server, a cloud computing server, a dedicated server, a rack server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof, and the like. The communication network 106 may be a wired communication network or a wireless communication network. The wireless communication network may be any wireless communication network capable of transferring data between entities of that network, such as, but not limited to, a carrier network including a circuit switched network, a public switched network, a Content Delivery Network (CDN), a Long-Term Evolution (LTE) network, a Global System for Mobile Communications (GSM) network, a Universal Mobile Telecommunications System (UMTS) network, the Internet, intranets, local area networks, wide area networks, mobile communication networks, combinations thereof, and the like.
[0030] In one instance, the system 110 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. For instance, the system 110 may be implemented by way of a standalone device such as the centralized server 118, and the like, and may be communicatively coupled to the electronic device 108. In another instance, the system 110 may be implemented in or associated with the electronic device 108. The electronic device 108 may be any electrical, electronic, electromechanical or computing device. The electronic device 108 may include, but is not limited to, a mobile device, a smart phone, a Personal Digital Assistant (PDA), a tablet computer, a phablet computer, a wearable device, a Virtual Reality/Augmented Reality (VR/AR) device, a laptop, a desktop, and the like. The system 110 may be implemented in hardware or a suitable combination of hardware and software. Further, the system 110 may include a processor 112, an Input/Output (I/O) interface 114, and a memory 116. The Input/Output (I/O) interface 114 on the system 110 may be used to receive input from the image capturing units 104.
[0031] Further, the system 110 may also include other units such as a user interface, a display unit, an input unit, an output unit, and the like; however, the same are not shown in FIG. 1, for the purpose of clarity. Also, in FIG. 1 only a few units are shown; however, the system 110 may include multiple such units or the system 110 may include any such number of the units, obvious to a person skilled in the art or as required to implement the features of the present disclosure. The system 110 may be a hardware device including the processor 112 executing machine-readable program instructions to manage road assets. Execution of the machine-readable program instructions by the processor 112 may enable the proposed system 110 to manage road assets. The "hardware" may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field programmable gate array, a digital signal processor, or other suitable hardware. The "software" may comprise one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code or other suitable software structures operating in one or more software applications or on one or more processors. The processor 112 may include, for example, but is not limited to, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, any devices that manipulate data or signals based on operational instructions, and the like. Among other capabilities, the processor 112 may fetch and execute computer-readable instructions in the memory 116 operationally coupled with the system 110 for performing tasks such as data processing, input/output processing, feature extraction, image processing, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being or that may be performed on data.
[0032] In an embodiment, the system 110 may capture real time images of a road infrastructure using one or more image capturing units 104. The road infrastructure may include, but is not limited to, roads, pavements, footpaths, crosswalks, dividers, barricades, flyovers, sky-walks, earthworks, drainages, structures such as culverts, bridges, buildings, and the like. Further, the system 110 may classify the real time images into one or more road asset categories based on an image classifier model. For classifying the real time images into one or more road asset categories based on the image classifier model, the system 110 may extract one or more features associated with the captured real time images of the road infrastructure. Further, the system 110 may classify the extracted one or more features into one or more road asset categories by applying the extracted one or more features onto a trained image classifier model. In an embodiment, the trained image classifier model may include, but is not limited to, a mask regional convolutional neural network, computer vision methods, and the like. Further, the system 110 may assign a class label for each of the classified one or more features based on a type of the one or more road asset categories.
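As an illustrative sketch of the classification step above (feature extraction followed by a trained classifier assigning a class label), the following minimal example labels an image feature vector by majority vote among its nearest neighbours. The feature values, category names, and the `classify` helper are hypothetical stand-ins; the disclosure contemplates a trained model such as a mask regional convolutional neural network rather than this toy classifier.

```python
from collections import Counter
from math import dist

# Hypothetical training set: (feature vector, road asset category) pairs.
# In the proposed system, such features would be extracted from real time
# images captured by the image capturing units 104.
TRAINING_SET = [
    ((0.9, 0.1), "bitumen_road"),
    ((0.8, 0.2), "bitumen_road"),
    ((0.2, 0.9), "concrete_road"),
    ((0.1, 0.8), "concrete_road"),
]

def classify(features, k=3):
    """Assign a class label by majority vote among the k nearest neighbours."""
    nearest = sorted(TRAINING_SET, key=lambda item: dist(item[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

label = classify((0.85, 0.15))  # features resembling the bitumen examples
```

In practice a convolutional model would both classify the image and localize the fault region; the voting step here only illustrates how a class label can be assigned per image.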
[0033] Furthermore, the system 110 may determine a fault associated with each of the classified real time images, based on one or more predefined image processing rules. The fault may include, but is not limited to, cracking, rutting, ravelling, roughness, structural evaluation, layer thicknesses, moduli, shear wave velocity, deflection, distress identification, materials modelling, a road pothole, bleeding, a block crack, an edge crack, a longitudinal crack, a transverse crack, and the like. For determining the fault associated with each of the classified real time images based on the one or more predefined image processing rules, the system 110 may compare each of the classified real time images with the corresponding one or more prestored road images. Further, the system 110 may determine a deviation in each of the classified real time images based on an output of the comparison. Furthermore, the system 110 may determine whether the deviation in each classified real time image amounts to a fault, based on the one or more predefined image processing rules. Further, the system 110 may declare the classified real time image as comprising the fault if the deviation amounts to the fault. In an embodiment, the system 110 may perform root-cause analysis for the determined fault based on a trained neural network model, other models based on requirements, and the like. Further, the system 110 may predict the overall material required for rectifying the determined fault based on the performed root-cause analysis.
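The comparison-based fault determination described above can be sketched as follows. This toy example treats images as equal-sized grey-level grids and declares a fault when the fraction of deviating pixels exceeds a threshold; both the per-pixel tolerance and the 10% threshold are hypothetical stand-ins for the disclosure's predefined image processing rules.

```python
DEVIATION_THRESHOLD = 0.10  # hypothetical "predefined image processing rule"

def deviation(image, reference):
    """Fraction of pixels that differ noticeably from the prestored reference."""
    total = changed = 0
    for row, ref_row in zip(image, reference):
        for px, ref_px in zip(row, ref_row):
            total += 1
            if abs(px - ref_px) > 25:  # hypothetical per-pixel tolerance
                changed += 1
    return changed / total

def has_fault(image, reference):
    """Declare a fault when the deviation exceeds the rule threshold."""
    return deviation(image, reference) > DEVIATION_THRESHOLD

reference = [[200] * 4 for _ in range(4)]                # intact road surface
captured = [[200] * 4 for _ in range(3)] + [[40] * 4]    # dark band: possible crack
```

Here `deviation(captured, reference)` is 0.25, so the captured grid would be declared as comprising a fault, while an unchanged grid would not.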
[0034] Furthermore, the system 110 may determine one or more dimensional parameters associated with the determined fault using the one or more sensing units 102. The one or more sensing units 102 may include, but are not limited to, ultrasonic sensors, laser sensors, and the like. The one or more dimensional parameters include, but are not limited to, a depth, an area covered by the fault over a surface of a road, a length of the fault, and the like. In an embodiment, the system 110 may predict the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters. Thereafter, the system 110 may output the predicted overall material required for rectifying the determined fault on the user interface of the electronic device 108. In an embodiment, the system 110 may estimate an overall cost value required for the predicted overall material required for rectifying the determined fault. For predicting the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters, the system 110 may apply the one or more dimensional parameters onto a trained deep learning-based prediction model. The system 110 may determine a weight score associated with each of the one or more dimensional parameters, based on an output of the trained deep learning-based prediction model. Further, the system 110 may predict the overall material required for rectifying the determined fault based on the order of the determined weight scores associated with each of the one or more dimensional parameters. For outputting the predicted overall material required for rectifying the determined fault on the user interface of the electronic device 108, the system 110 may generate one or more warning messages indicating the determined faults.
The one or more warning messages comprise the predicted overall material required for rectifying the determined fault, the determined fault and possible cause of the determined fault. Further, the system 110 may transmit the generated one or more warning messages to the electronic device 108 using the communication network 106.
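A minimal sketch of the material-prediction step above: the weight scores below stand in for the output of the trained deep learning-based prediction model, and the prism-volume and material-density assumptions are hypothetical simplifications rather than the disclosure's actual model.

```python
# Hypothetical weight scores, standing in for the output of the trained
# deep learning-based prediction model (one score per dimensional parameter).
WEIGHT_SCORES = {"depth_m": 0.5, "area_m2": 0.4, "length_m": 0.1}

def rank_parameters(weights):
    """Order the dimensional parameters by their weight score, highest first."""
    return sorted(weights, key=weights.get, reverse=True)

def material_required(depth_m, area_m2, density_kg_per_m3=2400):
    """Estimate fill material (kg) by treating the fault as a prism:
    volume = area x depth, mass = volume x assumed material density."""
    return area_m2 * depth_m * density_kg_per_m3

ordered = rank_parameters(WEIGHT_SCORES)            # depth weighted highest here
fill_kg = material_required(depth_m=0.05, area_m2=0.6)
```

For a fault of 0.6 m² at 0.05 m depth this yields roughly 72 kg of fill material at the assumed density; a warning message to the electronic device 108 could then carry this figure alongside the fault type and its possible cause.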
[0035] FIG. 2A illustrates an exemplary architecture of the proposed system 110, in accordance with an embodiment of the present disclosure. The system 110 may include the processor 112, the Input/Output (I/O) interface 114, and the memory 116. In some implementations, the system 110 may include data 202, and modules 204. As an example, the data 202 may be stored in the memory 116 configured in the system 110 as shown in the FIG. 2A. In an embodiment, the data 202 may include image data 206, classification data 208, fault data 210, dimensional parametric data 212, material data 214, output data 216, and other data 218. In an embodiment, the data 202 may be stored in the memory 116 in the form of various data structures. Additionally, the data 202 can be organized using data models, such as relational or hierarchical data models. The other data 218 may store data, including temporary data and temporary files, generated by the modules 204 for performing the various functions of the system 110.
[0036] In an embodiment, the modules 204, may include a capturing module 220, a classifying module 222, a determining module 224, a predicting module 226, an outputting module 228, and other modules 230.
[0037] In an embodiment, the data 202 stored in the memory 116 may be processed by the modules 204 of the system 110. The modules 204 may be stored within the memory 116. In an example, the modules 204, communicatively coupled to the processor 112 configured in the system 110, may also be present outside the memory 116, as shown in FIG. 2A, and implemented as hardware. As used herein, the term modules refers to an Application-Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
[0038] The memory 116 can include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0039] In an embodiment, the capturing module 220 may capture real time images of a road infrastructure using one or more image capturing units 104. The captured real time images may be stored as the image data 206. Further, the classifying module 222 may classify the real time images into one or more road asset categories based on an image classifier model. The classified real time images may be stored as the classification data 208. Furthermore, the determining module 224 may determine a fault associated with each of the classified real time images, based on one or more predefined image processing rules. The determined fault associated with each of the classified real time images may be stored as the fault data 210. Furthermore, the determining module 224 may determine one or more dimensional parameters associated with the determined fault using one or more sensing units 102. The one or more sensing units 102 include ultrasonic sensors, laser sensors, and the like. The one or more dimensional parameters comprise a depth, an area covered by the fault over a surface of a road, and a length of the fault. The determined one or more dimensional parameters may be stored as the dimensional parametric data 212. In an embodiment, the predicting module 226 may predict the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters. The predicted overall material required for rectifying the determined fault may be stored as the material data 214. The fault may include, but is not limited to, a road crack, a road pothole, bleeding, a block crack, an edge crack, a longitudinal crack, ravelling, a transverse crack, and the like. Thereafter, the outputting module 228 may output the predicted overall material required for rectifying the determined fault on a user interface of the electronic device 108.
The outputted predicted overall material required for rectifying the determined fault may be stored as the output data 216.
[0040] In an embodiment, the system 110 may estimate an overall cost value required for the predicted overall material required for rectifying the determined fault. For classifying the real time images into one or more road asset categories based on the image classifier model, the system 110 may extract one or more features associated with the captured real time images of the road infrastructure. Further, the classifying module 222 may classify the extracted one or more features into one or more road asset categories by applying the extracted one or more features onto a trained image classifier model. Further, the system 110 may assign a class label for each of the classified one or more features based on a type of the one or more road asset categories.
[0041] In an embodiment, the trained image classifier model comprises a mask regional convolutional neural network, computer vision methods, and the like. For determining the fault associated with each of the classified real time images based on the one or more predefined image processing rules, the system 110 may compare each of the classified real time images with the corresponding one or more prestored road images. Further, the determining module 224 may determine a deviation in each of the classified real time images based on an output of the comparison. Furthermore, the determining module 224 may determine whether the deviation in each classified real time image amounts to a fault, based on the one or more predefined image processing rules. Further, the system 110 may declare the classified real time image as comprising the fault if the deviation amounts to the fault. For predicting the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters, the system 110 may apply the one or more dimensional parameters onto a trained deep learning-based prediction model. The determining module 224 may determine a weight score associated with each of the one or more dimensional parameters, based on an output of the trained deep learning-based prediction model. Further, the predicting module 226 may predict the overall material required for rectifying the determined fault based on the order of the determined weight scores associated with each of the one or more dimensional parameters.
[0042] In an embodiment, the system 110 may perform root-cause analysis for the determined fault based on a trained neural network model, other models based on requirements, and the like. Further, the predicting module 226 may predict the overall material required for rectifying the determined fault based on the performed root-cause analysis. For outputting the predicted overall material required for rectifying the determined fault on the user interface of the electronic device 108, the system 110 may generate one or more warning messages indicating the determined faults. The one or more warning messages comprise the predicted overall material required for rectifying the determined fault, the determined fault, and a possible cause of the determined fault. Further, the system 110 may transmit the generated one or more warning messages to the electronic device 108 using the communication network 106.
Exemplary scenario
[0043] Consider a vehicle shown in FIG. 2B, where an exemplary network architecture may be implemented in the vehicle. The vehicle may include a platform 231 for sensor and camera support, a smart camera 232, an ultrasonic/laser/suitable sensor 233, a solar panel 234, a battery 235 for the solar panel, an Air Conditioner (AC) 236 for cooling the temperature inside the vehicle, back end device clones 237 of 231, 232, 233 and 244, a Central Processing Unit (CPU) 238 for the back end and front end devices, an activity display screen 239 for the front end device (for example, classification of a crack and cost estimation), an activity display screen 240 for the back end device (for example, classification of a crack and cost estimation), a stand 241 for gadgets such as a keyboard and mouse, a sitting chair 242, a storage space 243 to store the front end and back end devices after folding, and a stand support 244 for cameras and sensors.
[0044] In an instance, the system 110 may be associated with the CPU 238. The system 110 may receive a first and a second set of attributes from a first set of data packets received from the one or more image capturing units 104, such as the smart camera 232, and a second set of data packets received from the one or more sensing units 102, such as the ultrasonic/laser/suitable sensor 233, respectively. In an exemplary embodiment, the first set of data packets pertains to images associated with roads and the second set of data packets pertains to parameters associated with cracks on the roads. In an embodiment, the system 110 may classify and predict the first set of data packets pertaining to images of roads, based on the extracted first set of attributes, into a first output data set. The first output data set can pertain to the classification of various types of roads based on the extracted second set of attributes. In an exemplary embodiment, the classification of various types of roads can be done through complex image processing techniques using deep learning techniques such as k-nearest neighbour (KNN) and convolutional neural networks (CNN) such as Mask R-CNN, and the like.
[0045] In another instance, the system 110 may determine a cost analysis for cracks based on the extracted first and second sets of attributes applied on the first output data set. The system 110 can determine the area covered by the crack over the surface of the road based on the extracted first and second sets of attributes. The system 110 can then determine the cost of the cracks by evaluating the dimensions of the crack and the amount and volume of construction material required for a particular road, which can be obtained from the first output data set corresponding to the classification of roads. Based upon these parameters, the system 110 can generate an estimated cost and the construction material required to fill the specific crack. Thus, embodiments of the present disclosure provide a complete solution for sending alerts and reminders to the end users.
[0046] FIG. 3A illustrates an exemplary flow diagram depicting method 300A for classifying road asset categories, in accordance with an embodiment of the present disclosure.
[0047] At step 302, the method 300A may include capturing real-time images of the road as the input data. At step 304, the method 300A may include applying feature extraction to extract features from the captured images. At step 306, the method 300A may include applying image processing algorithms such as a classifier using KNN and Mask R-CNN. Further, at step 308, the method 300A may include classifying the images of the roads. Furthermore, at step 310, the method 300A may include predicting, so as to assign each image of a road a particular class of road. At step 312, the method 300A may include providing each image with its class label, or outputting each image with a class label.
[0048] FIG. 3B illustrates an exemplary flow diagram depicting method 300B for estimating cost, in accordance with an embodiment of the present disclosure.
[0049] At step 314, the method 300B may include receiving input images that are already classified. At step 316, the method 300B may include applying an image processing algorithm such as Mask R-CNN, which can determine the area covered by the crack over the surface of the road. At step 318, the method 300B may include determining the depth of the crack by using the one or more sensing units 102 such as, but not limited to, ultrasonic sensors and laser sensors. At step 320, the method 300B may include estimating the dimensions of the crack. At step 322, the method 300B may include determining, based upon the dimensions of the crack, the total material required to fill the crack. At step 324, the method 300B may include outputting the total cost for repairing the crack and displaying the output.
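The arithmetic behind steps 316 to 324 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the pixel-to-area calibration, fill-material density, and unit cost below are hypothetical placeholders, and a real deployment would derive the crack area from the segmentation mask and the depth from the sensing units 102:

```python
def estimate_repair(mask_pixels, pixel_area_m2, depth_m,
                    fill_density_kg_m3=2400.0, cost_per_kg=0.12):
    """Estimate the material and cost needed to fill a crack.

    mask_pixels   -- number of pixels flagged as crack by the segmentation mask
    pixel_area_m2 -- ground area represented by one pixel (camera calibration)
    depth_m       -- crack depth measured by the ultrasonic/laser sensing unit
    """
    area_m2 = mask_pixels * pixel_area_m2         # step 316: surface area
    volume_m3 = area_m2 * depth_m                 # step 320: crack dimensions
    material_kg = volume_m3 * fill_density_kg_m3  # step 322: total material
    cost = material_kg * cost_per_kg              # step 324: repair cost
    return {"area_m2": area_m2, "volume_m3": volume_m3,
            "material_kg": material_kg, "cost": cost}
```

For example, a crack covering 10,000 mask pixels at 1 cm² per pixel, 5 cm deep, yields 1 m² of area and 0.05 m³ of fill volume under these placeholder values.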
[0050] FIG. 4 illustrates an exemplary flow chart depicting a method 400 for managing road asset, in accordance with an embodiment of the present disclosure.
[0051] At block 402, the method 400 includes capturing, by the processor 112 associated with the system 110, real time images of a road infrastructure using one or more image capturing units 104. At block 404, the method 400 includes classifying, by the processor 112, the real time images into one or more road asset categories based on an image classifier model. At block 406, the method 400 includes determining, by the processor 112, a fault associated with each of the classified real time images, based on one or more predefined image processing rules. At block 408, the method 400 includes determining, by the processor 112, one or more dimensional parameters associated with the determined fault using one or more sensing units. The one or more dimensional parameters include a depth, an area covered by the fault over a surface of a road, a length of the fault, and the like. At block 410, the method 400 includes predicting, by the processor 112, the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters. At block 412, the method 400 includes outputting, by the processor 112, the predicted overall material required for rectifying the determined fault on a user interface of the electronic device 108.
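A minimal orchestration of blocks 402 to 412 might look like the sketch below. The four callables are hypothetical placeholders standing in for the classifier model, the image processing rules, the sensing units, and the prediction model described above:

```python
def manage_road_asset(image, classify, detect_fault, measure, predict_material):
    """Run blocks 402-412 of method 400 in sequence for one captured image.

    Each callable is a stand-in for the corresponding model or sensor.
    Returns None when no fault is found in the image.
    """
    category = classify(image)             # block 404: road asset category
    fault = detect_fault(image, category)  # block 406: fault determination
    if fault is None:
        return None
    dims = measure(fault)                  # block 408: depth, area, length
    material = predict_material(dims)      # block 410: material prediction
    return {"category": category, "fault": fault,        # block 412: output
            "dimensions": dims, "material": material}
```

Separating the stages behind callables keeps the pipeline testable: each model or sensing unit can be swapped or stubbed independently.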
[0052] FIG. 5 illustrates a hardware platform 500 for implementation of the disclosed system 110, according to an example embodiment of the present disclosure. For the sake of brevity, construction and operational features of the system 110 which are explained in detail above are not explained in detail herein. In particular, computing machines such as, but not limited to, internal/external server clusters, quantum computers, desktops, laptops, smartphones, tablets, and wearables may be used to execute the system 110 or may include the structure of the hardware platform 500. As illustrated, the hardware platform 500 may include additional components not shown, and some of the components described may be removed and/or modified. For example, a computer system with multiple GPUs may be located on external cloud platforms including Amazon® Web Services, on internal corporate cloud computing clusters, or on organizational computing resources, etc.
[0053] The hardware platform 500 may be a computer system such as the system 110 that may be used with the embodiments described herein. The computer system may represent a computational platform that includes components that may be in a server or another computer system. The computer system may execute, by the processor 505 (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions, and other processes described herein. These methods, functions, and other processes may be embodied as machine-readable instructions stored on a computer-readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory). The computer system may include the processor 505 that executes software instructions or code stored on a non-transitory computer-readable storage medium 510 to perform methods of the present disclosure. The software code includes, for example, instructions to gather data and images and analyse images. In an example, the modules 204 may be software codes or components performing these steps.
[0054] The instructions from the computer-readable storage medium 510 may be read and stored in the storage 515 or in random access memory (RAM). The storage 515 may provide a space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM, such as the RAM 520. The processor 505 may read the instructions from the RAM 520 and perform actions as instructed.
[0055] The computer system may further include the output device 525 to provide at least some of the results of the execution as output, including, but not limited to, visual information to users, such as external agents. The output device 525 may include a display on computing devices and virtual reality glasses. For example, the display may be a mobile phone screen or a laptop screen. GUIs and/or text may be presented as an output on the display screen. The computer system may further include an input device 530 to provide a user or another device with mechanisms for entering data and/or otherwise interacting with the computer system. The input device 530 may include, for example, a keyboard, a keypad, a mouse, or a touchscreen. Each of the output device 525 and the input device 530 may be joined by one or more additional peripherals. For example, the output device 525 may be used to display results such as the determined faults and the estimated repair material and cost.
[0056] A network communicator 535 may be provided to connect the computer system to a network and in turn to other devices connected to the network including other clients, servers, data stores, and interfaces, for instance. A network communicator 535 may include, for example, a network adapter such as a LAN adapter or a wireless adapter. The computer system may include a data sources interface 540 to access the data source 545. The data source 545 may be an information resource. As an example, a database of exceptions and rules may be provided as the data source 545. Moreover, knowledge repositories and curated data may be other examples of the data source 545.
[0057] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0058] Some of the advantages of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0059] The present disclosure provides a method and a system for managing road asset.
[0060] The present disclosure provides a method and a system to automatically detect and classify all types of road cracks or faults in a road infrastructure from bitumen and concrete road images.
[0061] The present disclosure provides a method and a system for managing highway maintenance automatically by determining road cracks.
[0062] The present disclosure provides a method and a system for determining one or more dimensional parameters of the crack or road infrastructure.
[0063] The present disclosure provides a method and a system for automatically classifying roads based on material with which roads are constructed.
[0064] The present disclosure provides a method and a system for estimating overall cost value required for the predicted overall material required for rectifying the determined fault.

CLAIMS:
1. A method for managing road asset comprising:
capturing, by a processor (112) associated with a system (110), real time images of a road infrastructure using one or more image capturing units (104);
classifying, by the processor (112), the real time images into one or more road asset categories based on an image classifier model;
determining, by the processor (112), a fault associated with each of the classified real time images, based on one or more predefined image processing rules;
determining, by the processor (112), one or more dimensional parameters associated with the determined fault using one or more sensing units (102), wherein the one or more dimensional parameters comprise a depth, an area covered by the fault over a surface of a road, and a length of the fault;
predicting, by the processor (112), overall material required for rectifying the determined fault based on the determined one or more dimensional parameters; and
outputting, by the processor (112), the predicted overall material required for rectifying the determined fault on a user interface of an electronic device (108).
2. The method as claimed in claim 1, further comprising:
estimating, by the processor (112), an overall cost value required for the predicted overall material required for rectifying the determined fault.
3. The method as claimed in claim 1, wherein classifying the real time images into one or more road asset categories based on the image classifier model further comprises:
extracting, by the processor (112), one or more features associated with the captured real time images of the road infrastructure;
classifying, by the processor (112), the extracted one or more features into one or more road asset categories by applying the extracted one or more features onto a trained image classifier model; and
assigning, by the processor (112), a class label for each of the classified one or more features based on a type of the one or more road asset categories.
4. The method as claimed in claim 3, wherein the trained image classifier model comprises a mask regional convolutional neural network (Mask R-CNN), and computer vision methods.
5. The method as claimed in claim 1, wherein determining the fault associated with each of the classified real time images based on the one or more predefined image processing rules further comprises:
comparing, by the processor (112), each of the classified real time images with corresponding one or more prestored road images;
determining, by the processor (112), a deviation in each of the classified real time images based on output of the comparison;
determining, by the processor (112), whether the deviation in each of the classified real time images amounts to a fault based on the one or more predefined image processing rules; and
declaring, by the processor (112), the classified real time image as comprising the fault, if the deviation in the classified real time image amounts to the fault.
6. The method as claimed in claim 1, wherein predicting the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters further comprises:
applying, by the processor (112), the one or more dimensional parameters onto a trained deep learning-based prediction model;
determining, by the processor (112), a weight score associated with each of the one or more dimensional parameters, based on an output of the trained deep learning-based prediction model; and
predicting, by the processor (112), the overall material required for rectifying the determined fault based on an order of the determined weight scores associated with each of the one or more dimensional parameters.
7. The method as claimed in claim 1, further comprising:
performing, by the processor (112), root-cause analysis for the determined fault based on a trained neural network model, and other models based on requirements; and
predicting, by the processor (112), the overall material required for rectifying the determined fault based on the performed root cause analysis.
8. The method as claimed in claim 1, wherein outputting the predicted overall material required for rectifying the determined fault on the user interface of the electronic device (108) further comprises:
generating, by the processor (112), one or more warning messages indicating the determined faults, wherein the one or more warning messages comprise the predicted overall material required for rectifying the determined fault, the determined fault and possible cause of the determined fault; and
transmitting, by the processor (112), the generated one or more warning messages to the electronic device (108) using a communication network (106).
9. The method as claimed in claim 1, wherein the fault comprises at least one of a road crack, a road pothole, a bleeding, a block crack, an edge crack, a longitudinal crack, a ravelling, and a transverse crack.
10. The method as claimed in claim 1, wherein the one or more sensing units (102) comprises ultrasonic sensors, and laser sensors.
11. A system (110) for managing road asset comprising:
a processor (112);
a memory (116) coupled to the processor (112), wherein the memory (116) comprising processor executable instructions, which on execution, causes the processor (112) to:
capture real time images of a road infrastructure using one or more image capturing units (104);
classify the real time images into one or more road asset categories based on an image classifier model;
determine a fault associated with each of the classified real time images, based on one or more predefined image processing rules;
determine one or more dimensional parameters associated with the determined fault using one or more sensing units (102), wherein the one or more dimensional parameters comprise a depth, an area covered by the fault over a surface of a road, and a length of the fault;
predict overall material required for rectifying the determined fault based on the determined one or more dimensional parameters; and
output the predicted overall material required for rectifying the determined fault on a user interface of an electronic device (108).
12. The system (110) as claimed in claim 11, wherein the processor (112) is further configured to:
estimate an overall cost value required for the predicted overall material required for rectifying the determined fault.
13. The system (110) as claimed in claim 11, wherein for classifying the real time images into one or more road asset categories based on the image classifier model, the processor (112) is configured to:
extract one or more features associated with the captured real time images of the road infrastructure;
classify the extracted one or more features into one or more road asset categories by applying the extracted one or more features onto a trained image classifier model; and
assign a class label for each of the classified one or more features based on a type of the one or more road asset categories.
14. The system (110) as claimed in claim 13, wherein the trained image classifier model comprises a mask regional convolutional neural network (Mask R-CNN), and computer vision methods.
15. The system (110) as claimed in claim 11, wherein for determining the fault associated with each of the classified real time images based on the one or more predefined image processing rules, the processor (112) is configured to:
compare each of the classified real time images with corresponding one or more prestored road images;
determine a deviation in each of the classified real time images based on output of the comparison;
determine whether the deviation in each of the classified real time images amounts to a fault based on the one or more predefined image processing rules; and
declare the classified real time image as comprising the fault, if the deviation in the classified real time image amounts to the fault.
16. The system (110) as claimed in claim 11, wherein for predicting the overall material required for rectifying the determined fault based on the determined one or more dimensional parameters, the processor (112) is configured to:
apply the one or more dimensional parameters onto a trained deep learning-based prediction model;
determine a weight score associated with each of the one or more dimensional parameters, based on an output of the trained deep learning-based prediction model; and
predict the overall material required for rectifying the determined fault based on an order of the determined weight scores associated with each of the one or more dimensional parameters.
17. The system (110) as claimed in claim 11, wherein the processor (112) is further configured to:
perform root-cause analysis for the determined fault based on a trained neural network model, and other models based on requirements; and
predict the overall material required for rectifying the determined fault based on the performed root cause analysis.
18. The system (110) as claimed in claim 11, wherein for outputting the predicted overall material required for rectifying the determined fault on the user interface of the electronic device (108), the processor (112) is further configured to:
generate one or more warning messages indicating the determined faults, wherein the one or more warning messages comprise the predicted overall material required for rectifying the determined fault, the determined fault and possible cause of the determined fault; and
transmit the generated one or more warning messages to the electronic device (108) using a communication network (106).
19. The system (110) as claimed in claim 11, wherein the fault comprises at least one of a road crack, a road pothole, a bleeding, a block crack, an edge crack, a longitudinal crack, a ravelling, and a transverse crack.
20. The system (110) as claimed in claim 11, wherein the one or more sensing units (102) comprises ultrasonic sensors, and laser sensors.
