Abstract: The present invention discloses a system and a method for analysing roadways. The system (102) comprises a surveillance vehicle (112), a space-based radio-navigation unit (114), a graphics processing unit (GPU) (116), a synchronization unit (118), a data analytics engine (120), and a graphical user interface unit (122). The surveillance vehicle (112) traverses the roadways to generate raw sensor data through one or more sensors and a sensor box. The space-based radio-navigation unit (114) generates geospatial coordinate data. The GPU (116) pre-processes the raw sensor data and the geospatial coordinate data. The synchronization unit (118) performs a temporal synchronisation and a spatial synchronisation for generating unified roadway data. The data analytics engine (120) analyses pre-processed unified roadway data to detect one or more roadway defects obscured in adverse conditions. The graphical user interface unit (122) provides a visual representation of the one or more roadway defects. Figure 1
EARLIEST PRIORITY DATE:
This Application claims priority from a Provisional patent application filed in India having Patent Application No. 202311079932, filed on January 24, 2024, and titled “SYSTEM FOR MONITORING ROADWAY AND METHOD THEREOF”.
FIELD OF INVENTION
Embodiments of the present invention relate to road condition assessment systems and more particularly relate to a system and a method for analysing roadways.
BACKGROUND
Conditions of road infrastructure are a matter of significant concern for both public safety and economic efficiency. Roadways, highways, and other transportation networks serve as vital arteries for the movement of individuals and goods, and the upkeep of these networks is essential for maintaining a functioning society. Proper road maintenance ensures road safety, minimizes wear and tear on vehicles, and reduces overall transportation costs.
Design and construction of roadways play a pivotal role in upholding optimal infrastructure performance within urban areas, as well as ensuring the safety of the individuals travelling on the roadways. With an escalating number of vehicles on the roadways, there is a growing imperative to intensify efforts in deploying monitoring systems dedicated to safeguarding the conditions of the roadways. This surge in vehicular traffic also contributes to increased wear and tear on roadway surfaces.
In recent years, advancements in the transportation sector have prompted a global emphasis on monitoring the roadways to establish a sustainable transportation network. The abundance of information and sensing technologies drives the development of advanced road pavement monitoring systems. Traditional road monitoring and maintenance systems have been in practice for many years and serve as the basis for guaranteeing the safety, operational efficiency, and longevity of the roadways and highway infrastructures.
However, the traditional road monitoring and maintenance systems exhibit inefficiencies in adverse weather conditions, including fog and rain, as well as during day and night operations. The traditional road monitoring and maintenance systems primarily rely on a limited number of sensors, particularly laser technology, which yields less precise outcomes. Currently employed ground-penetrating radar (GPR) systems are bulky and uneconomical, rendering the GPR systems impractical for comprehensive road defect detection. Furthermore, issues arise due to the lack of synchronisation among sensor data collected from the limited number of sensors, thereby resulting in potential errors.
In an existing technology, a system for monitoring roadway anomalies is disclosed. The system processes sensor data to detect and classify a type of roadway anomaly based on predefined filters. The system then provides detailed information about the roadway anomaly, such as size, depth, and location of the roadway anomaly, to maintenance entities for remediation. Nevertheless, the system lacks real-time video synchronization and the integration of multiple sensors for more comprehensive anomaly detection, which may improve accuracy and reduce false positives.
There are various technical problems with road monitoring and maintenance systems in the prior art. In the existing technology, the traditional road monitoring and maintenance systems primarily rely on visual inspections and manual surveys, which provide limited data on road conditions. This results in a lack of detailed information about the extent and severity of the roadway anomalies. The manual surveys cover only a small portion of a road network at a given time; thus, some road segments are neglected or inspected infrequently. Weather conditions significantly impact the effectiveness of the traditional road monitoring and maintenance systems. Rain, snow, and fog tend to hinder the inspections and maintenance operations of the traditional road monitoring and maintenance systems. Furthermore, the traditional road monitoring and maintenance systems require a significant allocation of resources, including labour, equipment, and time. This in turn strains budgets and leads to inefficiencies.
Therefore, there is a need for a system to address the aforementioned issues by providing a more proactive and efficient approach to maintaining the road infrastructure effectively and enhancing safety for the individuals.
SUMMARY
This summary is provided to introduce a selection of concepts, in a simple manner, which are further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
In order to overcome the above deficiencies of the prior art, the present disclosure solves the technical problem by providing a system for analysing roadways.
In accordance with an embodiment of the present invention, the system for analysing the roadways is disclosed. The system comprises a surveillance vehicle, a space-based radio-navigation unit, a graphics processing unit (GPU), a synchronization unit, a data analytics engine, and a graphical user interface unit.
Yet in another embodiment, the surveillance vehicle is equipped with at least one of: one or more sensors and a sensor box. The surveillance vehicle is configured to traverse the roadways to generate raw sensor data through at least one of: the one or more sensors and the sensor box. The one or more sensors in the sensor box comprise at least one of: one or more complementary metal oxide semiconductor (CMOS) sensors, one or more depth sensors, one or more light detection and ranging (LiDAR) sensors, one or more inertial measurement unit (IMU) sensors, one or more ultrasonic sensors, one or more radio detection and ranging (RaDAR) sensors, one or more gyroscope sensors, one or more accelerometer sensors, one or more sound navigation and ranging (SoNAR) sensors, ground-penetrating radar (GPR), and one or more red-green-blue (RGB) sensors.
Yet in another embodiment, the space-based radio-navigation unit is operatively positioned in the surveillance vehicle. The space-based radio-navigation unit is configured to generate geospatial coordinate data associated with the surveillance vehicle. Yet in another embodiment, the GPU is operatively connected to the sensor box and the space-based radio-navigation unit. The GPU is configured to pre-process the raw sensor data and the geospatial coordinate data through at least one of: resizing, noise reduction, normalization, and data augmentation techniques.
Yet in another embodiment, the synchronization unit is operatively connected to the GPU. The synchronization unit is configured to perform at least one of: a temporal synchronisation and a spatial synchronisation of obtained at least one of: the raw sensor data and the geospatial coordinate data for generating unified roadway data. The synchronization unit is configured to: a) align the obtained raw sensor data from the one or more sensors based on timestamps obtained from the space-based radio-navigation unit for temporal synchronisation and b) synchronise the raw sensor data with the geospatial coordinate data to perform the spatial synchronisation during traversal of the surveillance vehicle. The synchronization unit is configured to compute distance parameters between consecutive data frames in the raw sensor data captured by the one or more sensors to identify and track a roadway defect progression.
Yet in another embodiment, the data analytics engine is operatively connected to the synchronization unit. The data analytics engine is configured to: a) analyse the pre-processed unified roadway data through at least one of: one or more artificial intelligence models and one or more machine learning models trained on an annotated dataset of regional roadways, to detect one or more roadway defects obscured in adverse conditions and b) classify the detected one or more roadway defects into one or more categories based on at least one of: defect severity, frequency of occurrence, potential safety risks, and proximity to critical infrastructure. The one or more categories comprise at least one of: potholes, cracks, rutting, shoving, surface deformities, alligator cracking, edge breaks, ravelling, road sags, bleeding, corrugations, pumping, and polished aggregate. The one or more artificial intelligence models and the one or more machine learning models are trained through a Single Shot Multibox Detector (SSD) framework with a MobileNetV2 backbone on the annotated dataset. The annotated dataset comprises the regional roadways with one or more characteristics comprising at least one of: varying defect types, environmental challenges, and construction materials. The data analytics engine is configured to generate at least one of: roadways analysis data and artifacts data including at least one of: one or more road condition heatmaps, one or more defect distribution visualizations, and one or more predictive maintenance schedules for roadway management.
Yet in another embodiment, the graphical user interface unit is operatively connected to the data analytics engine. The graphical user interface unit is configured to provide a visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in a colour-coded manner, with a defined threshold score based on user-defined one or more threshold parameters for analysing the roadways. The user-defined one or more threshold parameters are dynamically adjusted based on at least one of: real-time roadway defect progression, historical defect trends in the analysed region, environmental conditions detected by the one or more sensors, and traffic volume and load data captured during surveillance. The user-defined one or more threshold parameters are configurable to classify the one or more roadway defects into severity levels comprising at least one of: tolerable, marginal, and intolerable, based on one or more defect attributes. The one or more defect attributes comprise at least one of: defect size, defect frequency, and defect density within a predefined roadway segment, enabling customised analysis of the roadways.
In accordance with an embodiment of the present invention, the system comprises one or more hardware processors and a memory unit. The memory unit is coupled to the one or more hardware processors, wherein the memory unit comprises a plurality of subsystems in the form of programmable instructions executable by the one or more hardware processors. The plurality of subsystems comprises a surveillance subsystem, a synchronisation subsystem, a data processing subsystem, and an output subsystem.
Yet in another embodiment, the surveillance subsystem is configured to receive the raw sensor data from at least one of: the one or more sensors and the sensor box in real-time during surveillance of the roadways by the surveillance vehicle.
Yet in another embodiment, the synchronisation subsystem is configured to perform at least one of: the temporal synchronisation and the spatial synchronisation of obtained at least one of: the raw sensor data and the geospatial coordinate data from the space-based radio-navigation unit for generating the unified roadway data.
Yet in another embodiment, the data processing subsystem is configured with the data analytics engine to: a) pre-process the unified roadway data through at least one of: the resizing, the noise reduction, the normalization, and the data augmentation techniques through the GPU, b) analyse the pre-processed unified roadway data through at least one of: the one or more artificial intelligence models and the one or more machine learning models trained on the annotated dataset of the regional roadways, to detect the one or more roadway defects obscured in the adverse conditions and c) classify the detected one or more roadway defects into the one or more categories based on at least one of: the defect severity, the frequency of occurrence, the potential safety risks, and the proximity to critical infrastructure.
Yet in another embodiment, the output subsystem is configured with the graphical user interface unit to provide the visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in the colour-coded manner, with the defined threshold score based on the user-defined one or more threshold parameters for analysing the roadways.
In accordance with an embodiment of the present invention, a method for analysing the roadways is disclosed. In the first step, the method includes receiving, by the one or more hardware processors through the surveillance subsystem, the raw sensor data from at least one of: the one or more sensors and the sensor box in real-time during surveillance of the roadways by the surveillance vehicle.
In the next step, the method includes synchronising, by the one or more hardware processors through the synchronisation subsystem, obtained at least one of: the raw sensor data and the geospatial coordinate data from the space-based radio-navigation unit for generating the unified roadway data. Synchronising comprises at least one of: the temporal synchronisation and the spatial synchronisation.
In the next step, the method includes pre-processing, by the one or more hardware processors through the GPU, the unified roadway data through at least one of: the resizing, the noise reduction, the normalization, and the data augmentation techniques.
In the next step, the method includes analysing, by the one or more hardware processors through a data processing subsystem, the pre-processed unified roadway data through at least one of: the one or more artificial intelligence models and the one or more machine learning models trained on the annotated dataset of the regional roadways, to detect the one or more roadway defects obscured in the adverse conditions.
In the next step, the method includes classifying, by the one or more hardware processors through the data processing subsystem, the detected one or more roadway defects into the one or more categories based on at least one of: the defect severity, the frequency of occurrence, the potential safety risks, and the proximity to the critical infrastructure.
In the next step, the method includes providing, by the one or more hardware processors through the output subsystem, the visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in the colour-coded manner, with the defined threshold score based on the user-defined one or more threshold parameters to analyse the roadways.
To further clarify the advantages and features of the present invention, a more particular description of the invention will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the invention and are therefore not to be considered limiting in scope. The invention will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
Figure 1 illustrates an exemplary block diagram representation of a network architecture implementing a system for analysing roadways, in accordance with an embodiment of the present invention;
Figure 2A illustrates an exemplary block diagram representation of the system as shown in Figure 1 for analysing the roadways, in accordance with an embodiment of the present invention;
Figure 2B illustrates an exemplary visual representation depicting a sensor box associated with the system positioned on a surveillance vehicle, in accordance with an embodiment of the present invention; and
Figure 3 illustrates an exemplary flow diagram depicting a method for analysing the roadways, in accordance with an embodiment of the present invention.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, the method steps, equipment, and parameters used herein may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more components, compounds, and ingredients preceded by "comprises... a" do not, without more constraints, preclude the existence of other components or compounds or ingredients or additional components. Appearances of the phrases "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present invention relate to a system for analysing roadways.
Figure 1 illustrates an exemplary block diagram representation of a network architecture 100 implementing the system 102 for analysing the roadways, in accordance with an embodiment of the present invention.
According to an exemplary embodiment of the disclosure, the system 102 for analysing the roadways is disclosed. The network architecture 100 may include the system 102, one or more databases 124, and one or more communication devices 128. The system 102, the one or more databases 124, and the one or more communication devices 128 may be communicatively coupled via one or more communication networks 126, ensuring seamless data transmission, processing, and decision-making. The system 102 acts as a central processing unit within the network architecture 100, responsible for analysing the roadways.
In an exemplary embodiment, the system 102 comprises one or more servers 104. The one or more servers 104 may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, or other suitable hardware. The one or more servers 104 comprise one or more hardware processors 106 and a memory unit 108. The memory unit 108 is operatively connected to the one or more hardware processors 106. The memory unit 108 comprises programmable instructions in the form of a plurality of subsystems 110, configured to be executed by the one or more hardware processors 106.
In an exemplary embodiment, the one or more hardware processors 106 may include, for example, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate data or signals based on operational instructions. Among other capabilities, the one or more hardware processors 106 may fetch and execute the programmable instructions in the memory unit 108 operationally coupled with the system 102 for performing tasks such as data processing, input/output processing, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being performed, or that may be performed, on data. The one or more hardware processors 106 are high-performance processors capable of handling large volumes of data and complex computations. The one or more hardware processors 106 may be, but not limited to, at least one of: multi-core central processing units (CPU), graphics processing units (GPUs), and the like, that enhance an ability of the system 102 to process real-time data from one or more sources simultaneously.
In an exemplary embodiment, the one or more databases 124 may be configured to store and manage data related to various aspects of the system 102. The one or more databases 124 may store at least one of, but not limited to, raw sensor data, geospatial coordinate data, any other information necessary for the functionality and optimization of the system 102, and the like. The one or more databases 124 may include different types of databases such as, but not limited to, relational databases (e.g., Structured Query Language (SQL) databases such as PostgreSQL and Oracle® databases), non-Structured Query Language (NoSQL) databases (e.g., MongoDB, Cassandra), time-series databases (e.g., InfluxDB), an OpenSearch database, object storage systems (e.g., Amazon® S3), and the like.
In an exemplary embodiment, the one or more communication devices 128 are configured to enable one or more users to interact with the system 102. The one or more communication devices 128 may be digital devices, computing devices, and/or networks. The one or more communication devices 128 may include, but not limited to, a mobile device, a smartphone, a personal digital assistant (PDA), a tablet computer, a phablet computer, a wearable computing device, a virtual reality/augmented reality (VR/AR) device, a laptop, a desktop, and the like.
In an exemplary embodiment, the one or more communication networks 126 may be, but not limited to, a wired communication network and/or a wireless communication network, a local area network (LAN), a wide area network (WAN), a Wireless Local Area Network (WLAN), a metropolitan area network (MAN), a telephone network, such as the Public Switched Telephone Network (PSTN) or a cellular network, an intranet, the Internet, a fibre optic network, a satellite network, a cloud computing network, a combination of networks, and the like. The wired communication network may comprise, but not limited to, at least one of: Ethernet connections, Fiber Optics, Power Line Communications (PLCs), Serial Communications, Coaxial Cables, Quantum Communication, Advanced Fiber Optics, Hybrid Networks, and the like. The wireless communication network may comprise, but not limited to, at least one of: wireless fidelity (wi-fi), cellular networks (including fourth generation (4G) technologies and fifth generation (5G) technologies), Bluetooth®, ZigBee®, long-range wide area network (LoRaWAN), satellite communication, radio frequency identification (RFID), 6G (sixth generation) networks, advanced IoT protocols, mesh networks, non-terrestrial networks (NTNs), near field communication (NFC), and the like.
In an exemplary embodiment, the system 102 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. The system 102 may be implemented in hardware or a suitable combination of hardware and software.
Though few components and the plurality of subsystems 110 are disclosed in Figure 1, there may be additional components and subsystems which are not shown, such as, but not limited to, ports, routers, repeaters, firewall devices, network devices, the one or more databases 124, network attached storage devices, assets, machinery, instruments, facility equipment, emergency management devices, image capturing devices, any other devices, and combinations thereof. The person skilled in the art should not construe the components/subsystems shown in Figure 1 as limiting. Although Figure 1 illustrates the system 102, and the one or more communication devices 128 connected to the one or more databases 124, one skilled in the art can envision that the system 102, and the one or more communication devices 128 may be connected to several user devices located at various locations and several databases via the one or more communication networks 126.
Those of ordinary skill in the art will appreciate that the hardware depicted in Figure 1 may vary for particular implementations. For example, other peripheral devices such as an optical disk drive and the like, the local area network (LAN), the wide area network (WAN), wireless (e.g., wireless-fidelity (Wi-Fi)) adapter, graphics adapter, disk controller, and input/output (I/O) adapter may also be used in addition to, or in place of, the hardware depicted. The depicted example is provided for explanation only and is not meant to imply architectural limitations concerning the present disclosure.
Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not being depicted or described herein. Instead, only so much of the system 102 as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the system 102 may conform to any of the various current implementations and practices known in the art.
The system 102 further comprises a surveillance vehicle 112 that further integrates at least one of: a space-based radio-navigation unit 114, a graphics processing unit (GPU) 116, a synchronization unit 118, a data analytics engine 120, a graphical user interface unit 122, and the like.
In an exemplary embodiment, the surveillance vehicle 112 is equipped with at least one of: one or more sensors 214 (shown in Figure 2B) and a sensor box 220 (shown in Figure 2B). The surveillance vehicle 112 may be, but not restricted to, at least one of: a four-wheeler automobile, a two-wheeler automobile, an unmanned aerial vehicle (UAV), and the like. The surveillance vehicle 112 is configured to traverse the roadways to generate the raw sensor data through at least one of: the one or more sensors and the sensor box. The one or more sensors in the sensor box may comprise, but not constrained to, at least one of: one or more complementary metal oxide semiconductor (CMOS) sensors, one or more depth sensors, one or more light detection and ranging (LiDAR) sensors 214c (shown in Figure 2B), one or more inertial measurement unit (IMU) sensors, one or more ultrasonic sensors 214b (shown in Figure 2B), one or more radio detection and ranging (RaDAR) sensors, one or more gyroscope sensors, one or more accelerometer sensors, one or more sound navigation and ranging (SoNAR) sensors 214d (shown in Figure 2B), ground-penetrating radar (GPR), one or more red-green-blue (RGB) sensors 214a (shown in Figure 2B), and the like.
At least one of: the one or more sensors and the sensor box are operatively positioned at diverse locations on the surveillance vehicle 112. The one or more CMOS sensors are incorporated within one or more cameras. At least one primary camera of the one or more cameras is positioned at a first side 218 (shown in Figure 2B) of the surveillance vehicle 112. The first side 218 (shown in Figure 2B) represents an upper side of the surveillance vehicle 112. The at least one primary camera of the one or more cameras facilitates capturing one or more images of a surrounding environment in proximity to the roadways. Similarly, at least one secondary camera of the one or more cameras may be positioned at a first end 216 (shown in Figure 2B) of the surveillance vehicle 112. The first end 216 (shown in Figure 2B) represents a frontal portion of the surveillance vehicle 112. The at least one secondary camera of the one or more cameras facilitates capturing the one or more images of a roadway surface. The one or more cameras are configured to convert three-dimensional (3D) information captured by the at least one primary camera and the at least one secondary camera into two-dimensional (2D) information. The one or more cameras adopt a camera calibration method. The camera calibration method is configured to determine intrinsic and extrinsic parameters of the one or more cameras to correct image distortions and accurately map image coordinates to real-world coordinates. The one or more cameras capture the one or more images of the roadway surface from various angles and then establish a radial lens distortion model to support simultaneous localization and mapping (SLAM). The radial lens distortion model is configured to correct optical distortions caused by curvature of a camera lens, ensuring accurate image geometry. The SLAM is configured to build a spatial map of the environment while simultaneously determining the location of the one or more cameras within the spatial map. The one or more CMOS sensors further assist in a plurality of operations for road monitoring and maintenance including, but not limited to, at least one of: target recognition, navigation, environment mapping and localization, motion capture and analysis, defect detection, base forming for the one or more sensors, and the like.
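For illustration only and not by way of limitation, the camera calibration method described above may be sketched in Python using OpenCV's chessboard-based calibration routine. The board dimensions and file paths below are hypothetical placeholders, not parameters fixed by this disclosure.

```python
# Illustrative sketch of chessboard-based camera calibration with OpenCV.
# Board size and image paths are hypothetical; calibration images are
# assumed to exist and to contain a visible chessboard pattern.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per chessboard row/column (assumed)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # hypothetical path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics (camera matrix), distortion coefficients, and
# per-view extrinsics (rotation/translation vectors).
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort a road-surface image so that image coordinates map more
# accurately to real-world coordinates, per the radial distortion model.
frame = cv2.imread("road_frame.png")  # hypothetical path
undistorted = cv2.undistort(frame, mtx, dist)
```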
The one or more depth sensors are incorporated within the one or more cameras, thereby extending the at least one primary camera and the at least one secondary camera. The one or more depth sensors are configured to capture stereoscopic depth information of the surrounding environment in proximity to the roadways. By comparing differences between the one or more images captured by the one or more cameras, the one or more depth sensors calculate one or more depth parameters of one or more objects within the surrounding environment in proximity to the roadways. The depth information refers to a distance between the one or more depth sensors and the roadways for the classification and identification of the one or more roadway defects and the one or more objects. The one or more objects may comprise, but not restricted to, at least one of: speed breakers, stones, buildings, curbs, pedestrians, vehicles, and the like. Furthermore, the one or more RaDAR sensors are positioned at a second end 222 (shown in Figure 2B) of the surveillance vehicle 112. The second end 222 (shown in Figure 2B) represents a rear portion of the surveillance vehicle 112.
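The stereoscopic depth calculation described above follows the classical pinhole-stereo relationship, depth Z = (focal length × baseline) / disparity. A minimal sketch, with hypothetical focal length and baseline values, is:

```python
# Illustrative stereo depth from disparity: Z = (focal_px * baseline_m) / disparity_px.
# The focal length and baseline are hypothetical example values.
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return depth in metres for a pixel disparity between the two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (focal_px * baseline_m) / disparity_px

# Example: a 20-pixel disparity maps to (700 * 0.12) / 20 = 4.2 m.
print(depth_from_disparity(20.0))  # 4.2
```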
The one or more LiDAR sensors are configured to determine the distance of the one or more objects and the roadway surfaces in the surrounding environment. The one or more LiDAR sensors are configured to determine the distance by emitting laser pulses and measuring the duration the laser pulses take to bounce off the one or more objects and return to the one or more LiDAR sensors. The one or more LiDAR sensors further facilitate the creation of a point cloud image, which represents the spatial distribution of the one or more objects within a field of view of the one or more LiDAR sensors. The point cloud image generated by the one or more LiDAR sensors is a crucial component of the perception capabilities of the system 102. The point cloud image enables the system 102 to perceive and recognize the one or more objects within the environment, including the one or more objects in proximity to the roadways. This perception is fundamental for at least one of: object detection, navigation, road defect identification, and the like. The one or more LiDAR sensors may comprise, but not restricted to, at least one of: a two-dimensional (2D) LiDAR, a three-dimensional (3D) LiDAR, a time of flight (TOF) LiDAR, a triangulating LiDAR, a phase-ranging LiDAR, and the like.
The one or more cameras may include the one or more RGB sensors. The one or more RGB sensors are operatively positioned on the first side 218 (shown in Figure 2B) of the surveillance vehicle 112 facing downwards. The one or more RGB sensors are configured to capture visual data for detecting and analysing roadway conditions. The one or more ultrasonic sensors are operatively positioned on the first end 216 (shown in Figure 2B). The one or more ultrasonic sensors are configured to employ sound waves in measuring the one or more roadway defects. The one or more ultrasonic sensors are configured to operate at a frequency of 40 kilohertz (kHz) with a sensing range between 2 centimetres (cm) and 400 centimetres (cm). The distance between the one or more ultrasonic sensors and the roadway surfaces is calculated from the time difference between a transmission and a reception of an echo signal from the one or more ultrasonic sensors. An output of the echo signal is then fused with data frames of the streams from the one or more LiDAR sensors.
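A minimal sketch of the echo-time distance computation described above, assuming sound travels at approximately 343 metres per second in air; the division by two accounts for the round trip of the pulse:

```python
# Illustrative echo-ranging: distance = speed_of_sound * elapsed_time / 2.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def ultrasonic_distance_m(echo_delay_s):
    """Distance to the roadway surface from transmit-to-echo delay (seconds)."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

# Example: a 5.83 ms round-trip delay corresponds to roughly 1.0 m.
print(ultrasonic_distance_m(0.00583))  # ~1.0
```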
The one or more gyroscope sensors are configured to detect rotational movements and orientation changes of the surveillance vehicle 112. The one or more accelerometer sensors are configured to measure linear acceleration and vibrations, aiding in motion detection of the surveillance vehicle 112. The one or more SoNAR sensors are configured to detect underwater and surface-level features and assist in the object detection in adverse conditions. The GPR is configured to detect subsurface defects, such as voids and cracks, on the roadways.
The system 102 is configured to handle environmental challenges such as fog, rain, and night operations, which may affect the performance of the one or more RGB sensors. In the adverse conditions, the system 102 uses a combination of other sensors, such as the one or more LiDAR sensors and the one or more ultrasonic sensors, to ensure that the one or more roadway defects are still detected accurately. The one or more ultrasonic sensors may provide clear data even when the one or more images are obscured due to poor visibility.
In an exemplary embodiment, the space-based radio-navigation unit 114 is operatively positioned in the surveillance vehicle 112. The space-based radio-navigation unit 114 may comprise, but is not limited to, one of: a Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), Galileo, Apple® Maps, Here WeGo, NavIC (Navigation with Indian Constellation), Inertial Navigation Systems (INS), and the like. The space-based radio-navigation unit 114 is operatively linked to the GPU 116 through a peripheral mechanism. The space-based radio-navigation unit 114 is configured to provide the geospatial coordinate data associated with the surveillance vehicle 112. The data from the space-based radio-navigation unit 114 and the one or more IMU sensors are fused with the data associated with the one or more cameras and data from the one or more LiDAR sensors to improve the robustness and accuracy of the positions of the one or more roadway defects and of the surveillance vehicle 112. This sensor fusion aids in assessing the certainty of global positioning system (GPS) signals through semi-supervised clustering of inertial measurement unit (IMU) information, thereby enhancing the robustness and achieving improved positioning of the surveillance vehicle 112 even if the GPS signals are degraded.
The space-based radio-navigation unit 114 aids in generating an essential visual representation critical for guiding a survey process of the roadways. The visual representation may encompass elements including, but not constrained to, at least one of: digital maps, route information, waypoints, and the like, that facilitate the navigation of the surveillance vehicle 112 along the roadways to be inspected. Further, the geospatial coordinate data is fundamental to the survey process of at least one of: capturing, storing, and synchronizing roadway data (sensor data). This geospatial context aids in ensuring that the gathered roadway data aligns accurately with specific locations along the roadways, enabling comprehensive and contextually relevant survey data.
In an exemplary embodiment, the GPU 116 is operatively connected to both the sensor box and the space-based radio-navigation unit 114 to ensure efficient processing of the raw sensor data and the geospatial coordinate data in real-time. The raw sensor data may contain noise, inconsistencies, and varying scales, which may affect the accuracy of the roadway analysis. By leveraging the parallel processing capabilities of the GPU 116, the system 102 may handle large and complex datasets effectively.
The pre-processing of the raw sensor data and the geospatial coordinate data is an essential step to ensure that raw sensor data and the geospatial coordinate data are clean, consistent, and ready for analysis. The GPU 116 is configured to pre-process the raw sensor data and the geospatial coordinate data through at least one of: resizing, noise reduction, normalization, and data augmentation techniques. The resizing adjusts the raw sensor data and the geospatial coordinate data to a standard format for easier comparison and processing. The noise reduction filters out irrelevant and erroneous data that may lead to incorrect roadway defect detection. The noise reduction is performed by averaging readings from the one or more sensors, ensuring that the raw sensor data and the geospatial coordinate data remain reliable despite environmental interference. The normalization scales the raw sensor data and the geospatial coordinate data to a consistent range to eliminate discrepancies in sensor measurements. The data augmentation techniques (e.g., flipping and rotation) artificially increase the datasets by creating variations of existing raw sensor data and the geospatial coordinate data to improve accuracy.
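For illustration, the four pre-processing stages named above may be sketched as follows; the target resolution, normalisation range, and augmentation choices are assumptions for the sketch rather than parameters fixed by this disclosure:

```python
# Illustrative pre-processing pipeline: resize, denoise, normalise, augment.
import cv2
import numpy as np

def preprocess_frame(frame):
    """Resize to a standard format, reduce noise, and scale to [0, 1]."""
    resized = cv2.resize(frame, (640, 480))           # resizing (assumed size)
    denoised = cv2.GaussianBlur(resized, (5, 5), 0)   # noise reduction
    return denoised.astype(np.float32) / 255.0        # normalization

def augment(frame):
    """Data augmentation: flipped and rotated variants of a frame."""
    return [
        frame,
        cv2.flip(frame, 1),                           # horizontal flip
        cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE),   # 90-degree rotation
    ]
```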
In an exemplary embodiment, the synchronization unit 118 is operatively connected to at least one of: the one or more sensors, the sensor box, the GPU 116, and the space-based radio-navigation unit 114. The synchronization unit 118 is configured to obtain at least one of: the raw sensor data, and the geospatial coordinate data. The synchronization unit 118 is configured to synchronize and align at least one of: the raw sensor data, and the geospatial coordinate data. The synchronization unit 118 is configured to perform at least one of: a temporal synchronisation and a spatial synchronisation of the obtained at least one of: raw sensor data and geospatial coordinate data for generating unified roadway data. The unified roadway data enhances the perception, navigation, and decision-making capabilities of the system 102 for effective roadway monitoring and maintenance.
The synchronization unit 118 is configured to perform the temporal synchronisation by aligning the raw sensor data to timestamps obtained from the space-based radio-navigation unit 114 to ensure that the raw sensor data is appropriately associated with an accurate time. The synchronization unit 118 is configured to perform the spatial synchronisation, which involves mapping the raw sensor data to the geospatial coordinate data to ensure that the raw sensor data aligns with the position of the surveillance vehicle 112 on the roadways during traversal.
Further, the synchronization unit 118 is configured to track the progression of the one or more roadway defects over time. The synchronization unit 118 is configured to compute distance parameters between consecutive data frames in the raw sensor data captured by the one or more sensors. This capability enables the system 102 to accurately identify and track the one or more roadway defects as the surveillance vehicle 112 moves along the roadways, even accounting for any variations in speed and environmental conditions. By employing at least one of: the temporal synchronisation and the spatial synchronisation, the synchronization unit 118 ensures that the system 102 may provide precise, real-time insights into the status of the roadways, allowing for more effective and targeted detection and maintenance of the one or more roadway defects.
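One plausible realisation of the temporal and spatial synchronisation, sketched under the assumption that each sensor frame and each navigation fix carries its own timestamp, matches every frame to the nearest navigation timestamp (temporal synchronisation) and attaches the corresponding coordinates to that frame (spatial synchronisation):

```python
# Illustrative temporal + spatial synchronisation against navigation fixes.
# The data layout (timestamped frames and fixes) is an assumption.
import bisect

def synchronise(frames, fixes):
    """frames: list of (t_sensor, frame_data);
    fixes: list of (t_gps, lat, lon), sorted by t_gps.
    Returns unified records [(t_gps, lat, lon, frame_data), ...]."""
    fix_times = [f[0] for f in fixes]
    unified = []
    for t, data in frames:
        i = bisect.bisect_left(fix_times, t)  # nearest-timestamp search
        if i > 0 and (i == len(fixes)
                      or abs(fix_times[i - 1] - t) <= abs(fix_times[i] - t)):
            i -= 1
        t_gps, lat, lon = fixes[i]
        unified.append((t_gps, lat, lon, data))  # spatial tag on aligned frame
    return unified
```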
The temporal synchronisation is configured to create a unified and coherent representation of the surrounding environment. A relation between the data frames and the distance of the data frames in the raw sensor data is expressed in Equation 1.
Equation 1:
d = 2 × (1000 / fps) × v
From Equation 1, ‘d’ denotes the distance between two data frames of the raw sensor data, ‘fps’ denotes data frames per second configured onto the raw sensor data, and ‘v’ denotes the velocity of the surveillance vehicle 112. The distance ‘d’ is an important measurement unit for localization of the one or more roadway defects identified in the data frames. The synchronization unit 118 considers the distance ‘d’ while applying at least one of: the temporal synchronisation and the spatial synchronisation.
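As a worked instance of Equation 1 with illustrative values only (the units of 'v' and 'd' below are assumptions for this example), at 30 frames per second and a velocity of 10 metres per second the inter-frame distance evaluates to approximately 666.7:

```python
# Worked instance of Equation 1: d = 2 * (1000 / fps) * v.
fps = 30.0  # data frames per second (assumed)
v = 10.0    # vehicle velocity (assumed, metres per second)
d = 2 * (1000.0 / fps) * v  # frame spacing used for defect localization
print(d)    # ≈ 666.7, in the distance units implied by Equation 1
```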
In an exemplary embodiment, the data analytics engine 120 is operatively connected to the synchronization unit 118. The data analytics engine 120 is configured to analyse and classify the one or more roadway defects into one or more categories by leveraging at least one of: one or more artificial intelligence models and one or more machine learning models. The one or more categories may comprise, but not limited to, at least one of: potholes, cracks, rutting, shoving, surface deformities, alligator cracking, edge breaks, ravelling, road sags, bleeding, corrugations, pumping, polished aggregate, and the like. At least one of: the one or more artificial intelligence models and the one or more machine learning models are configured to analyse the pre-processed unified roadway data. At least one of: the one or more artificial intelligence models and the one or more machine learning models are trained on an annotated dataset of regional roadways. The annotated dataset comprises the regional roadways with one or more characteristics that may comprise at least one of: varying defect types, environmental challenges, construction materials, and the like. At least one of: the one or more artificial intelligence models and the one or more machine learning models are capable of detecting the one or more roadway defects that may be obscured by adverse conditions such as weather, lighting, and road surface characteristics. By leveraging at least one of: the one or more artificial intelligence models and the one or more machine learning models, the data analytics engine 120 ensures precise roadway defect identification, thereby providing actionable insights for roadway maintenance and safety. This capability to detect hidden roadway defects under challenging conditions underscores the adaptability and effectiveness of the system 102 in diverse real-world scenarios.
The one or more artificial intelligence models and the one or more machine learning models are trained through a Single Shot Multibox Detector (SSD) framework with a MobileNetV2 backbone on the annotated dataset. The SSD framework works by discretizing an output space of bounding boxes into a set of default boxes over different aspect ratios and scales, which is then processed through a deep neural network. This configuration enables the detection of the one or more roadway defects in real-time, achieving approximately 85 percent accuracy on a test set of 1,000 images drawn from the one or more images.
The SSD framework is configured to enhance the ability of at least one of: the one or more artificial intelligence models and the one or more machine learning models to generalize across different roadway conditions. The MobileNetV2 backbone is a lightweight feature extractor, which is integrated with additional layers configured for the object detection and prediction. A training phase of at least one of: the one or more artificial intelligence models and the one or more machine learning models employs 5,000 images, with hyperparameters such as a learning rate of 0.001, a batch size of 16, 50 epochs, and the like. The learning rate, set at 0.001, ensures a balanced progression of a training process, avoiding both premature convergence and prolonged computation. The batch size of 16 is employed, facilitating efficient memory usage and ensuring a robust gradient estimation during training iterations. At least one of: the one or more artificial intelligence models and the one or more machine learning models undergo 50 epochs, allowing for multiple passes over the datasets to refine weights and biases for improved roadway defect detection.
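A hedged sketch of such a training setup is given below. torchvision ships an SSDLite detector with a MobileNetV3 backbone, which is used here as a close stand-in for the SSD framework with the MobileNetV2 backbone described above; the class count and the synthetic dataset are assumptions introduced solely to make the sketch self-contained.

```python
# Illustrative SSD training loop with the disclosed hyperparameters
# (learning rate 0.001, batch size 16, 50 epochs, Adam optimizer).
# torchvision's SSDLite/MobileNetV3 detector stands in for SSD/MobileNetV2.
import torch
from torch.utils.data import DataLoader
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

NUM_CLASSES = 14  # 13 defect categories + background (assumed)

class SyntheticRoadDefects(torch.utils.data.Dataset):
    """Stand-in dataset: random images with one dummy defect box each."""
    def __len__(self):
        return 64
    def __getitem__(self, idx):
        image = torch.rand(3, 320, 320)
        target = {"boxes": torch.tensor([[40.0, 40.0, 120.0, 120.0]]),
                  "labels": torch.tensor([1])}
        return image, target

model = ssdlite320_mobilenet_v3_large(num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loader = DataLoader(SyntheticRoadDefects(), batch_size=16, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

model.train()
for epoch in range(50):
    for images, targets in loader:
        losses = model(list(images), list(targets))  # detection loss dict
        loss = sum(losses.values())                  # combined loss function
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```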
The system 102 also employs at least one of: an Adam optimizer and a combined loss function. At least one of: the Adam optimizer and the combined loss function are configured to optimize at least one of: the one or more artificial intelligence models and the one or more machine learning models. After the training phase, the system 102 assesses the performance of at least one of: the one or more artificial intelligence models and the one or more machine learning models using a mean average precision (mAP) and an Intersection over Union (IoU) with a threshold of 0.5. The mAP provides a comprehensive evaluation of the precision and recall of at least one of: the one or more artificial intelligence models and the one or more machine learning models across the one or more categories, quantifying the ability to accurately detect and classify the one or more roadway defects. Meanwhile, the IoU measures an overlap between predicted roadway defect boundaries and actual roadway defect boundaries in the test datasets. At least one of: the one or more artificial intelligence models and the one or more machine learning models demonstrate optimal performance in detecting both large defects and small defects associated with the one or more roadway defects.
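The Intersection over Union (IoU) measure used in that evaluation is straightforward to state in code; the boxes below are assumed to be axis-aligned (x1, y1, x2, y2) tuples:

```python
# Illustrative IoU between two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction overlapping ground truth with IoU >= 0.5 counts as a detection.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / (100 + 100 - 50) = 0.333...
```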
In an exemplary embodiment, the graphical user interface unit 122 is operatively connected to the data analytics engine 120. The graphical user interface unit 122 may be associated with the one or more communication devices 128. The graphical user interface unit 122 is configured to provide the visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in a colour-coded manner, with a defined threshold score based on user-defined one or more threshold parameters for analysing the roadways. The user-defined one or more threshold parameters are dynamically adjusted by the system 102 based on one or more factors to enhance roadway defect classification accuracy. The one or more factors may include, but not limited to, at least one of: real-time progression of roadway defects detected during the surveillance, historical trends of the one or more roadway defects previously observed in the analysed region, environmental conditions such as temperature and humidity levels captured by the one or more sensors, traffic volume and load data gathered during the surveillance, and the like.
The user-defined one or more threshold parameters are configurable to classify the one or more roadway defects into severity levels. The severity levels may comprise, but not constrained to, at least one of: tolerable, marginal, and intolerable, based on one or more defect attributes. The one or more defect attributes may comprise, but not restricted to, at least one of: defect size, defect frequency, defect density within a predefined roadway segment, and the like, enabling customised analysis of the roadways. This flexibility allows for a tailored and detailed analysis of the roadway conditions, facilitating informed decision-making for maintenance and safety improvements.
The data analytics engine 120 is configured to generate at least one of: roadways analysis data and artifacts data including at least one of: one or more road condition heatmaps, one or more defect distribution visualizations, and one or more predictive maintenance schedules for roadway management. The data analytics engine 120 classifies the one or more roadway defects into the one or more categories based on at least one of: the severity levels, frequency, safety risks, and proximity to critical infrastructure, using the defined threshold score provided by the one or more users. The defined threshold score may be customized according to one of: the user-defined one or more threshold parameters and guidelines from an Indian Roads Congress (IRC). The one or more roadway defects are categorized into three severity levels: tolerable, marginal, and intolerable. Each severity level is represented in the colour-coded manner (e.g., Green, Amber, Red) on the graphical user interface unit 122 for easy interpretation.
The severity levels are represented on the graphical user interface unit 122 using colour codes: red for intolerable, amber for marginal, and green for tolerable. For instance, 0 to X roadway defects in 10 meters may be classified as tolerable, X+1 to Y roadway defects as marginal and the one or more roadway defects exceeding Y as intolerable.
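A minimal sketch of that colour-coded severity mapping, with X and Y left as user-defined threshold parameters (the numeric values below are purely hypothetical):

```python
# Illustrative severity classification per 10-metre roadway segment.
# X and Y are user-defined threshold parameters; the values are hypothetical.
X, Y = 3, 8  # hypothetical defect-count thresholds per 10 m

def classify_segment(defect_count):
    """Map a defect count to (severity level, display colour)."""
    if defect_count <= X:
        return ("tolerable", "green")
    if defect_count <= Y:
        return ("marginal", "amber")
    return ("intolerable", "red")

print(classify_segment(2))   # ('tolerable', 'green')
print(classify_segment(11))  # ('intolerable', 'red')
```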
Figure 2A illustrates an exemplary block diagram representation 200A of the system 102 as shown in Figure 1 for analysing the roadways, in accordance with an embodiment of the present invention.
Figure 2B illustrates an exemplary visual representation 200B depicting the sensor box 220 associated with the system 102 positioned on the surveillance vehicle 112, in accordance with an embodiment of the present invention.
In an exemplary embodiment, the system 102 comprises the one or more hardware processors 106, the memory unit 108, and a storage unit 204. The one or more hardware processors 106, the memory unit 108, and the storage unit 204 are communicatively coupled through a system bus 202 or any similar mechanism. The system bus 202 functions as a central conduit for data transfer and communication between the one or more hardware processors 106, the memory unit 108, and the storage unit 204. The system bus 202 facilitates the efficient exchange of information and instructions, enabling the coordinated operation of the system 102. The one or more sensors are powered through a connection such as a Universal Serial Bus 3.0 (USB 3.0) connection, providing both power and sensor data transfer capabilities. However, the one or more sensors may also be powered through other connections, depending on the system requirements. The high-speed data transfer of USB 3.0 ensures quick and reliable transmission of data from the sensors to the connected system.
In an exemplary embodiment, the memory unit 108 is operatively connected to the one or more hardware processors 106. The memory unit 108 comprises the plurality of subsystems 110 in the form of the programmable instructions executable by the one or more hardware processors 106. The plurality of subsystems 110 comprises a surveillance subsystem 206, a synchronisation subsystem 208, a data processing subsystem 210, and an output subsystem 212. The one or more hardware processors 106 associated with the one or more servers 104, as used herein, mean any type of computational circuit, such as, but not limited to, the microprocessor unit, microcontroller, complex instruction set computing microprocessor unit, reduced instruction set computing microprocessor unit, very long instruction word microprocessor unit, explicitly parallel instruction computing microprocessor unit, graphics processing unit, digital signal processing unit, or any other type of processing circuit. The one or more hardware processors 106 may also include embedded controllers, such as generic or programmable logic devices or arrays, application-specific integrated circuits, single-chip computers, and the like.
The memory unit 108 may include non-transitory volatile memory and non-volatile memory. The memory unit 108 may be coupled to communicate with the one or more hardware processors 106 and may be a computer-readable storage medium. The memory unit 108 may include any suitable elements for storing data and machine-readable instructions, such as read-only memory, random access memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, a hard drive, a removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. In the present embodiment, the memory unit 108 includes the plurality of subsystems 110 stored in the form of the programmable instructions on any of the above-mentioned storage media and may be in communication with and executed by the one or more hardware processors 106.
The storage unit 204 may be a cloud storage or the one or more databases 124 such as those shown in Figure 1. The storage unit 204 may store, but not limited to, recommended course of action sequences dynamically generated by the system 102. The action sequences comprise surveillance, synchronisation, data processing, output generating, and the like. Additionally, the storage unit 204 may retain previous action sequences for comparison and future reference, enabling continuous refinement of the system 102 over time. The storage unit 204 may be any kind of database such as, but not limited to, relational databases, dedicated databases, dynamic databases, monetized databases, scalable databases, cloud databases, distributed databases, any other databases, and a combination thereof.
In an exemplary embodiment, the surveillance subsystem 206 is configured to receive the raw sensor data from at least one of: the one or more sensors 214 and the sensor box 220 in real-time during roadway surveillance. The surveillance subsystem 206 continuously monitors the roadways. This enables immediate detection and evaluation of one or more potential roadway defects and conditions.
In an exemplary embodiment, the synchronisation subsystem 208 is configured to perform at least one of: the temporal synchronisation and spatial synchronisation on the obtained at least one of: raw sensor data and geospatial coordinate data. The synchronisation subsystem 208 generates the unified roadway data based on the synchronisation performed.
In an exemplary embodiment, the data processing subsystem 210 is integrated with the data analytics engine 120. The data processing subsystem 210 is configured to pre-process the unified roadway data through at least one of: the resizing, the noise reduction, the normalization, and the data augmentation techniques through the GPU 116 for high-performance pre-processing of the unified roadway data. By leveraging the GPU 116, the system 102 ensures rapid processing of large volumes of the unified roadway data in real-time, which is essential for dynamic environments such as roadway monitoring. The pre-processed data is then employed for in-depth analysis by at least one of: the one or more artificial intelligence models and the one or more machine learning models.
At least one of: the one or more artificial intelligence models and the one or more machine learning models are trained on the annotated dataset of the regional roadways to detect and identify the one or more roadway defects. At least one of: the one or more artificial intelligence models and the one or more machine learning models are configured to handle challenging conditions, such as adverse weather and low-light environments, which may obscure the one or more roadway defects. The data processing subsystem 210 then classifies the detected one or more roadway defects into the one or more categories based on at least one of: the defect severity, the frequency of occurrence, the potential safety risks, the proximity to critical infrastructure, and the like.
In an exemplary embodiment, the output subsystem 212 is configured with the graphical user interface unit 122. The output subsystem 212 is configured to provide the visual representation of each detected roadway defect within each category. The one or more roadway defects are displayed in the color-coded format, corresponding to the severity levels based on the user-defined one or more threshold parameters. The defined threshold score, which may be set by the one or more users, determines the categorization of the one or more roadway defects.
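A minimal sketch of such threshold-driven colour coding follows; the green/amber/red palette and the default thresholds are assumptions, as the disclosure only requires a colour-coded rendering governed by the user-defined one or more threshold parameters.

```python
def colour_for(score: float, thresholds=(0.4, 0.7)) -> str:
    """Map a defect score against user-defined thresholds to a display colour.

    `thresholds` holds the user-defined (low, high) cut-offs; the hex
    palette below is an illustrative choice, not part of the disclosure.
    """
    low, high = thresholds
    if score >= high:
        return "#d62728"   # red: intolerable
    if score >= low:
        return "#ff7f0e"   # amber: marginal
    return "#2ca02c"       # green: tolerable
```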
Figure 3 illustrates an exemplary flow diagram depicting a method 300 for analysing the roadways, in accordance with an embodiment of the present invention.
According to an exemplary embodiment of the disclosure, the method 300 for analysing the roadways is disclosed. At step 302, the method 300 involves the one or more hardware processors receiving the raw sensor data through the surveillance subsystem. The raw sensor data is collected in real-time from at least one of: the one or more sensors and the sensor box. This raw sensor data acquisition occurs during the active surveillance of the roadways by the surveillance vehicle. This real-time capability enhances the responsiveness of the system to dynamic roadway conditions.
At step 304, the method 300 includes synchronising the obtained at least one of: the raw sensor data and the geospatial coordinate data through the synchronisation subsystem using the one or more hardware processors. This synchronisation integrates at least one of: the raw sensor data and the geospatial coordinate data sourced from the space-based radio-navigation unit to generate the unified roadway data. The synchronisation includes at least one of: the temporal synchronisation, which aligns the raw sensor data based on the timestamps, and the spatial synchronisation, which matches the raw sensor data with the geospatial coordinate data.
At step 306, the method 300 includes pre-processing the unified roadway data by employing the GPU associated with the one or more hardware processors. The pre-processing includes applying at least one of: the resizing to standardize data dimensions, the noise reduction to eliminate irrelevant and erroneous data, the normalization to scale at least one of: the raw sensor data and the geospatial coordinate data for consistency, and the data augmentation techniques to enhance the diversity of the datasets and improve analysis accuracy.
At step 308, the method 300 involves analysing the pre-processed unified roadway data using the data processing subsystem through the one or more hardware processors. The analysis is performed by employing at least one of: the one or more artificial intelligence models and the one or more machine learning models, which are specifically trained on the annotated dataset of the regional roadways. This training ensures that at least one of: the one or more artificial intelligence models and the one or more machine learning models are capable of accurately detecting the one or more roadway defects, even in the adverse conditions such as poor lighting, weather disruptions, and occlusions.
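By way of illustration only, an SSD-style detector may be instantiated as sketched below. torchvision ships SSDLite with a MobileNetV3 backbone, which is used here as a stand-in for the Single Shot Multibox Detector (SSD) framework with a MobileNetV2 backbone recited later in the claims; the class count of fourteen assumes the thirteen defect categories listed in the claims plus a background class.

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# Build an SSD-style detector with randomly initialised weights; in practice
# it would be trained on the annotated dataset of regional roadways.
model = ssdlite320_mobilenet_v3_large(num_classes=14)
model.eval()

# Pre-processed frames: a list of (3, H, W) float tensors in [0, 1].
frames = [torch.rand(3, 320, 320) for _ in range(2)]
with torch.no_grad():
    detections = model(frames)  # list of dicts: boxes, labels, scores
for det in detections:
    keep = det["scores"] > 0.5  # illustrative confidence cut-off
    print(det["boxes"][keep], det["labels"][keep])
```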
At step 310, the method 300 involves classifying the detected one or more roadway defects using the data processing subsystem through the one or more hardware processors. This classification categorizes the one or more roadway defects into the one or more categories based on at least one of: the defect severity, the frequency of occurrence, the potential safety risks, and the proximity to critical infrastructure.
At step 312, the method 300 entails providing the visual representation of the classified one or more roadway defects through the output subsystem, facilitated by the one or more hardware processors. Each roadway defect, categorized within the respective category, is displayed in the color-coded manner, aligning with the user-defined one or more threshold parameters and the corresponding defined threshold score. This visualization enables clear and intuitive analysis of the roadway conditions, supporting decision-making processes for effective maintenance and resource allocation.
Numerous advantages of the present disclosure may be apparent from the discussion above. In accordance with the present disclosure, the system for analysing the roadways is disclosed. The raw sensor data fusion from the one or more sensors assists in mitigating false positive detections of the one or more roadway defects, thereby minimizing the generation of erroneous deformity information by the system. This augmentation of the perception process results in a substantial expansion of the perceptual capabilities of the system and of its ability to discern a broader range of deformities. The one or more cameras provide multifaceted functionality, encompassing target recognition, navigation, environmental mapping, localization, motion analysis, and defect detection, and act as a foundational component of the system. Further, through the synchronisation of the data streams from the one or more sensors, a comprehension of the surrounding environment is achievable. This holistic understanding finds application in diverse scenarios, including object detection, tracking, and scene reconstruction. The synchronised roadway data serves as a cornerstone for enhancing the performance and reliability of the system in dynamic and intricate environments.
While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of the processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
CLAIMS:
I/We claim:
1. A system (102) for analysing roadways, comprising:
a surveillance vehicle (112) equipped with at least one of: one or more sensors (214) and a sensor box (220), configured to traverse the roadways to generate raw sensor data through at least one of: the one or more sensors (214) and the sensor box (220);
a space-based radio-navigation unit (114) operatively positioned in the surveillance vehicle (112), configured to generate geospatial coordinate data associated with the surveillance vehicle (112);
a graphics processing unit (GPU) (116) operatively connected to the sensor box (220) and the space-based radio-navigation unit (114), configured to pre-process the raw sensor data and the geospatial coordinate data through at least one of: resizing, noise reduction, normalization, and data augmentation techniques;
a synchronization unit (118) operatively connected to the graphics processing unit (GPU) (116), configured to perform at least one of: a temporal synchronisation and a spatial synchronisation of obtained at least one of: the raw sensor data and the geospatial coordinate data for generating unified roadway data;
a data analytics engine (120) operatively connected to the synchronization unit (118), configured to:
analyse the pre-processed unified roadway data through at least one of: one or more artificial intelligence models and one or more machine learning models trained on an annotated dataset of regional roadways, to detect one or more roadway defects obscured in adverse conditions; and
classify the detected one or more roadway defects into one or more categories based on at least one of: defect severity, frequency of occurrence, potential safety risks, and proximity to critical infrastructure; and
a graphical user interface unit (122) operatively connected to the data analytics engine (120), configured to provide a visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in a colour-coded manner, with a defined threshold score based on user-defined one or more threshold parameters for analysing the roadways.
2. The system (102) as claimed in claim 1, wherein the one or more sensors (214) in the sensor box (220) comprise at least one of: one or more complementary metal oxide semiconductor (CMOS) sensors, one or more depth sensors, one or more light detection and ranging (LiDAR) sensors (214c), one or more inertial measurement unit (IMU) sensors, one or more ultrasonic sensors (214b), one or more radio detection and ranging (RaDAR) sensors, one or more gyroscope sensors, one or more accelerometer sensors, one or more sound navigation and ranging (SoNAR) sensors (214d), ground-penetrating radar (GPR), and one or more red-green-blue (RGB) sensors (214a).
3. The system (102) as claimed in claim 1, wherein the synchronization unit (118) is configured to:
align the obtained raw sensor data from the one or more sensors (214) based on timestamps obtained from the space-based radio-navigation unit (114) for temporal synchronisation; and
synchronise the raw sensor data with the geospatial coordinate data to perform the spatial synchronisation during traversal of the surveillance vehicle (112).
4. The system (102) as claimed in claim 1, wherein the synchronization unit (118) is configured to compute distance parameters between consecutive data frames in the raw sensor data captured by the one or more sensors (214) to identify and track a roadway defect progression.
5. The system (102) as claimed in claim 1, wherein at least one of: the one or more artificial intelligence models and the one or more machine learning models are trained through a Single Shot Multibox Detector (SSD) framework with a MobileNetV2 backbone on the annotated dataset,
wherein the annotated dataset comprises the regional roadways with one or more characteristics comprising at least one of: varying defect types, environmental challenges, and construction materials.
6. The system (102) as claimed in claim 1, wherein the data analytics engine (120) is configured to generate at least one of: roadways analysis data and artifacts data including at least one of: one or more road condition heatmaps, one or more defect distribution visualizations, and one or more predictive maintenance schedules for roadway management.
7. The system (102) as claimed in claim 1, wherein the one or more categories comprise at least one of: potholes, cracks, rutting, shoving, surface deformities, alligator cracking, edge breaks, ravelling, road sags, bleeding, corrugations, pumping, and polished aggregate.
8. The system (102) as claimed in claim 1, wherein the user-defined one or more threshold parameters are configurable to classify the one or more roadway defects into severity levels comprising at least one of: tolerable, marginal, and intolerable, based on one or more defect attributes,
the one or more defect attributes comprise at least one of: defect size, defect frequency, and defect density within a predefined roadway segment, enabling customised analysis of the roadways.
9. The system (102) as claimed in claim 1, wherein the user-defined one or more threshold parameters are dynamically adjusted based on at least one of: real-time roadway defect progression, historical defect trends in the analysed region, environmental conditions detected by the one or more sensors (214), and traffic volume and load data captured during surveillance.
10. A system (102) for analysing roadways, comprising:
one or more hardware processors (106); and
a memory unit (108) coupled to the one or more hardware processors (106), wherein the memory unit (108) comprises a plurality of subsystems (110) in the form of programmable instructions executable by the one or more hardware processors (106), and wherein the plurality of subsystems (110) comprises:
a surveillance subsystem (206) configured to receive raw sensor data from at least one of: one or more sensors (214) and a sensor box (220) in real-time during surveillance of the roadways by a surveillance vehicle (112);
a synchronisation subsystem (208) configured to perform at least one of: a temporal synchronisation and a spatial synchronisation of obtained at least one of: the raw sensor data and geospatial coordinate data from a space-based radio-navigation unit (114) for generating unified roadway data;
a data processing subsystem (210) configured with a data analytics engine (120) to:
pre-process the unified roadway data through at least one of: resizing, noise reduction, normalization, and data augmentation techniques through a graphics processing unit (GPU) (116);
analyse the pre-processed unified roadway data through at least one of: one or more artificial intelligence models and one or more machine learning models trained on an annotated dataset of regional roadways, to detect one or more roadway defects obscured in adverse conditions; and
classify the detected one or more roadway defects into one or more categories based on at least one of: defect severity, frequency of occurrence, potential safety risks, and proximity to critical infrastructure; and
an output subsystem (212) configured with a graphical user interface unit (122) to provide a visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in a colour-coded manner, with a defined threshold score based on user-defined one or more threshold parameters for analysing the roadways.
11. A method (300) for analysing roadways, comprising:
receiving (302), by one or more hardware processors (106) through a surveillance subsystem (206), raw sensor data from at least one of: one or more sensors (214) and a sensor box (220) in real-time during surveillance of the roadways by a surveillance vehicle (112);
synchronising (304), by the one or more hardware processors (106) through a synchronisation subsystem (208), obtained at least one of: the raw sensor data and geospatial coordinate data from a space-based radio-navigation unit (114) for generating unified roadway data,
wherein synchronising comprises at least one of: a temporal synchronisation and a spatial synchronisation;
pre-processing (306), by the one or more hardware processors (106) through a graphics processing unit (GPU) (116), the unified roadway data through at least one of: resizing, noise reduction, normalization, and data augmentation techniques;
analysing (308), by the one or more hardware processors (106) through a data processing subsystem (210), the pre-processed unified roadway data through at least one of: one or more artificial intelligence models and one or more machine learning models trained on an annotated dataset of regional roadways, to detect one or more roadway defects obscured in adverse conditions;
classifying (310), by the one or more hardware processors (106) through the data processing subsystem (210), the detected one or more roadway defects into one or more categories based on at least one of: defect severity, frequency of occurrence, potential safety risks, and proximity to critical infrastructure; and
providing (312), by the one or more hardware processors (106) through an output subsystem (212), a visual representation of each roadway defect of the one or more roadway defects within each category of the one or more categories in a colour-coded manner, with a defined threshold score based on user-defined one or more threshold parameters to analyse the roadways.
Dated this 23rd day of January 2025
Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
Agent for Applicant