Abstract: The present invention discloses a computer-implemented system and a method for performing geospatial roadway surveys. The system (102) obtains first survey data from databases (118). The system (102) determines predefined trigger conditions. The system (102) triggers a surveillance vehicle (112), sensors (114), and a space-based navigation unit (116) to generate second survey data. The system (102) stitches between the first visual data and second visual data, and the first sensor data and second sensor data. The system (102) extracts features from the first survey data and the second survey data. The system (102) aligns the features. The system (102) processes frames to maintain optimal visual quality of the first visual data, the second visual data, the first sensor data, and the second sensor data. The system (102) performs a comparative analysis between the first survey data and the second survey data. The system (102) generates comprehensive reports and visual representations. Figure 1
EARLIEST PRIORITY DATE:
This application claims priority from a provisional patent application filed in India having Patent Application No. 202311079934, filed on January 24, 2024, and titled “SYSTEM FOR CONDUCTING GEOSPATIAL ROADWAY RESURVEYS AND METHOD THEREOF”.
FIELD OF INVENTION
Embodiments of the present invention relate to road surveying and maintenance systems, and more particularly relate to a computer-implemented system and a computer-implemented method for performing one or more geospatial roadway resurveys by a surveillance vehicle, thereby ensuring efficient and autonomous renovation of one or more roadway deformities.
BACKGROUND
Recognizing the dynamic nature of road conditions, there is a growing need for resurveys and continuous monitoring to ensure the effectiveness of maintenance efforts over time. The resurveys involve periodic re-evaluation of the road conditions to track changes, identify new defects, and assess previously repaired sections. Effective road maintenance is key to ensuring the safety, longevity, and efficiency of road infrastructure. Timely identification and repair of road defects, including potholes, cracks, and surface wear, are essential to prevent accidents and minimize maintenance costs.
Conventional road surveying systems are widely employed to inspect the road conditions. The conventional road surveying systems include visual inspections, manual surveys, and data collection through one or more sensors and one or more cameras. While serving as a foundation for maintaining the road infrastructure, the conventional road surveying systems may not provide real-time, comprehensive data, potentially limiting assessment of the road conditions.
The ability to acquire and fuse data related to the road defects from multiple sources faces challenges related to data compatibility, format differences, and data quality. Inconsistent and incomplete data from the multiple sources may lead to inaccuracies in defect identification and assessment.
Traditional road surveying methods are time-consuming, sometimes taking weeks or even months to complete surveys of large areas before the data is ready for analysis and use. The traditional road surveying methods also tend to rely heavily on one or more users for measurements and data collection, introducing potential sources of inaccuracy due to human error. This may lead to errors in construction and development projects for the road infrastructure.
Managing and organizing the vast amounts of data collected during the conventional road surveys may be cumbersome and inefficient. Storing, retrieving, and sharing paper-based and manual records leads to delays and errors. Furthermore, environmental factors such as weather conditions, seasonal changes, and variations in vegetation density may affect the accuracy of the traditional road surveying methods, introducing inconsistencies and limiting the reliability of collected data.
The integration of real-time data acquisition and advanced data analysis techniques has the potential to revolutionize how the road conditions are assessed and maintained. Such advancements may contribute to safer and more durable road infrastructure. However, current systems struggle to provide this level of integration and real-time analysis.
In existing technology, a system for sensing and managing pothole locations and pothole characteristics is disclosed. The system is capable of collecting, integrating, and scrutinizing data related to pothole detection from a plurality of origins, to identify the potholes that necessitate one of: maintenance and repair. Moreover, the system is also configured to generate and disseminate regular reports containing data regarding pothole repairs, which are utilized by road authorities. However, ensuring that the system is compatible with the multiple data sources and technologies is a limitation. The multiple data sources use different formats and protocols, requiring significant integration efforts. Further, the system accumulates surplus data about roadway anomalies. Hence, managing and processing large volumes of the surplus data efficiently is challenging.
There is a growing need for systems that may address these challenges by combining efficient data collection, accurate analysis, and timely reporting of the road conditions. Such systems may potentially improve the speed and accuracy of the road surveys, reduce reliance on manual processes, and enhance the overall management of the road infrastructure maintenance and improvement efforts.
SUMMARY
This summary is provided to introduce a selection of concepts, in a simple manner, which are further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
In order to overcome the above deficiencies of the prior art, the present disclosure solves the technical problem by providing a computer-implemented method for performing one or more geospatial roadway surveys.
In accordance with an embodiment of the present invention, the computer-implemented method for performing the one or more geospatial roadway surveys is disclosed. In the first step, the computer-implemented method includes obtaining, by one or more hardware processors through a data-obtaining subsystem, first survey data. The first survey data comprises at least one of: first sensor data, first visual data, and first geospatial coordinates data from one or more databases. The first survey data comprises at least one of: one or more time stamps, one or more roadway conditions, predefined survey routes data, geofencing boundaries information, priority zone information, and one or more environmental conditions, stored in the one or more databases.
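Purely as a non-limiting illustration, the first survey data described above may be modelled as a record structure such as the following sketch; every field name and value is an illustrative assumption, not a requirement of the method:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SurveyRecord:
    """Illustrative first-survey record; field names are assumptions."""
    timestamp: float                              # one or more time stamps
    geo: Tuple[float, float]                      # geospatial coordinates (lat, lon)
    sensor_readings: Dict[str, float] = field(default_factory=dict)  # first sensor data
    frame_ids: List[str] = field(default_factory=list)               # first visual data references
    route_id: str = ""                            # predefined survey routes data
    priority_zone: bool = False                   # priority zone information
    environment: Dict[str, float] = field(default_factory=dict)      # environmental conditions

# Hypothetical record for one surveyed roadway segment.
record = SurveyRecord(timestamp=1706083200.0, geo=(12.9716, 77.5946),
                      sensor_readings={"lidar_depth_mm": 32.5},
                      route_id="NH-44-seg-7")
```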
In the next step, the computer-implemented method includes determining, by the one or more hardware processors through a trigger analysis subsystem, one or more predefined trigger conditions based on at least one of: one or more primary triggering parameters and one or more secondary triggering parameters, associated with the first survey data. The one or more primary triggering parameters comprise at least one of: distances from first surveyed points, time elapsed since a first survey, entry into predefined geospatial priority zones, and seasonal requirements. The one or more secondary triggering parameters comprise at least one of: the one or more environmental conditions, traffic conditions, time of day, roadway type, and a roadway classification.
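By way of a non-limiting illustration, the trigger determination described above may be sketched as follows; the parameter names and the 50-metre and 180-day thresholds are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class TriggerContext:
    """Illustrative subset of the primary and secondary triggering parameters."""
    distance_from_first_survey_m: float   # primary: distance from first surveyed points
    days_since_first_survey: int          # primary: time elapsed since the first survey
    in_priority_zone: bool                # primary: entry into a predefined priority zone
    weather_ok: bool                      # secondary: environmental conditions permit
    traffic_ok: bool                      # secondary: traffic conditions permit

def should_trigger_resurvey(ctx, max_distance_m=50.0, max_age_days=180):
    """Fire when any primary parameter is met AND the secondary
    parameters (environment, traffic) permit surveying."""
    primary = (ctx.distance_from_first_survey_m <= max_distance_m
               or ctx.days_since_first_survey >= max_age_days
               or ctx.in_priority_zone)
    secondary = ctx.weather_ok and ctx.traffic_ok
    return primary and secondary

# Vehicle inside a priority zone with conditions permitting: trigger fires.
ctx = TriggerContext(120.0, 30, True, True, True)
```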
In the next step, the computer-implemented method includes triggering, by the one or more hardware processors through a survey equipment activation subsystem, at least one of: a surveillance vehicle, one or more sensors, and a space-based navigation unit to generate second survey data based on compliance with the one or more predefined trigger conditions.
In the next step, the computer-implemented method includes stitching, by the one or more hardware processors through a data stitching subsystem, between at least one of: the first visual data and second visual data associated with the second survey data, and the first sensor data and second sensor data associated with the second survey data, to generate a unified dataset synchronised with the first geospatial coordinates data and second geospatial coordinates data. The stitching of the first visual data and the second visual data comprises performing at least one of: extraction of one or more frames at predefined rates, lens distortion correction, resolution matching, exposure normalisation, and noise reduction. The stitching of the first visual data and the second visual data with the first geospatial coordinates data and the second geospatial coordinates data, respectively, is achieved through at least one of: space-based navigation unit alignment, timestamp-based synchronisation, and sensor-based positional metadata integration. The computer-implemented method includes processing, by the one or more hardware processors through a data pre-processing subsystem, the second geospatial coordinates data, comprising: a) buffering the real-time obtained geospatial coordinates data to process signal interruptions, b) matching the obtained second geospatial coordinates data with the predefined survey routes data stored in the one or more databases, and c) determining one or more survey zones based on the geofencing boundaries information.
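A minimal, non-limiting sketch of the timestamp-based synchronisation and buffering described above; the 0.5-second matching tolerance is an illustrative assumption:

```python
import bisect

def synchronize_by_timestamp(first, second, tolerance_s=0.5):
    """Pair each second-survey timestamp with the nearest first-survey
    timestamp; samples without a close match are buffered, mimicking the
    handling of signal interruptions described above."""
    first_ts = sorted(first)
    pairs, buffered = [], []
    for t in second:
        i = bisect.bisect_left(first_ts, t)
        # The nearest neighbour is the element just before or just at i.
        candidates = first_ts[i - 1:i] + first_ts[i:i + 1] if i > 0 else first_ts[i:i + 1]
        best = min(candidates, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= tolerance_s:
            pairs.append((best, t))
        else:
            buffered.append(t)
    return pairs, buffered

# 1.1 s matches the 1.0 s sample; 5.0 s has no counterpart and is buffered.
pairs, buffered = synchronize_by_timestamp([0.0, 1.0, 2.0], [1.1, 5.0])
```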
In the next step, the computer-implemented method includes extracting, by the one or more hardware processors through the data stitching subsystem, one or more features from the first survey data and the second survey data through at least one of: spatial context analysis and one or more feature detection models. Extracting the one or more features from the first survey data and the second survey data comprises implementing the one or more feature detection models including at least one of: Iterative Closest Point (ICP) models, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Simultaneous Localization and Mapping (SLAM) techniques, Harris Corner Detection, and Edge Detection using a Canny model.
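The feature detection models listed above are, in practice, typically invoked through established libraries (OpenCV, for example, provides SIFT and Harris corner detectors). Purely as a non-limiting sketch, the gradient-magnitude computation on which Canny edge detection is built may be illustrated in plain Python; Gaussian smoothing, non-maximum suppression, and hysteresis thresholding are omitted:

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude over a 2D intensity grid:
    the gradient stage underlying Canny edge detection (border cells
    are left at zero in this sketch)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge yields a strong response along the boundary.
img = [[0, 0, 10, 10]] * 4
edges = gradient_magnitude(img)
```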
In the next step, the computer-implemented method includes aligning, by the one or more hardware processors through a data alignment subsystem, the one or more features based on at least one of: one or more geospatial alignment procedures, one or more feature matching techniques, and a homography estimation procedure. Aligning the one or more extracted features comprises: a) performing the geospatial alignment procedures through a Random Sample Consensus (RANSAC) model to eliminate one or more outliers, b) estimating a homography transformation matrix through the homography estimation procedure to correct perspective distortions in the one or more frames, c) altering scaling of the one or more frames, and d) compensating for rotation and alignment mismatches of the one or more frames.
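A non-limiting sketch of outlier elimination with a RANSAC model, simplified to estimate a two-dimensional translation between putative feature matches; a full pipeline would estimate the homography transformation matrix from the surviving inliers instead:

```python
import random

def ransac_translation(matches, iterations=200, inlier_tol=1.0, seed=0):
    """RANSAC over putative matches ((x1, y1), (x2, y2)): repeatedly
    hypothesize a translation from one match and keep the hypothesis
    supported by the most inliers, discarding outlier correspondences."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) <= inlier_tol
                   and abs((m[1][1] - m[0][1]) - dy) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (dx, dy), inliers
    return best_t, best_inliers

# Four matches share the true shift (5, 0); one is a gross outlier.
matches = [((0, 0), (5, 0)), ((1, 1), (6, 1)), ((2, 0), (7, 0)),
           ((3, 2), (8, 2)), ((0, 0), (40, 40))]
shift, inliers = ransac_translation(matches)
```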
In the next step, the computer-implemented method includes processing, by the one or more hardware processors through the data alignment subsystem, the one or more frames associated with the first survey data and the second survey data through one or more computer graphics techniques along with one or more seam finding models to maintain consistent optimal visual quality of at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data. Processing the one or more frames through the one or more computer graphics techniques comprises: a) alpha blending to maintain seamless transitions between the one or more frames, b) applying the one or more seam finding models to optimise visual quality of at least one of: the first visual data and the second visual data, c) harmonizing colours associated with the one or more frames to correct illumination differences, and d) eliminating one or more ghost artifacts.
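The alpha blending and colour harmonisation steps above may be sketched, in a non-limiting manner, on one-dimensional intensity rows:

```python
def alpha_blend(row_a, row_b):
    """Linearly ramp from frame A to frame B across an overlap region,
    giving a seamless transition between the two frames."""
    n = len(row_a)
    return [((n - 1 - i) * a + i * b) / (n - 1)
            for i, (a, b) in enumerate(zip(row_a, row_b))]

def harmonize_gain(row_a, row_b):
    """Scale frame B so its mean intensity matches frame A: a minimal
    form of the illumination (colour) correction described above."""
    gain = (sum(row_a) / len(row_a)) / (sum(row_b) / len(row_b))
    return [p * gain for p in row_b]
```

In practice, such blending is applied per channel across the two-dimensional overlap; the one-dimensional form is kept here only for brevity.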
In the next step, the computer-implemented method includes performing, by the one or more hardware processors through a differential analysis subsystem, a comparative analysis between the first survey data and the second survey data to determine changes in the one or more roadway conditions based on at least one of: spatial registration of the first survey data and the second survey data, and feature matching of the first survey data and the second survey data. The comparative analysis is performed through at least one of: cloud-to-cloud distance computation procedures, surface normal analysis, volumetric change detection, a multiscale model-to-model cloud comparison (M3C2) model, photogrammetric comparison procedures, deep learning-based defect detection, texture analysis, and colour and intensity differencing, for: a) computing surface depth variations, b) identifying volumetric changes in one or more roadway deformities, c) determining defect growth across a roadway surface, and d) analysing deterioration rate trends.
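A non-limiting sketch of the depth differencing underlying such a comparative analysis; the cell area and noise floor are illustrative assumptions, and a production system would typically operate on registered point clouds (for example through an M3C2 implementation) rather than toy grids:

```python
def compare_surveys(depth_first, depth_second, cell_area_m2=0.01, noise_mm=1.0):
    """Cell-by-cell differencing of two co-registered roadway depth grids
    (in mm): reports surface depth variation, volumetric change of the
    deformities, and the fraction of cells that deteriorated."""
    diffs = [b - a for row_a, row_b in zip(depth_first, depth_second)
             for a, b in zip(row_a, row_b)]
    grown = [d for d in diffs if d > noise_mm]       # ignore sensor noise
    volume_mm3 = sum(grown) * cell_area_m2 * 1e6     # mm depth * m^2 -> mm^3
    return {
        "max_depth_increase_mm": max(diffs),
        "volumetric_change_mm3": volume_mm3,
        "deteriorated_fraction": len(grown) / len(diffs),
    }

# A pothole deepens between the first and second surveys.
stats = compare_surveys([[0, 0], [0, 2]], [[0, 0], [3, 5]])
```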
In the next step, the computer-implemented method includes generating, by the one or more hardware processors through an output generation subsystem, at least one of: rapidly deteriorating areas, trend analysis data, and one or more recommendations for roadway survey scheduling in one or more color-coded differential maps highlighting at least one of: one or more changes between the first survey data and the second survey data, overlays marking new defects, and one or more statistical summaries of the one or more roadway conditions. The one or more statistical summaries of the one or more roadway conditions comprise at least one of: a total number of detected one or more roadway deformities, average defect growth rates, optimal depth changes, deterioration trends, and the priority zone information.
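The colour-coded differential mapping and statistical summaries above may be sketched as follows; the colour band thresholds are illustrative assumptions, not values from the specification:

```python
def colour_code(growth_rate_mm_per_month):
    """Map a defect growth rate to a differential-map colour band."""
    if growth_rate_mm_per_month < 0.5:
        return "green"   # stable section
    if growth_rate_mm_per_month < 2.0:
        return "amber"   # monitor on the next scheduled survey
    return "red"         # rapidly deteriorating; prioritise resurvey

def summarise(defects):
    """Statistical summary: total defect count and average growth rate."""
    rates = [d["growth_mm_per_month"] for d in defects]
    return {"total_defects": len(defects),
            "avg_growth_mm_per_month": sum(rates) / len(rates) if rates else 0.0}
```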
In accordance with an embodiment of the present invention, a computer-implemented system for performing the one or more geospatial roadway surveys is disclosed. The computer-implemented system comprises the one or more hardware processors and a memory unit. The memory unit is coupled to the one or more hardware processors, wherein the memory unit comprises a plurality of subsystems in form of programmable instructions executable by the one or more hardware processors. The plurality of subsystems comprises the data-obtaining subsystem, the trigger analysis subsystem, the survey equipment activation subsystem, the data stitching subsystem, the data alignment subsystem, the differential analysis subsystem, and the output generation subsystem.
Yet in another embodiment, the data-obtaining subsystem is configured to obtain the first survey data. The first survey data comprises at least one of: the first sensor data, the first visual data, and the first geospatial coordinates data from the one or more databases. Yet in another embodiment, the trigger analysis subsystem is configured to determine the one or more predefined trigger conditions based on at least one of: the one or more primary triggering parameters and the one or more secondary triggering parameters, associated with the first survey data.
Yet in another embodiment, the survey equipment activation subsystem is configured to trigger at least one of: the surveillance vehicle, the one or more sensors, and the space-based navigation unit to generate the second survey data based on compliance with the one or more predefined trigger conditions. Yet in another embodiment, the data stitching subsystem is configured to: a) stitch between at least one of: the first visual data and second visual data associated with the second survey data, and the first sensor data and second sensor data associated with the second survey data, to generate the unified dataset synchronised with the first geospatial coordinates data and the second geospatial coordinates data, and b) extract the one or more features from the first survey data and the second survey data through at least one of: the spatial context analysis and the one or more feature detection models.
Yet in another embodiment, the data alignment subsystem is configured to: a) align the one or more features based on at least one of: the one or more geospatial alignment procedures, the one or more feature matching techniques, and the homography estimation procedure and b) process the one or more frames associated with the first survey data and the second survey data through the one or more computer graphics techniques along with the one or more seam finding models to maintain the consistent optimal visual quality of at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data.
Yet in another embodiment, the differential analysis subsystem is configured to perform the comparative analysis between the first survey data and the second survey data to determine changes in the one or more roadway conditions based on at least one of: the spatial registration of the first survey data and the second survey data, and feature matching of the first survey data and the second survey data.
Yet in another embodiment, the output generation subsystem is configured to generate at least one of: the rapidly deteriorating areas, the trend analysis data, and the one or more recommendations for roadway survey scheduling in the one or more color-coded differential maps highlighting at least one of: the one or more changes between the first survey data and the second survey data, the overlays marking new defects, and the one or more statistical summaries of the one or more roadway conditions.
To further clarify the advantages and features of the present invention, a more particular description of the invention will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the invention and are therefore not to be considered limiting in scope. The invention will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
Figure 1 illustrates an exemplary block diagram representation of a network architecture implementing a computer-implemented system for performing one or more geospatial roadway surveys, in accordance with an embodiment of the present invention;
Figure 2A illustrates an exemplary block diagram representation of the computer-implemented system as shown in Figure 1 for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention;
Figures 2B-2E illustrate exemplary flow chart representations depicting the computer-implemented system for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention; and
Figure 3 illustrates an exemplary flow diagram depicting a computer-implemented method for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, the method steps, equipment, and parameters used herein may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more components, compounds, and ingredients preceded by "comprises... a" do not, without more constraints, preclude the existence of other components, compounds, ingredients, or additional components. Appearances of the phrases "in an embodiment", "in another embodiment", and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present invention relate to a computer-implemented system for performing one or more geospatial roadway surveys.
Figure 1 illustrates an exemplary block diagram representation of a network architecture 100 implementing the computer-implemented system 102 for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention.
According to an exemplary embodiment of the disclosure, the computer-implemented system 102 (hereinafter referred to as the system 102) for performing the one or more geospatial roadway surveys is disclosed. The network architecture 100 may include the system 102, one or more databases 118, and one or more communication devices 122. The system 102, the one or more databases 118, and the one or more communication devices 122 may be communicatively coupled via one or more communication networks 120, ensuring seamless data transmission, processing, and decision-making. The system 102 acts as a central processing unit within the network architecture 100, responsible for performing the one or more geospatial roadway surveys.
In an exemplary embodiment, the system 102 comprises one or more servers 104. The one or more servers 104 may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, or other suitable hardware. The one or more servers 104 comprises one or more hardware processors 106 and a memory unit 108. The memory unit 108 is operatively connected to the one or more hardware processors 106. The memory unit 108 comprises programmable instructions in the form of a plurality of subsystems 110, configured to be executed by the one or more hardware processors 106.
In an exemplary embodiment, the one or more hardware processors 106 may include, for example, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate data or signals based on operational instructions. Among other capabilities, the one or more hardware processors 106 may fetch and execute the programmable instructions in the memory unit 108 operationally coupled with the system 102 for performing tasks such as data processing, input/output processing, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being or that may be performed on data. The one or more hardware processors 106 are high-performance processors capable of handling large volumes of data and complex computations. The one or more hardware processors 106 may be, but not limited to, at least one of: multi-core central processing units (CPU), graphics processing units (GPUs), and the like, that enhance an ability of the system 102 to process real-time data from one or more sources simultaneously.
In an exemplary embodiment, the one or more databases 118 may be configured to store and manage data related to various aspects of the system 102. The one or more databases 118 may store data associated with at least one of, but not limited to, roadways, scheduled inspection of the roadways, scheduled resurvey of the roadways, the one or more roadway deformities, routes, width of the roadways, turns, crossroads, markings, signages, side vegetation, any other information necessary for the functionality and optimization of the system 102, and the like. The one or more databases 118 may include different types of databases such as, but not limited to, relational databases (e.g., Structured Query Language (SQL) databases such as PostgreSQL and Oracle® databases), non-Structured Query Language (NoSQL) databases (e.g., MongoDB, Cassandra), time-series databases (e.g., InfluxDB), an OpenSearch database, object storage systems (e.g., Amazon® S3), and the like.
In an exemplary embodiment, the one or more communication devices 122 are configured to enable one or more users to interact with the system 102. The one or more communication devices 122 may be digital devices, computing devices, and/or networks. The one or more communication devices 122 may include, but not limited to, a mobile device, a smartphone, a personal digital assistant (PDA), a tablet computer, a phablet computer, a wearable computing device, a virtual reality/augmented reality (VR/AR) device, a laptop, a desktop, and the like.
In an exemplary embodiment, the one or more communication networks 120 may be, but not limited to, a wired communication network and/or a wireless communication network, a local area network (LAN), a wide area network (WAN), a Wireless Local Area Network (WLAN), a metropolitan area network (MAN), a telephone network, such as the Public Switched Telephone Network (PSTN) or a cellular network, an intranet, the Internet, a fibre optic network, a satellite network, a cloud computing network, a combination of networks, and the like. The wired communication network may comprise, but not limited to, at least one of: Ethernet connections, Fiber Optics, Power Line Communications (PLCs), Serial Communications, Coaxial Cables, Quantum Communication, Advanced Fiber Optics, Hybrid Networks, and the like. The wireless communication network may comprise, but not limited to, at least one of: wireless fidelity (wi-fi), cellular networks (including fourth generation (4G) technologies and fifth generation (5G) technologies), Bluetooth®, ZigBee®, long-range wide area network (LoRaWAN), satellite communication, radio frequency identification (RFID), 6G (sixth generation) networks, advanced IoT protocols, mesh networks, non-terrestrial networks (NTNs), near field communication (NFC), and the like.
The system 102 integrates with a surveillance vehicle 112 that further comprises at least one of: a space-based navigation unit 116, one or more sensors 114, and the like. The one or more roadway conditions are analysed by the system 102 based on one or more roadway deformities including, but not limited to, at least one of: potholes, cracks, surface wear, rutting, shoving, and the like. The surveillance vehicle 112 may include, but not limited to, at least one of: a four-wheeler automobile, a two-wheeler automobile, an unmanned aerial vehicle (UAV), and the like. The one or more sensors 114 may comprise, but not limited to, at least one of: Light Detection and Ranging (LIDAR) 222 (as shown in Figure 2C), radar 226 (as shown in Figure 2C), one or more ultrasonic sensors 228 (as shown in Figure 2C), one or more environmental sensors, one or more Inertial Measurement Units (IMUs) 230 (as shown in Figure 2C), one or more laser sensors, one or more three-dimensional (3D) cameras 224 (as shown in Figure 2C), and other measurement devices to capture the one or more roadway conditions.
In an exemplary embodiment, the system 102 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. The system 102 may be implemented in hardware or a suitable combination of hardware and software.
Though few components and the plurality of subsystems 110 are disclosed in Figure 1, there may be additional components and subsystems which are not shown, such as, but not limited to, ports, routers, repeaters, firewall devices, network devices, the one or more databases 118, network attached storage devices, assets, machinery, instruments, facility equipment, emergency management devices, image capturing devices, any other devices, and combinations thereof. The person skilled in the art should not limit the scope to the components/subsystems shown in Figure 1. Although Figure 1 illustrates the system 102 and the one or more communication devices 122 connected to the one or more databases 118, one skilled in the art can envision that the system 102 and the one or more communication devices 122 may be connected to several user devices located at various locations and to several databases via the one or more communication networks 120.
Those of ordinary skill in the art will appreciate that the hardware depicted in Figure 1 may vary for particular implementations. For example, other peripheral devices, such as an optical disk drive and the like, the local area network (LAN), the wide area network (WAN), a wireless (e.g., wireless-fidelity (Wi-Fi)) adapter, a graphics adapter, a disk controller, and an input/output (I/O) adapter, may also be used in addition to or in place of the hardware depicted. The depicted example is provided for explanation only and is not meant to imply architectural limitations concerning the present disclosure.
Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not being depicted or described herein. Instead, only so much of the system 102 as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the system 102 may conform to any of the various current implementations and practices that were known in the art.
Figure 2A illustrates an exemplary block diagram representation 200A of the system 102 as shown in Figure 1 for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention; and
Figures 2B-2E illustrate exemplary flow chart representations (200B, 200C, 200D, 200E) depicting the system 102 for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention.
In an exemplary embodiment, the system 102 comprises the one or more servers 104, the memory unit 108, and a storage unit 204. The one or more hardware processors 106, the memory unit 108, and the storage unit 204 are communicatively coupled through a system bus 202 or any similar mechanism. The system bus 202 functions as a central conduit for data transfer and communication between the one or more hardware processors 106, the memory unit 108, and the storage unit 204. The system bus 202 facilitates the efficient exchange of information and instructions, enabling the coordinated operation of the system 102.
In an exemplary embodiment, the memory unit 108 is operatively connected to the one or more hardware processors 106. The memory unit 108 comprises the plurality of subsystems 110 in the form of the programmable instructions executable by the one or more hardware processors 106. The plurality of subsystems 110 comprises a data-obtaining subsystem 206, a trigger analysis subsystem 208, a survey equipment activation subsystem 210, a data stitching subsystem 212, a data pre-processing subsystem 214, a data alignment subsystem 216, a differential analysis subsystem 218, and an output generation subsystem 220. The one or more hardware processors 106 associated within the one or more servers 104, as used herein, means any type of computational circuit, such as, but not limited to, the microprocessor unit, microcontroller, complex instruction set computing microprocessor unit, reduced instruction set computing microprocessor unit, very long instruction word microprocessor unit, explicitly parallel instruction computing microprocessor unit, graphics processing unit, digital signal processing unit, or any other type of processing circuit. The one or more hardware processors 106 may also include embedded controllers, such as generic or programmable logic devices or arrays, application-specific integrated circuits, single-chip computers, and the like.
The memory unit 108 may be a non-transitory volatile memory unit and a non-volatile memory unit. The memory unit 108 may be coupled to communicate with the one or more hardware processors 106, such as a computer-readable storage medium. The memory unit 108 may include any suitable elements for storing data and machine-readable instructions, such as read-only memory, random access memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, a hard drive, a removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. In the present embodiment, the memory unit 108 includes the plurality of subsystems 110 stored in the form of the programmable instructions on any of the above-mentioned storage media and may be in communication with and executed by the one or more hardware processors 106.
The storage unit 204 may be a cloud storage or the one or more databases 118 such as those shown in Figure 1. The storage unit 204 may store, but not limited to, recommended course of action sequences dynamically generated by the system 102. The action sequences comprise data-obtaining, trigger analysis, survey equipment activation, data stitching, data alignment, differential analysis, output generation, and the like. Additionally, the storage unit 204 may retain previous action sequences for comparison and future reference, enabling continuous refinement of the system 102 over time. The storage unit 204 may be any kind of database such as, but not limited to, relational databases, dedicated databases, dynamic databases, monetized databases, scalable databases, cloud databases, distributed databases, any other databases, and a combination thereof.
In an exemplary embodiment, the data-obtaining subsystem 206 is configured to obtain first survey data from the one or more databases 118. The first survey data may comprise, but not restricted to, at least one of: first sensor data, first visual data, first geospatial coordinates data, and the like. The first sensor data is obtained from the one or more sensors 114. The first visual data is high-definition one or more images and one or more videos of the roadways obtained through at least one of: the one or more 3D cameras 224 associated with the surveillance vehicle 112, and the like. The first geospatial coordinates data is location-specific information acquired from the space-based navigation unit 116 to ensure precise mapping of the surveyed area. The space-based navigation unit 116 may be, but not limited to, at least one of: a Global Positioning System (GPS) 116a, a Global Navigation Satellite System (GNSS), Galileo, Apple® Maps, Here WeGo, NavIC (Navigation with Indian Constellation), Inertial Navigation Systems (INS), and the like. The space-based navigation unit 116 is configured to streamline the process of revisiting and inspecting specific sections of the roadways, ensuring an in-depth analysis of the one or more roadway conditions.
The first survey data is generated at the initial survey of the roadway through the surveillance vehicle 112 equipped with one or more sensors and data acquisition systems. The first survey data serves as a baseline dataset for subsequent surveys, enabling the detection of changes in roadway conditions. The first survey data is stored in one or more databases 118 and processed by the data-obtaining subsystem 206 to analyse initial roadway conditions, identify priority areas, and define triggering parameters for future surveys. The first survey data is at least one of: collected in real time and retrieved from the one or more databases 118. The first survey data may comprise, but not restricted to, at least one of: one or more time stamps, the one or more roadway conditions, predefined survey routes data, geofencing boundaries information, priority zone information, one or more environmental conditions, and the like, stored in a survey history database 118a associated with the one or more databases 118. Each data point includes metadata indicating the one or more timestamps, ensuring temporal accuracy in the analysis. The one or more roadway conditions are information about surface quality, defects, cracks, potholes, and other roadway characteristics. The predefined survey routes are specific routes designated for the one or more geospatial roadway surveys, ensuring that all priority areas are covered. The geofencing boundaries are details of virtual perimeters set around a survey area to focus on specific zones of interest. The priority zone information is identification of high-importance areas, such as frequently used highways and accident-prone zones, requiring focused attention. The one or more environmental conditions provide data on weather, lighting, and other external factors that may influence survey results and the one or more roadway conditions.
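For illustration, a baseline record of the kind described above may be sketched as a simple data structure. The field names below are illustrative assumptions for a single data point with its timestamp, coordinates, route, and priority-zone metadata; they are not terms fixed by the present disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class SurveyRecord:
    """Hypothetical baseline (first survey) data point; names are illustrative."""
    timestamp: float            # acquisition time, e.g. epoch seconds
    latitude: float             # geospatial coordinates of the data point
    longitude: float
    roadway_condition: str      # e.g. "crack", "pothole", "nominal"
    route_id: str               # predefined survey route identifier
    priority_zone: bool = False # inside a geofenced priority zone?
    environment: dict = field(default_factory=dict)  # weather, lighting, etc.

# One baseline data point stored for later comparison with a re-survey.
baseline = SurveyRecord(1706054400.0, 12.9716, 77.5946, "nominal", "RT-07")
```

Such records would be stored in the survey history database 118a and retrieved by the data-obtaining subsystem 206 for baseline comparison.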
In an exemplary embodiment, the trigger analysis subsystem 208 is configured to analyse the first survey data to determine one or more predefined trigger conditions based on at least one of: one or more primary triggering parameters, one or more secondary triggering parameters, and the like, associated with the first survey data. The one or more primary triggering parameters are directly linked to progression and scheduling of the one or more geospatial roadway surveys. The one or more primary triggering parameters may comprise, but not restricted to, at least one of: distances (for instance, 50 meters to 100 meters) from first surveyed points, time elapsed since a first survey, entry into predefined geospatial priority zones, seasonal requirements, and the like. The distance from the first surveyed points identifies whether additional one or more geospatial roadway surveys are required based on the distance covered since a last surveyed location. The one or more predefined trigger conditions are determined based on the time elapsed since the first survey, ensuring regular updates and monitoring. The predefined geospatial priority zones are specific zones, marked for higher priority due to critical infrastructure and safety concerns. The seasonal variations such as monsoon and winter, may necessitate additional one or more geospatial roadway surveys to assess weather-related impacts on roadways. The one or more secondary triggering parameters consider contextual and external conditions that may influence survey scheduling and execution. The one or more secondary triggering parameters may comprise, but not constrained to, at least one of: the one or more environmental conditions, traffic conditions, time of day, roadway type, a roadway classification, and the like. One or more environmental factors such as weather, temperature, and humidity are evaluated to determine the impact of the one or more environmental conditions on the one or more roadway conditions. 
The one or more environmental conditions are also monitored using one or more external databases (e.g., OpenWeather) to determine the impact on the one or more roadway conditions. The traffic conditions may include congestion levels and traffic patterns that are analyzed to schedule the one or more geospatial roadway surveys during optimal times. The one or more geospatial roadway surveys are triggered based on at least one of: daytime conditions and nighttime conditions. Different types of roadways (e.g., highways, urban streets, and rural roads) are classified to tailor survey priorities. The roadways are categorized based on usage, condition, and importance to determine whether the roadways meet predefined thresholds for survey triggers. Also, the one or more geospatial roadway surveys are categorized based on severity classification. The severity classification identifies and prioritizes areas with critical defects and rapid deterioration, enabling timely maintenance and resource allocation.
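As a rough sketch, the primary and secondary triggering parameters described above might be combined in a single decision function. Apart from the 50 to 100 meter distance window mentioned earlier, the thresholds below (90-day interval, 0.8 congestion cutoff) are assumed values, not values from the present disclosure.

```python
def should_trigger_resurvey(distance_m, days_elapsed, in_priority_zone,
                            monsoon_season, congestion_level):
    """Hypothetical combination of triggering parameters.

    Primary parameters: distance from the first surveyed point, time elapsed,
    entry into a predefined priority zone, seasonal requirement.
    Secondary parameter: defer the survey under heavy traffic congestion.
    """
    primary = (50 <= distance_m <= 100      # distance window from the description
               or days_elapsed >= 90        # assumed re-survey interval
               or in_priority_zone
               or monsoon_season)
    secondary_ok = congestion_level < 0.8   # assumed congestion cutoff (0..1)
    return primary and secondary_ok

# 60 m past the last surveyed point, light traffic: survey is triggered.
print(should_trigger_resurvey(60, 10, False, False, 0.2))  # True
```

The same primary condition under heavy congestion (for example, a level of 0.9) would return False, deferring the survey to an optimal time as described above.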
In an exemplary embodiment, the survey equipment activation subsystem 210 is configured to trigger at least one of: the surveillance vehicle 112, the one or more sensors 114, and the space-based navigation unit 116 to generate second survey data based on compliance with the one or more predefined trigger conditions. When the one or more predefined trigger conditions are met, the survey equipment activation subsystem 210 automatically activates at least one of: the surveillance vehicle 112, the one or more sensors 114, and the space-based navigation unit 116, and starts data recording (as shown in Figure 2D) while sending one or more alerts to one or more users operating the surveillance vehicle 112. During the next survey of the roadways, the survey equipment activation subsystem 210 is configured to perform one of: starting the re-survey from the same location after a start/manual command, and starting the re-survey automatically once the surveillance vehicle 112 is positioned at the same location. A log entry is created to document events, and real-time status updates are provided for monitoring progress. To ensure safety and reliability, the system 102 conducts a speed check to verify optimal survey conditions, confirms equipment readiness, validates the one or more environmental conditions to account for potential external impacts, and monitors the overall system 102 status to prevent operational issues.
In another exemplary embodiment, the survey equipment activation subsystem 210 triggers a drone associated with the one or more drones. The drone is connected to the surveillance vehicle 112 via the one or more communication networks 120, thereby enabling aerial data collection for the one or more geospatial roadway surveys. The drone is equipped with the one or more sensors 114 and the one or more 3D cameras 224 to capture the high-resolution one or more images and the one or more videos. This integration ensures comprehensive coverage of the surveyed area, enhancing the accuracy and scope of the data collected during the one or more geospatial roadway surveys. The operation of the drone is seamlessly coordinated with the surveillance vehicle 112 to maintain synchronization in data gathering.
In an exemplary embodiment, the data stitching subsystem 212 is configured to stitch (merge) and align at least one of: the first visual data and the second visual data associated with the second survey data, and the first sensor data and second sensor data associated with the second survey data, thereby generating a unified dataset. Stitching the first visual data and the second visual data comprises performing at least one of: extraction of one or more frames at predefined rates, lens distortion correction, resolution matching, exposure normalization, noise reduction, and the like. The extraction of the one or more frames at the predefined rates is configured to maintain consistency. The predefined rate may be 30 frames per second. The predefined rate may change according to conditions and requirements. The lens distortion correction is configured to correct optical distortions to ensure an accurate field of view. The resolution matching is configured to ensure uniform resolution across the first visual data and the second visual data. The exposure normalization is configured to balance exposure levels to maintain visual clarity. The noise reduction is configured to eliminate unwanted one or more artifacts and distortions in the first visual data and the second visual data. The first sensor data and the second sensor data are integrated to ensure a comprehensive analysis of the roadways and the one or more environmental conditions.
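One of the steps above, exposure normalization, may be sketched minimally as a gain that matches each frame's mean intensity to a common target. The mean-matching approach and the target value of 128 are assumptions; the disclosure only requires that exposure levels be balanced across the two visual datasets.

```python
def normalize_exposure(frame, target_mean=128.0):
    """Exposure normalization sketch: scale pixel intensities of a grayscale
    frame (list of rows) so the frame mean matches target_mean, clipping to
    the 8-bit range. The mean-matching gain is an assumed method."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    gain = target_mean / mean if mean else 1.0
    return [[min(255.0, p * gain) for p in row] for row in frame]

dark = [[40, 60], [50, 70]]        # under-exposed 2x2 frame, mean 55
bright = normalize_exposure(dark)  # re-exposed so the mean becomes 128
```

Applying the same target to frames from both surveys keeps exposure comparable before stitching.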
The first visual data and the second visual data are stitched and synchronized with the first geospatial coordinate data and second geospatial coordinate data through at least one of: space-based navigation unit alignment, timestamp-based synchronization, sensor-based positional metadata integration, and the like. The space-based navigation unit alignment ensures spatial accuracy by aligning data points with the first geospatial coordinate data and the second geospatial coordinate data. The timestamp-based synchronization matches the first visual data, the second visual data, the first sensor data, and the second sensor data using the one or more timestamps for temporal coherence. The sensor-based positional metadata integration incorporates metadata from the one or more sensors 114 to refine spatial alignment.
In an exemplary embodiment, the data pre-processing subsystem 214 is configured to handle the second geospatial coordinates data (GPS signal reception of 50 Hertz, and the like) effectively. The data pre-processing subsystem 214 ensures that the second geospatial coordinates data is accurate, aligned with predefined routes, and utilized for identifying relevant survey zones. The data pre-processing subsystem 214 is configured to buffer the real-time obtained second geospatial coordinates data for addressing potential interruptions in signal acquisition during the real-time one or more geospatial roadway surveys. The incoming second geospatial coordinates data is temporarily stored in a buffer. This buffering ensures that transient signal losses and delays may not disrupt the continuity of the second geospatial coordinates data collection. The buffered second geospatial coordinates data is processed to fill gaps caused by signal interruptions. The data pre-processing subsystem 214 is configured to match the obtained second geospatial coordinates data with the predefined survey routes for validating and aligning the collected geospatial coordinates data with the predefined survey routes stored in a route database 118b associated with the one or more databases 118. The data pre-processing subsystem 214 is configured to determine the one or more survey zones based on geofencing boundaries information for prioritizing specific areas of interest for data collection within defined geofencing boundaries.
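One plausible way to process the buffered coordinates and fill gaps caused by signal interruptions, as described above, is linear interpolation between the nearest valid fixes. The disclosure does not prescribe a gap-filling method, so the interpolation below is an assumption; it also assumes each gap is bracketed by valid fixes in the buffer.

```python
def fill_gps_gaps(samples):
    """Fill missing fixes in buffered coordinate samples by linear
    interpolation between the nearest valid neighbours (an assumed strategy).
    samples: list of (t, lat, lon); lat/lon are None during signal loss."""
    out = list(samples)
    for i, (t, lat, lon) in enumerate(out):
        if lat is None:
            # nearest valid fixes before and after the gap
            j = max(k for k in range(i) if out[k][1] is not None)
            m = min(k for k in range(i + 1, len(out)) if out[k][1] is not None)
            t0, la0, lo0 = out[j]
            t1, la1, lo1 = out[m]
            w = (t - t0) / (t1 - t0)  # fraction of the way through the gap
            out[i] = (t, la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0))
    return out

# A one-second dropout between two valid fixes is reconstructed midway.
buf = [(0.0, 12.90, 77.50), (1.0, None, None), (2.0, 12.92, 77.52)]
fixed = fill_gps_gaps(buf)
```

The reconstructed fixes can then be matched against the predefined survey routes in the route database 118b as described above.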
In an exemplary embodiment, the data stitching subsystem 212 is configured to extract one or more features from the first survey data and the second survey data through at least one of: spatial context analysis, one or more feature detection models, and the like. At least one of: the spatial context analysis, the one or more feature detection models, and the like are configured to analyse spatial relationships and identify the one or more features in the first survey data and the second survey data. The spatial context analysis identifies patterns and relationships between objects in the first visual data, the second visual data, the first sensor data, and the second sensor data. At least one of: the predefined rates of the one or more frames, video duration, surveillance vehicle 112 speeds, and the like, are analysed to ensure temporal consistency. The one or more environmental factors, including the time of day, sky conditions, and illumination levels, are assessed to compensate for lighting variations between the one or more geospatial roadway surveys. The one or more feature detection models are implemented to extract key elements efficiently. The one or more feature detection models may comprise, but not constrained to, at least one of: Iterative Closest Point (ICP) models, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Simultaneous Localization and Mapping (SLAM) techniques, Harris Corner Detection, Edge Detection using a Canny model, and the like. The ICP models are employed for aligning and matching three-dimensional (3D) data points (LIDAR point clouds) from consecutive one or more geospatial roadway surveys.
The ICP models are configured to ensure spatial alignment of the one or more features by minimizing distance errors. The SIFT detects and describes local one or more features in the one or more images. The SIFT operates efficiently under scale, rotation, and illumination variations. The SURF is a faster alternative to the SIFT for detecting and describing the one or more features. The SURF is employed in scenarios requiring real-time processing and higher efficiency. The SLAM techniques combine feature detection with mapping data to identify spatial one or more features. The SLAM techniques are employed in dynamic 3D environments for mapping the roadways and landmarks. The Harris Corner Detection identifies corners and intersection points within the first visual data and the second visual data. The Harris Corner Detection is employed for detecting sharp changes in intensity and geometric features. The Edge Detection using the Canny model detects edges and boundaries in the first visual data and the second visual data. The Edge Detection using the Canny model highlights transitions and contrasts between different roadway features. The one or more features (spatial features) are analyzed to ensure one of: real scenes and context matches across the one or more videos. The one or more features are measurable or identifiable attributes from the roadway survey data (both the first survey data and the second survey data). The one or more features may include at least one of: visual features, sensor features, geospatial features, and the like. The visual features may comprise key points such as, but not limited to, corners, edges, and high-contrast regions (e.g., cracks, potholes, surface markings), roadway lines or patterns (e.g., lane markings, zebra crossings), texture patterns (e.g., asphalt roughness, tar patterns), and colour variations (e.g., fading paint, discoloured patches). 
The sensor features may comprise at least one of: surface elevation data (e.g., height variations from LiDAR or radar), depth features (e.g., cracks' depth or potholes' depth from ultrasonic sensors), density or material composition (e.g., asphalt density measured through radar), and the like. The geospatial features may comprise latitude/longitude coordinates, surveyed routes and boundaries, priority zones or zones of interest, and the like.
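As a minimal sketch of the edge-based visual features above, the gradient step shared by edge detectors such as the Canny model can be shown on a small grayscale frame. A full Canny pipeline adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; only the central-difference gradient and a threshold (an assumed value of 50) are shown here.

```python
def edge_map(img, threshold=50):
    """Simplified edge detection sketch: central-difference gradient
    magnitude with a fixed threshold, over the interior of a grayscale
    frame given as a list of rows. Border pixels are left unmarked."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# A sharp vertical intensity step (e.g. a crack boundary) yields edge responses.
img = [[0, 0, 200, 200]] * 4
print(sum(map(sum, edge_map(img))))  # 4 interior pixels flagged as edges
```

Key-point detectors such as SIFT, SURF, and Harris build on similar gradient computations to locate the corners and high-contrast regions listed above.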
In an exemplary embodiment, the data alignment subsystem 216 is configured to align the one or more features extracted from the first survey data and the second survey data based on at least one of: one or more geospatial alignment procedures, one or more feature matching techniques, a homography estimation procedure, and the like. The one or more geospatial alignment procedures align the one or more features based on geographic coordinates and spatial positioning. A Random Sample Consensus (RANSAC) model is configured to identify and eliminate one or more outliers from the first survey data and the second survey data. The RANSAC model retains only reliable one or more features that match, to ensure robust alignment. The RANSAC model identifies data points inconsistent with the overall alignment pattern and filters out noise. The one or more feature matching techniques match the extracted one or more features across the first survey data and the second survey data. The homography estimation procedure is configured to estimate a homography transformation matrix that maps one perspective view to another. The homography transformation matrix is a mathematical representation of the relationship between two planes used to correct perspective distortions in one or more frames associated with the first survey data and the second survey data. The homography estimation procedure addresses perspective distortions and ensures that the aligned one or more features reflect true spatial relationships. The homography estimation procedure computes precise transformation matrices to correct perspective differences, adjust scaling, and compensate for rotation variations.
Aligning the one or more extracted features involves altering the scale of the first visual data and the second visual data to ensure consistency across the first survey data and the second survey data. The altering ensures that the one or more features are represented proportionately regardless of resolution differences. Aligning the one or more extracted features involves rotation and alignment compensation. The rotation and alignment compensation detect and adjust for angular mismatches between the one or more frames. The rotation and alignment compensation aligns the one or more features even when the one or more frames are captured at varying orientations.
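The RANSAC outlier rejection described above can be sketched with a deliberately simple one-parameter model: estimating a pure translation between matched feature coordinates. A full implementation would fit a homography from four-point samples; the translation stand-in, the 1.0 tolerance, and the iteration count are assumptions chosen to keep the sketch self-contained.

```python
import random

def ransac_translation(matches, tol=1.0, iters=50, seed=0):
    """Minimal RANSAC sketch for a pure-translation alignment model.
    matches: list of (x_first, x_second) coordinates of matched features.
    Returns the consensus shift and the inlier count."""
    rng = random.Random(seed)
    best_shift, best_inliers = 0.0, []
    for _ in range(iters):
        a, b = rng.choice(matches)          # 1-point minimal sample
        shift = b - a                        # candidate model
        inliers = [(p, q) for p, q in matches if abs((q - p) - shift) <= tol]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = shift, inliers
    # refit the model on the consensus set, ignoring the rejected outliers
    best_shift = sum(q - p for p, q in best_inliers) / len(best_inliers)
    return best_shift, len(best_inliers)

# Four consistent matches shifted by about 5 units, plus one gross outlier.
pts = [(0, 5.0), (10, 15.1), (20, 24.9), (30, 35.0), (40, 90.0)]
shift, n = ransac_translation(pts)
```

The outlier at (40, 90.0) is excluded from the consensus set, mirroring how the RANSAC model filters data points inconsistent with the overall alignment pattern.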
In an exemplary embodiment, the data alignment subsystem 216 is configured to process the one or more frames using one or more computer graphics techniques along with one or more seam-finding models. The data alignment subsystem 216 ensures optimal visual quality and consistency across at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data, addressing transitions and the one or more artifacts between the one or more frames. The one or more computer graphics techniques comprise alpha blending. The alpha blending ensures seamless transitions between the one or more frames by blending overlapping areas. The alpha blending adjusts pixel transparency and intensity to create a seamless visual experience. The one or more seam-finding models are configured to identify and minimize visible seams in overlapping areas of the one or more frames. The one or more seam-finding models are configured to optimize transitions by dynamically detecting and correcting alignment issues between the one or more frames. The one or more seam-finding models are configured to optimize the visual quality of at least one of: the first visual data and the second visual data. The one or more computer graphics techniques also comprise at least one of: colour harmonization, artifact elimination, and the like. The colour harmonization balances illumination and colour variations across the one or more frames to maintain visual consistency. The colour harmonization corrects discrepancies caused by differing lighting conditions during data capture. The artifact elimination removes one or more ghost artifacts resulting from misaligned and overlapping one or more frames. The artifact elimination enhances the clarity and fidelity of the first survey data and the second survey data.
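The alpha blending described above reduces, in its simplest form, to a per-pixel weighted average of the two overlapping frames: out = alpha * A + (1 − alpha) * B. The sketch below blends two grayscale rows at a fixed alpha; in practice alpha would be ramped across the overlap so each frame dominates near its own side of the seam.

```python
def alpha_blend(frame_a, frame_b, alpha):
    """Alpha blending sketch for an overlap region between two grayscale
    frames of equal size: out = alpha * A + (1 - alpha) * B per pixel."""
    return [[alpha * pa + (1 - alpha) * pb
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

a = [[100, 100]]                # overlap pixels from the first frame
b = [[200, 200]]                # same pixels from the second frame
mid = alpha_blend(a, b, 0.5)    # equal weighting at the centre of the overlap
```

Ramping alpha from 1 to 0 across the overlap, combined with a seam-finding model that picks the least visible cut line, hides the transition between frames.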
In an exemplary embodiment, the differential analysis subsystem 218 is configured to perform a comparative analysis of the first survey data with the second survey data to determine changes in the one or more roadway conditions based on at least one of: spatial registration of the first survey data and the second survey data, and feature matching of the first survey data and the second survey data. The spatial registration aligns the first survey data and the second survey data to ensure accurate comparisons. The spatial registration corrects for positional and perspective misalignments between the first survey data and the second survey data. The feature matching identifies and correlates common one or more features across the first survey data and the second survey data to detect changes.
The comparative analysis is performed through at least one of: cloud-to-cloud distance computation procedures, surface normal analysis, volumetric change detection, a Multiscale Model-to-Model Cloud Comparison (M3C2) model, photogrammetric comparison procedures, deep learning-based defect detection, texture analysis, colour and intensity differencing, and the like. The cloud-to-cloud distance computation measures point-wise distances between the first survey data and the second survey data to identify changes in surface topography. The surface normal analysis analyses angular variations in surface orientation to detect deformation and erosion. The volumetric change detection computes changes in volume within identified areas, such as the potholes and the cracks. The volumetric change detection identifies volumetric changes in the one or more roadway deformities. The M3C2 model provides a precise multiscale comparison of point cloud data to identify structural differences. The photogrammetric comparison procedures compare high-resolution imagery to detect visual and structural changes. The deep learning-based defect detection utilizes trained one or more neural networks to automatically identify and classify the one or more roadway deformities. The texture analysis examines surface texture patterns to identify roughness, cracking, and degradation. The colour and intensity differencing detects changes in surface appearance, such as discoloration and fading, which may indicate wear.
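The cloud-to-cloud distance computation above can be sketched as, for each point of the re-survey cloud, the Euclidean distance to its nearest baseline point. The brute-force search and the 10 cm change threshold below are assumptions for illustration; a production system would use a spatial index such as a k-d tree over the LIDAR point clouds.

```python
def cloud_to_cloud_distances(first_cloud, second_cloud):
    """Brute-force cloud-to-cloud distance sketch: for each point of the
    second (re-survey) cloud, the distance to its nearest point in the
    first (baseline) cloud. Points are (x, y, z) tuples in metres."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [min(dist(q, p) for p in first_cloud) for q in second_cloud]

baseline = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
resurvey = [(0, 0, 0), (1, 0, -0.05), (2, 0, -0.30)]  # a 30 cm dip appears
d = cloud_to_cloud_distances(baseline, resurvey)
changed = [i for i, v in enumerate(d) if v > 0.1]     # flag > 10 cm changes
```

Only the third point exceeds the assumed threshold, isolating the new surface depression for the downstream volumetric and depth analysis.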
The comparative analysis of the first sensor data and the second sensor data obtained from at least one of: the radar 226 and the one or more ultrasonic sensors 228 involves one or more key techniques to detect and quantify changes in the one or more roadway conditions. The one or more key techniques may comprise, but not limited to, at least one of: a) amplitude comparison: examines variations in signal strength to identify the one or more roadway deformities such as cracks, voids, and material inconsistencies, b) time-of-flight analysis measures a travel time of signals to and from the surface of the roadways, allowing for precise calculation of surface depth changes and defect dimensions, c) signal pattern matching compares reflected signal patterns from the one or more geospatial roadway surveys to detect alterations in surface texture and geometry, thereby enabling accurate identification of at least one of: new defects and worsening defects.
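The time-of-flight analysis in item b) above rests on the round-trip relation distance = speed x time / 2, so a depth change between surveys is the half-difference of the echo times. The sketch below assumes an ultrasonic sensor in air (speed of sound about 343 m/s); a radar would use the speed of light instead.

```python
def tof_depth_change(t_first_s, t_second_s, wave_speed=343.0):
    """Time-of-flight sketch: surface depth change between two surveys from
    the difference in round-trip echo times. wave_speed is the propagation
    speed of the signal (343 m/s assumed for ultrasound in air)."""
    return wave_speed * (t_second_s - t_first_s) / 2.0

# The echo returns 0.2 ms later on the re-survey:
# the surface has dropped by roughly 3.4 cm, e.g. a deepening pothole.
delta = tof_depth_change(2.0e-3, 2.2e-3)
```

Amplitude comparison and signal pattern matching would be applied alongside this depth estimate to classify the deformity.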
The differential analysis subsystem 218 is configured to: a) compute differences in depth, such as subsidence and protrusions on the roadway surfaces, b) quantify changes in the size and shape of the one or more roadway deformities, c) track the progression of defects over time, such as widening cracks and deepening potholes, and d) analyse the rate of roadway degradation trends, enabling predictive maintenance planning.
In an exemplary embodiment, the output generation subsystem 220 is configured to generate at least one of: comprehensive reports and visual representations of roadway condition changes. At least one of: the comprehensive reports and the visual representations are displayed on a user interface associated with the one or more communication devices 122. The output generation subsystem 220 is configured to integrate data from at least one of: LIDAR point cloud analysis, image processing results, the GPS 116a and the GNSS, IMU orientation measurements, and the like. The comprehensive reports and the visual representations aid in understanding deteriorating areas, analysing trends, and scheduling future one or more geospatial roadway surveys effectively. The comprehensive reports and the visual representations may comprise, but not constrained to, at least one of: rapidly deteriorating areas, trend analysis data, and one or more recommendations for roadway survey scheduling in one or more color-coded differential maps highlighting at least one of: the one or more changes between the first survey data and the second survey data, overlays marking new defects, one or more statistical summaries of the one or more roadway conditions, and the like. The rapidly deteriorating areas identify zones with significant and accelerated roadway condition degradation. The trend analysis data provides insights into historical and predicted roadway deterioration trends. The one or more recommendations for survey scheduling suggest optimal times and locations for subsequent one or more geospatial roadway surveys based on observed changes and priorities.
The visual representations include the one or more color-coded differential maps that highlight key information including at least one of: a) the one or more changes between the first survey data and the second survey data. The one or more changes are differences such as newly formed defects and worsening of existing one or more roadway conditions, b) the overlays marking the new defects: clearly indicate newly detected one or more roadway deformities and the locations on the roadways, c) the one or more statistical summaries of the one or more roadway conditions: provide high-level summaries for quick understanding and actionable insights, and the like. The one or more statistical summaries of the one or more roadway conditions may comprise, but not limited to, at least one of: a total number of detected one or more roadway deformities, average defect growth rates, optimal depth changes, deterioration trends, and the priority zone information. The average defect growth rates quantify how quickly the one or more roadway deformities are growing, enabling prioritization of critical repairs. The optimal depth changes highlight the areas where depth variations exceed acceptable thresholds, indicating urgent issues. The deterioration trends analyse and project trends in roadway degradation to predict future conditions. The priority zone information emphasizes areas within geofenced boundaries and predefined zones requiring immediate attention. The comprehensive reports also comprise a coal tar compliance rate that is a measurement of deviation from standard specifications.
The one or more color-coded differential maps provide appropriate directions and guidance, enabling seamless collaboration with the one or more users to reach the specific roadway that requires inspection. The one or more color-coded differential maps are generated by displaying a start point of the surveillance vehicle 112 to an endpoint of the roadways that requires resurveying. This display on the one or more color-coded differential maps is achieved through the use of one or more markers and one or more polygons. The one or more markers are configured to graphically represent various attributes of the roadway including, but not limited to, at least one of: the routes, road width, turns, crossroads, markings, the signages, side vegetation, and the like. The one or more polygons are configured to graphically represent a quality condition of the roadways in a predefined segment, using distinct colours (such as red, yellow, and green). The predefined segment is a stretch in a range between 10 meters and 12 meters. For instance, a red polygon of the one or more polygons may signify a deteriorated road segment, a yellow polygon of the one or more polygons may indicate a road in a moderate condition, and a green polygon of the one or more polygons may represent an optimal quality road section. This approach allows the one or more users to quickly grasp and assess the attributes and quality of the roadways visually and intuitively, aiding in decision-making for maintenance and improvement efforts during a resurvey process.
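The red/yellow/green polygon colouring above can be sketched as a threshold rule on a per-segment condition score. The defect-density score and the 0.6/0.3 cutoffs below are hypothetical values for illustration; the disclosure specifies only the colour semantics, not the scoring.

```python
def segment_colour(defect_density):
    """Hypothetical colour rule for a 10-12 m roadway segment polygon.
    defect_density: assumed condition score in [0, 1]; thresholds are
    illustrative, not values from the specification."""
    if defect_density >= 0.6:
        return "red"      # deteriorated road segment
    if defect_density >= 0.3:
        return "yellow"   # road in a moderate condition
    return "green"        # optimal quality road section

# Three consecutive segments along a surveyed route.
print([segment_colour(s) for s in (0.8, 0.4, 0.1)])  # ['red', 'yellow', 'green']
```

Each coloured polygon would then be drawn on the differential map over its 10 to 12 meter stretch between the survey start point and endpoint.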
The one or more statistical summaries are presented with at least one of: quantified values, cartography representation, a digital twin, and the like. The digital twin assists in illustrating a timeline chart, where the timeline chart depicts distance along an x-axis and a range of the first survey data along a y-axis. By visualizing data in this manner, the digital twin enables a comprehensive overview of the one or more roadway conditions, facilitating informed decision-making during the resurvey process.
The visual representations may also comprise one or more graphs (as shown in Figure 2B). The one or more graphs may include, but not limited to, at least one of: a line chart depicting surface density trends, a bar chart displaying roadway defect distribution, interactive tooltips for detailed data inspection, a responsive configuration that adapts to different screen sizes, and the like.
The system 102 enhances the visual representations by incorporating specific indicators for roadway analysis. A surface type is differentiated using distinct colours (e.g., black for asphalt, brown for gravel) to provide clear material identification. Condition ratings are represented with one of: shading gradients and icons, indicating the quality of the roadways, such as "optimal", "moderate", and "suboptimal". A traffic volume is visualized using one of: symbols and markers that reflect varying levels of traffic, providing insights into roadway usage and potential wear patterns.
The comprehensive reports include side-by-side comparisons, overlay visualizations, and highlighted differences, all subject to rigorous quality control measures including alignment accuracy verification, blending quality assessment, and GPS correlation checks. The system 102 employs confidence scoring against a predefined threshold to assign reliability metrics to detected changes, ensuring that only high-confidence results are considered for analysis and decision-making.
As shown in Figure 2E, a video processing method may begin with input videos V1(t) and V2(t) associated with the one or more videos. The input videos may undergo preprocessing at a rate of 30 frames per second (fps). The preprocessing may be performed using a transformation represented by Equation 1.
Equation 1:
I(x, y) = α·I0(x, y) + β
where I(x, y) represents a preprocessed image, I0(x, y) represents an original image associated with the one or more images, and α and β are parameters that may be adjusted to control contrast and brightness, respectively. In some cases, the preprocessing may perform noise reduction on the input videos.
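By way of illustration only, the linear transform of Equation 1 may be sketched with NumPy. The α and β values used below are illustrative, not parameters prescribed by the method:

```python
import numpy as np

# Sketch of Equation 1, I(x, y) = α·I0(x, y) + β.
# α scales contrast, β shifts brightness; the values here are illustrative.

def adjust_brightness_contrast(image: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Apply the linear transform of Equation 1 and clip to the 8-bit range."""
    out = alpha * image.astype(np.float64) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((4, 4), 100, dtype=np.uint8)   # stand-in for one video frame
brightened = adjust_brightness_contrast(frame, alpha=1.2, beta=10)
# every pixel: 1.2 * 100 + 10 = 130
```

Clipping to [0, 255] keeps the result a valid 8-bit frame after the transform.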
Following preprocessing, the video processing method may perform feature detection using the SIFT. The SIFT may utilize a scale space function L(x, y, σ) and a Gaussian kernel G(x, y, σ). A difference of Gaussian (DoG) calculation may be employed to identify one or more key features in the preprocessed images.
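By way of illustration only, the DoG calculation may be sketched in one dimension with NumPy; the σ values are illustrative, and this is a sketch of the DoG response alone, not the full SIFT detector:

```python
import numpy as np

# Illustrative 1-D difference-of-Gaussian (DoG) response.
# The sigma pair below is an assumption for the example.

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur_1d(signal: np.ndarray, sigma: float) -> np.ndarray:
    """Smooth a 1-D signal with a Gaussian of scale sigma."""
    k = gaussian_kernel(sigma, radius=3 * int(np.ceil(sigma)))
    return np.convolve(signal, k, mode="same")

signal = np.zeros(21)
signal[10] = 1.0                                  # an impulse "feature"
dog = blur_1d(signal, 1.0) - blur_1d(signal, 1.6) # DoG = L(sigma1) - L(sigma2)
# extrema of the DoG response mark candidate keypoint locations
```

The DoG peak coincides with the impulse, which is why extrema of the response are used as keypoint candidates.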
The video processing method may then proceed to feature matching. In the feature matching, detected one or more features may be matched using a distance metric represented by Equation 2.
Equation 2:
d = ||f1 - f2||₂
where f1 and f2 represent feature vectors. A Random Sample Consensus (RANSAC) model may be applied to reject one or more outliers by minimizing the reprojection error shown in Equation 3.
Equation 3:
Σi ||p1i - H·p2i||²
where p1i and p2i represent corresponding points in the two images, and H represents the homography transformation matrix. The homography estimation procedure may be performed as part of the video processing method. The homography may be represented by the homography transformation matrix (a 3×3 matrix H) with elements shown in Matrix 1.
Matrix 1:
H = | h11  h12  h13 |
    | h21  h22  h23 |
    | h31  h32  h33 |
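By way of illustration only, the matching distance of Equation 2 and the application of the 3×3 homography of Matrix 1 may be sketched with NumPy. The descriptor values and the translation-only H below are illustrative:

```python
import numpy as np

# Illustrative sketches of Equation 2 (matching distance) and of applying
# the homography H of Matrix 1 to a point in homogeneous coordinates.

def match_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """d = ||f1 - f2||_2 between two feature descriptors (Equation 2)."""
    return float(np.linalg.norm(f1 - f2))

def apply_homography(H: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Map (x, y) through H and divide by the homogeneous coordinate w."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# A pure translation is the simplest valid homography; values are illustrative.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
p = apply_homography(H, np.array([10.0, 10.0]))   # -> (15.0, 8.0)
```

In a RANSAC loop, candidate H matrices would be scored by the reprojection error of Equation 3 over all matched point pairs.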
The homography transformation matrix may be employed to align and transform the one or more images for further processing. Further, frame blending is performed in the video processing method using Equation 4.
Equation 4:
B(x, y) = α·I1(x, y) + (1 - α)·I2(x, y)
where B(x, y) represents a blended image, I1(x, y) and I2(x, y) represent aligned images, and α is a blending parameter in the range [0, 1].
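By way of illustration only, the frame blending of Equation 4 may be sketched with NumPy; the image values and α below are illustrative:

```python
import numpy as np

# Sketch of Equation 4: B(x, y) = α·I1(x, y) + (1 - α)·I2(x, y).

def blend(i1: np.ndarray, i2: np.ndarray, alpha: float) -> np.ndarray:
    """Alpha-blend two aligned frames; alpha must lie in [0, 1]."""
    return alpha * i1 + (1.0 - alpha) * i2

a = np.full((2, 2), 100.0)        # stand-in for aligned frame I1
b = np.full((2, 2), 200.0)        # stand-in for aligned frame I2
blended = blend(a, b, alpha=0.25) # 0.25*100 + 0.75*200 = 175.0 everywhere
```

In practice α would vary per pixel near a seam so the transition between frames is invisible.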
The video processing method may conclude with quality control measurements. The quality control measurements may include the calculation of a Peak Signal-to-Noise Ratio (PSNR) using Equation 5.
Equation 5:
PSNR = 20·log10(MAX / √MSE)
where MAX represents a maximum possible pixel value and MSE represents a mean squared error.
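By way of illustration only, the PSNR of Equation 5 may be sketched with NumPy; the 8-bit MAX value of 255 is the conventional choice and is used here as an assumption:

```python
import numpy as np

# Sketch of Equation 5: PSNR = 20·log10(MAX / sqrt(MSE)).
# MAX = 255 assumes 8-bit imagery; identical images yield infinite PSNR.

def psnr(original: np.ndarray, processed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two equally sized images."""
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(max_val / np.sqrt(mse))
```

For two frames differing by one grey level everywhere, MSE is 1 and the PSNR reduces to 20·log10(255), roughly 48.13 dB.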
Additionally, a Structural Similarity Index (SSIM) may be calculated using Equation 6.
Equation 6:
SSIM = l(x, y)·c(x, y)·s(x, y)
where l(x, y), c(x, y), and s(x, y) represent luminance, contrast, and structural components, respectively. The video processing method may provide a comprehensive approach to analyzing and processing the first survey data and the second survey data captured in video format.
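By way of illustration only, the three SSIM components of Equation 6 may be sketched on whole images with NumPy. The reference formulation evaluates these components over local windows, and the stabilizing constants below follow the common convention C1 = (0.01·L)², C2 = (0.03·L)², C3 = C2/2 for dynamic range L; both are assumptions of this sketch:

```python
import numpy as np

# Global (whole-image) sketch of Equation 6: SSIM = l·c·s.
# Constants follow the common C1/C2/C3 convention for dynamic range L.

def ssim_components(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + C1) / (mx**2 + my**2 + C1)   # luminance l(x, y)
    con = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)   # contrast c(x, y)
    st = (sxy + C3) / (sx * sy + C3)                  # structure s(x, y)
    return lum * con * st

img = np.arange(16, dtype=np.float64).reshape(4, 4)
score = ssim_components(img, img)   # identical images -> 1.0
```

Identical frames score exactly 1.0, and any luminance, contrast, or structural deviation pulls the product below 1.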
Figure 3 illustrates an exemplary flow diagram depicting a computer-implemented method 300 for performing the one or more geospatial roadway surveys, in accordance with an embodiment of the present invention.
According to an exemplary embodiment of the disclosure, the computer-implemented method 300 (hereinafter referred to as the method 300) for performing the one or more geospatial roadway surveys is disclosed. At step 302, the method 300 includes obtaining the first survey data through the data-obtaining subsystem powered by the one or more hardware processors. The first survey data may include at least one of: the first sensor data (from the one or more sensors), the first visual data (high-resolution one or more images and one or more videos), and the first geospatial coordinates data (location information). The first survey data is retrieved from the one or more databases 118 that store historical and real-time survey information, ensuring comprehensive and accurate data collection for the one or more geospatial roadway surveys.
At step 304, the method 300 includes determining the one or more predefined trigger conditions by using the one or more hardware processors through the trigger analysis subsystem. The one or more predefined trigger conditions are based on at least one of: the one or more primary triggering parameters and the one or more secondary triggering parameters. This process ensures that the one or more geospatial roadway surveys are triggered under optimal and relevant conditions for accurate and timely data collection.
At step 306, the method 300 includes triggering at least one of: the surveillance vehicle, the one or more sensors, and the space-based navigation unit by utilizing the one or more hardware processors through the survey equipment activation subsystem. At least one of: the surveillance vehicle, the one or more sensors, and the space-based navigation unit are activated when the system detects compliance with the one or more predefined trigger conditions, ensuring that the second survey data is collected accurately and at the appropriate time, based on the established criteria.
At step 308, the method 300 includes stitching through the one or more hardware processors, utilizing the data stitching subsystem, to combine at least one of: the first visual data, the second visual data, the first sensor data and the second sensor data. This stitching process ensures the creation of the unified dataset, seamlessly merging multiple data streams from the one or more geospatial roadway surveys. Additionally, the system synchronizes the unified dataset with the first geospatial coordinate data and the second geospatial coordinate data, ensuring the precise spatial alignment across at least one of: the first visual data, the second visual data, the first sensor data, and the second sensor data, which allows for accurate comparison and analysis of the surveyed area.
At step 310, the method 300 includes feature extraction by utilizing the one or more hardware processors through the data stitching subsystem. The data stitching subsystem extracts the one or more features from both the first survey data and the second survey data using at least one of: the spatial context analysis and the one or more feature detection models. At least one of: the spatial context analysis and the one or more feature detection models identify and highlight one or more significant features such as road surface changes, the one or more roadway deformities, and landmarks, enabling detailed comparison and analysis of the first survey data and the second survey data.
At step 312, the method 300 includes feature alignment through the one or more hardware processors via the data alignment subsystem. This process aligns the extracted one or more features from the first survey data and the second survey data using at least one of: the one or more geospatial alignment procedures to ensure correct positioning based on the first geospatial coordinates data and the second geospatial coordinates data, the one or more feature matching techniques to match corresponding points between the first survey data and the second survey data, and the homography estimation procedure to correct for perspective distortions between the first visual data and the second visual data.
At step 314, the method 300 includes processing the one or more frames associated with both the first survey data and the second survey data using the one or more hardware processors through the data alignment subsystem. This involves applying the one or more computer graphics techniques, such as alpha blending and the one or more seam finding models, to ensure seamless transitions and the consistent optimal visual quality across at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data. The one or more computer graphics techniques and the one or more seam finding models are employed to harmonize the first visual data with the second visual data, as well as to align the first sensor data and the second sensor data, eliminating the one or more artifacts and distortions while maintaining optimal visual clarity for accurate analysis.
At step 316, the method 300 includes performing the comparative analysis between the first survey data and the second survey data through the differential analysis subsystem using the one or more hardware processors. This analysis is aimed at identifying the changes in the one or more roadway conditions by leveraging methods such as the spatial registration, which ensures accurate alignment of the first survey data and the second survey data, and feature matching, which identifies corresponding one or more features across the one or more geospatial roadway surveys.
At step 318, the method 300 includes generating at least one of: the comprehensive reports and the visual representations using the one or more hardware processors through the output generation subsystem. At least one of: the comprehensive reports and the visual representations may include, but are not constrained to, at least one of: the rapidly deteriorating areas, the trend analysis data, and the one or more recommendations for roadway survey scheduling. The results are displayed in the one or more color-coded differential maps, which highlight key information such as the one or more changes between the first survey data and the second survey data, the overlays marking the new defects, and the one or more statistical summaries of the one or more roadway conditions.
Numerous advantages of the present disclosure may be apparent from the discussion above. In accordance with the present disclosure, the system for performing the one or more geospatial roadway surveys is disclosed. The system adjusts data to account for external factors such as weather, lighting, and temperature variations that may influence measurements. Additionally, the system ensures all equipment is accurately calibrated before and during the one or more geospatial roadway surveys, maintaining precision and consistency in the collected data, comprising at least one of: the second visual data, the second sensor data, and the second geospatial coordinate data. The system ensures efficient data management by enabling real-time upload of the first survey data and the second survey data to the one or more databases 118, ensuring immediate accessibility and storage. Each data item is automatically tagged with the metadata, including the one or more timestamps and location details, for easy identification and organization. Survey quality indicators are generated to assess data reliability, while coverage tracking monitors the extent of surveyed areas, ensuring all the survey zones are comprehensively analysed.
The system may improve the safety of the road inspections by guiding the surveillance vehicle efficiently to the one or more survey zones, reducing time spent on the roadways, and minimizing the exposure of inspection personnel to traffic hazards. By optimizing an inspection process and making ample use of existing data stored in the one or more databases 118, the system leads to cost savings in road maintenance and reduces the need for frequent manual surveys. The one or more primary triggering parameters and the one or more secondary triggering parameters are adjustable to accommodate surveying specifications provided by the one or more users. In a few instances, where the one or more roadway conditions are deemed suboptimal, the system proactively recommends that the resurveying be undertaken within a predetermined time period.
While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
CLAIMS:
I/We Claim:
1. A computer-implemented method (300) for performing one or more geospatial roadway surveys, comprising:
obtaining (302), by one or more hardware processors (106) through a data-obtaining subsystem (206), first survey data comprising at least one of: first sensor data, first visual data, and first geospatial coordinates data from one or more databases (118);
determining (304), by the one or more hardware processors (106) through a trigger analysis subsystem (208), one or more predefined trigger conditions based on at least one of: one or more primary triggering parameters and one or more secondary triggering parameters, associated with the first survey data;
triggering (306), by the one or more hardware processors (106) through a survey equipment activation subsystem (210), at least one of: a surveillance vehicle (112), one or more sensors (114), and a space-based navigation unit (116) to generate second survey data based on compliance with the one or more predefined trigger conditions;
stitching (308), by the one or more hardware processors (106) through a data stitching subsystem (212), between at least one of: the first visual data and second visual data associated with the second survey data, and the first sensor data and second sensor data associated with the second survey data to generate a unified dataset, and synchronisation with the first geospatial coordinate data and second geospatial coordinate data;
extracting (310), by the one or more hardware processors (106) through the data stitching subsystem (212), one or more features from the first survey data and the second survey data through at least one of: spatial context analysis and one or more feature detection models;
aligning (312), by the one or more hardware processors (106) through a data alignment subsystem (216), the one or more features based on at least one of: one or more geospatial alignment procedures, one or more feature matching techniques, and a homography estimation procedure;
processing (314), by the one or more hardware processors (106) through the data alignment subsystem (216), one or more frames associated with the first survey data and the second survey data through one or more computer graphics techniques along with one or more seam finding models to maintain consistent optimal visual quality of at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data;
performing (316), by the one or more hardware processors (106) through a differential analysis subsystem (218), a comparative analysis between the first survey data and the second survey data to determine changes in the one or more roadway conditions based on at least one of: spatial registration of the first survey data and the second survey data, and feature matching of the first survey data and the second survey data; and
generating (318), by the one or more hardware processors (106) through an output generation subsystem (220), at least one of: rapidly deteriorating areas, trend analysis data, and one or more recommendations for roadway survey scheduling in one or more color-coded differential maps highlighting at least one of: one or more changes between the first survey data and the second survey data, overlays marking new defects, and one or more statistical summaries of the one or more roadway conditions.
2. The computer-implemented method (300) as claimed in claim 1, wherein the first survey data also comprises at least one of: one or more time stamps, the one or more roadway conditions, predefined survey routes data, geofencing boundaries information, priority zone information, and one or more environmental conditions, stored in the one or more databases (118).
3. The computer-implemented method (300) as claimed in claim 1, wherein processing, by the one or more hardware processors (106) through a data pre-processing subsystem (214), the second geospatial coordinates data comprises:
buffering the real-time obtained geospatial coordinates data to process signal interruptions;
matching the obtained second geospatial coordinates data with the predefined survey routes data stored in the one or more databases (118); and
determining one or more survey zones based on the geofencing boundaries information.
4. The computer-implemented method (300) as claimed in claim 1, wherein the one or more primary triggering parameters comprise at least one of: distances from first surveyed points, time elapsed since a first survey, entry into predefined geospatial priority zones, and seasonal requirements; and
the one or more secondary triggering parameters comprise at least one of: the one or more environmental conditions, traffic conditions, time of day, roadway type and a roadway classification.
5. The computer-implemented method (300) as claimed in claim 1, wherein stitching the first visual data and the second visual data comprises performing at least one of: extraction of the one or more frames at predefined rates, lens distortion correction, resolution matching, exposure normalisation, and noise reduction.
6. The computer-implemented method (300) as claimed in claim 5, wherein the first visual data and the second visual data are stitched with the first geospatial coordinate data and the second geospatial coordinate data, respectively, through at least one of: space-based navigation unit alignment, timestamp-based synchronization, and sensor-based positional metadata integration.
7. The computer-implemented method (300) as claimed in claim 1, wherein extracting the one or more features from the first survey data and the second survey data comprises implementing the one or more feature detection models including at least one of: Iterative Closest Point (ICP) models, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Simultaneous Localization and Mapping (SLAM) techniques, Harris Corner Detection, and Edge Detection using a Canny model.
8. The computer-implemented method (300) as claimed in claim 1, wherein aligning the one or more extracted features comprises:
performing the geospatial alignment procedures through a Random Sample Consensus (RANSAC) model to eliminate one or more outliers;
estimating a homography transformation matrix through the homography estimation procedure to correct perspective distortions in the one or more frames;
altering scaling of the one or more frames; and
compensating for rotation and alignment mismatches of the one or more frames.
9. The computer-implemented method (300) as claimed in claim 1, wherein processing the one or more frames through the one or more computer graphics techniques comprises:
alpha blending to maintain seamless transitions between the one or more frames;
applying the one or more seam finding models to optimise visual quality of at least one of: the first visual data and the second visual data;
harmonizing colours associated with the one or more frames to correct illumination differences; and
eliminating one or more ghost artifacts.
10. The computer-implemented method (300) as claimed in claim 1, wherein the comparative analysis is performed through at least one of: cloud-to-cloud distance computation procedures, surface normal analysis, volumetric change detection, a multiscale model-to-model cloud comparison (M3C2) procedure, photogrammetric comparison procedures, deep learning-based defect detection, texture analysis, and colour and intensity differencing, for:
computing surface depth variations;
identifying volumetric changes in one or more roadway deformities;
determining defect growth across a roadway surface; and
analysing the rate of deterioration trends.
11. The computer-implemented method (300) as claimed in claim 1, wherein the one or more statistical summaries of the one or more roadway conditions comprise at least one of: a total number of detected one or more roadway deformities, average defect growth rates, optimal depth changes, deterioration trends, and the priority zone information.
12. A computer-implemented system (102) for performing one or more geospatial roadway surveys, comprising:
one or more hardware processors (106); and
a memory unit (108) coupled to the one or more hardware processors (106), wherein the memory unit (108) comprises a plurality of subsystems (110) in form of programmable instructions executable by the one or more hardware processors (106), and wherein the plurality of subsystems (110) comprises:
a data-obtaining subsystem (206) configured to obtain first survey data comprising at least one of: first sensor data, first visual data, and first geospatial coordinates data from one or more databases (118);
a trigger analysis subsystem (208) configured to determine one or more predefined trigger conditions based on at least one of: one or more primary triggering parameters and one or more secondary triggering parameters, associated with the first survey data;
a survey equipment activation subsystem (210) configured to trigger at least one of: a surveillance vehicle (112), one or more sensors (114), and a space-based navigation unit (116) to generate second survey data based on compliance with the one or more predefined trigger conditions;
a data stitching subsystem (212) configured to:
stitch between at least one of: the first visual data and second visual data associated with the second survey data, and the first sensor data and second sensor data associated with the second survey data to generate a unified dataset, and synchronisation with the first geospatial coordinate data and second geospatial coordinate data; and
extract one or more features from the first survey data and the second survey data through at least one of: spatial context analysis and one or more feature detection models;
a data alignment subsystem (216) configured to:
align the one or more features based on at least one of: one or more geospatial alignment procedures, one or more feature matching techniques, and a homography estimation procedure; and
process one or more frames associated with the first survey data and the second survey data through one or more computer graphics techniques along with one or more seam finding models to maintain consistent optimal visual quality of at least one of: the first visual data and the second visual data, and the first sensor data and the second sensor data;
a differential analysis subsystem (218) configured to perform a comparative analysis between the first survey data and the second survey data to determine changes in the one or more roadway conditions based on at least one of: spatial registration of the first survey data and the second survey data, and feature matching of the first survey data and the second survey data; and
an output generation subsystem (220) configured to generate at least one of: rapidly deteriorating areas, trend analysis data, and one or more recommendations for roadway survey scheduling in one or more color-coded differential maps highlighting at least one of: one or more changes between the first survey data and the second survey data, overlays marking new defects, and one or more statistical summaries of the one or more roadway conditions.
Dated this 24th day of January, 2025
Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
Agent for Applicant